AI-based technologies are rapidly learning to see, converse, calculate and create. One thing they still don’t do well, however, is measure or “feel” surfaces — a purely mechanical function.
“AI has more or less acquired the sense of sight, through advances in computer vision and object recognition,” says Stevens physics professor Yong Meng Sua. “It has not, however, yet developed a human-like sense of touch that can discern, for example, a rough sheet of newspaper paper from a smooth and glossy sheet of magazine paper.”
Until now, that is. Researchers in Stevens’ leading-edge Center for Quantum Science and Engineering (CQSE) have just demonstrated a method of giving AI the ability to feel.
Accurate metrology for medicine, manufacturing, more
Sua, working with CQSE Director Yuping Huang and doctoral candidates Daniel Tafone and Luke McEvoy ’22 M.S. ’23, devised a quantum-lab setup that pairs a photon-firing scanning laser with new AI models trained to distinguish among various surfaces as the laser images them.
“This is a marriage of AI and quantum,” explains Tafone.
In their system, reported this month in the journal Applied Optics [Vol. 63, No. 30], a specially prepared beam of light is fired in short pulses at a surface to “feel” it. Reflected, back-scattered photons return from the target carrying speckle noise, a random, grainy artifact that ordinarily degrades an image.
Speckle noise is normally considered detrimental to clear, accurate imaging. The Stevens group’s system takes the opposite approach: it detects these noise artifacts and processes them with an AI model trained to read their characteristics as useful data, allowing the system to accurately discern the topography of the object.
“We use the variation in photon counts over different illumination points across the surface,” says Tafone.
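The paper’s full processing pipeline is not described here, but the idea of turning photon-count variation into a roughness estimate can be sketched with a simple supervised model. The Python snippet below is a minimal illustration rather than the team’s actual method: the synthetic photon counts, the hand-picked statistics (speckle contrast, variance-to-mean ratio) and the random-forest regressor are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def speckle_features(counts):
    """Summarize photon counts measured at many illumination points.

    `counts` is a 1-D array of photon counts, one per scan point.
    Rougher surfaces tend to produce stronger point-to-point variation,
    so simple statistics of that variation serve as features.
    """
    mean = counts.mean()
    var = counts.var()
    contrast = np.sqrt(var) / mean   # speckle contrast
    fano = var / mean                # variance-to-mean ratio
    return np.array([mean, var, contrast, fano])

# Synthetic stand-in data (illustration only): each "sample" mimics a
# sandpaper target, with a roughness value in microns and a set of photon
# counts whose spread grows with roughness.
n_samples, n_points = 200, 256
roughness = rng.uniform(1.0, 100.0, n_samples)  # microns
counts = rng.poisson(lam=50.0, size=(n_samples, n_points)) \
       + rng.normal(0.0, roughness[:, None] * 0.3, (n_samples, n_points))

X = np.array([speckle_features(c) for c in counts])
y = roughness

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"RMSE on held-out synthetic targets: {rmse:.1f} microns")
```

The point of the sketch is the shape of the workflow: summarize the count-to-count variation at many illumination points, then let a learned model map those statistics to a roughness value in microns.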
The team used 31 industrial sandpapers with surface roughness ranging from 1 to 100 microns as experimental targets. (For comparison, an average human hair is about 100 microns thick.) Mode-locked lasers generated the light pulses aimed at the samples.
Those pulses passed through transceivers, encountered the sandpapers, then rebounded back through the system for analysis by the team’s learning model.
During early tests, the group’s method averaged a root-mean-square error (RMSE) of about 8 microns; after working with multiple samples and averaging results across them, its accuracy improved significantly to within 4 microns, comparable to the best industrial profilometer devices currently used.
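The article does not say exactly how the results were combined, but the jump from roughly 8 microns to roughly 4 microns is what one would expect from averaging a handful of independent measurements, since the error of an average of N independent readings shrinks by about a factor of the square root of N. The short Python sketch below illustrates that statistical effect under assumed numbers (a hypothetical 40-micron target and an 8-micron single-shot error); it is not the team’s analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

true_roughness = 40.0    # microns (hypothetical target)
single_shot_rmse = 8.0   # microns, roughly the early single-measurement error

# Simulate repeated independent measurements of the same target and compare
# the error of single readings against the error of their average.
n_trials, n_repeats = 10_000, 4
readings = true_roughness + rng.normal(0.0, single_shot_rmse, (n_trials, n_repeats))

rmse_single = np.sqrt(np.mean((readings[:, 0] - true_roughness) ** 2))
rmse_averaged = np.sqrt(np.mean((readings.mean(axis=1) - true_roughness) ** 2))

print(f"single measurement RMSE : {rmse_single:.1f} microns")
print(f"average of {n_repeats} readings: {rmse_averaged:.1f} microns")  # ~ 8 / sqrt(4) = 4
```

With four independent readings, the averaged estimate lands near 8 / sqrt(4) = 4 microns, which matches the scale of the reported improvement.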
“Interestingly, our system worked best for the finest-grained surfaces, such as diamond lapping film and aluminum oxide,” notes Tafone.
The new method could be useful for a variety of applications, he adds.
In skin-cancer screening, for example, human examiners often err by confusing harmless conditions with the very similar-looking but potentially fatal melanomas.
“Tiny differences in mole roughness, too small to see with the human eye but measurable with our proposed quantum system, could differentiate between those conditions,” explains Huang.
“Quantum interactions provide a wealth of information; using AI to quickly understand and process it is the next logical step.”
Manufacturing quality control of components, as well, often hinges on extremely small distances that can mean the difference between a perfect part and a tiny defect that could eventually cause a dangerous mechanical failure.
“Since LiDAR technology is already implemented widely in devices such as autonomous cars, smartphones and robots,” Huang concludes, “our method enriches their capabilities with surface property measurement at very small scales.”
Source: Science Daily