
Laser Focus: Computer Vision and Machine Learning Are Speeding Up 3D Printing

Scientists working at GE labs in upstate New York have spent decades building computer vision systems that can study diseased tissue, and hunt for microscopic cracks in machine parts and other features often invisible to the naked eye. “Computer vision can be used to find things we either can’t see or may not know to look for,” says Joseph Vinciquerra, who runs the Additive Research Lab at GE Global Research in Niskayuna, New York.

Now Vinciquerra and his team are using their insights to improve the way 3D printers work.

Even though companies like GE already print parts for jet engines, additive manufacturing is still a young field. It can take days to weeks to print large parts such as a compressor blade. If something goes wrong near the end of the process, precious machine time and money could go to waste.

The GE researchers are building a system that could speed up the process and eventually achieve “100 percent yield,” an engineer’s Nirvana where machines only produce good parts, beginning with the very first build. “We do a tremendous amount of work on additive powders to understand what characteristics lead to a good build,” says materials scientist Kate Gurnon, a member of the team. “We want to apply this automatically to the machines and, in real time, observe the dynamic behavior of the powder delivery to the build plate. In this way, we will have a better chance of getting to the 100 percent yield, faster.”

It can take days to weeks to print large metal parts. If something goes wrong near the end of the process, precious machine time and money could go to waste. Top and above image credits: Avio Aero

3D printing and other additive manufacturing methods print parts directly from a computer file. They can shape a component by fusing together thin layers of metal powder with a laser, for example. But even these highly advanced machines are prone to variability. There are a number of culprits “that can make the difference between a good build and a build that has sub-optimal properties,” Vinciquerra says. They include variation in the size of the powder particles, as well as “the complex dynamic of adding new powder layers,” which can be as thin as a human hair. “We know that things happen during this re-coating process that you cannot control mechanically,” he says. “We also know that the more we reuse powder, the more opportunities exist for that powder to change and behave differently over time.”

AI and machine learning can help. “Using artificial intelligence and machine learning, we will turn 3D printers into essentially their own inspectors,” Vinciquerra says. “By eliminating the need to inspect parts after they’re completely built, we can shave days, even weeks off the entire manufacturing process and lead to a breakthrough in productivity.”

The team starts by printing simple geometric shapes like flat bars and cylinders. They use high-resolution cameras to film every layer and record streaks, pits, divots and other patterns in the powder that are practically invisible to humans. Next, they run the samples through a powerful CT scanner and hunt for flaws.

All of the data is stored in computer memory, and a proprietary machine-learning algorithm correlates defects revealed by the scanners with powder patterns recorded on the particular layer. “The more often you do it, the smarter the system gets,” Vinciquerra says. “The computer vision alone will eventually have enough training to tell us whether we are going to have a problem.”
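The correlation step described above can be sketched as a simple supervised-learning loop: per-layer image features (here, hypothetical streak and pit counts) are labeled by whether the CT scan later found a flaw at that layer, and new layers are classified against what was learned. This is a minimal illustrative sketch, not GE’s proprietary algorithm; all feature names and values are invented.

```python
# Hedged sketch of correlating powder-bed image features with CT-found
# defects. Feature names ("streaks", "pits") and numbers are assumptions
# for illustration only.

def train_defect_model(layers):
    """Compute the average feature vector (centroid) for defective and
    clean layers. `layers` is a list of (features_dict, defect_bool)."""
    sums = {True: {}, False: {}}
    counts = {True: 0, False: 0}
    for features, defect in layers:
        counts[defect] += 1
        for name, value in features.items():
            sums[defect][name] = sums[defect].get(name, 0.0) + value
    return {
        label: {n: total / counts[label] for n, total in feats.items()}
        for label, feats in sums.items() if counts[label]
    }

def predict_defect(model, features):
    """Nearest-centroid rule: is this layer's feature vector closer to
    the average defective layer or the average clean layer?"""
    def dist(centroid):
        return sum((features[n] - centroid.get(n, 0.0)) ** 2
                   for n in features)
    return dist(model[True]) < dist(model[False])

# Toy history: camera-recorded powder patterns per layer, labeled by
# whether the CT scan revealed a defect there.
history = [
    ({"streaks": 9, "pits": 4}, True),
    ({"streaks": 8, "pits": 5}, True),
    ({"streaks": 1, "pits": 0}, False),
    ({"streaks": 2, "pits": 1}, False),
]
model = train_defect_model(history)
```

As more builds are added to `history`, the centroids sharpen, which is the sense in which “the more often you do it, the smarter the system gets.”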

“Using artificial intelligence and machine learning, we will turn 3D printers into essentially their own inspectors,” says GE’s Joe Vinciquerra. Image credit: Concept Laser

But Vinciquerra and Gurnon are going deeper. They are working with other members of the team to loop the defect-spotting ability into machine controls to make the 3D printer smarter. When the computer vision spots a familiar streak that it knows will lead to cavities, for example, the printer can automatically add more power or speed to the laser beam to adjust, or change the thickness of the next layer. “The idea is that the machine has a compensation strategy based on what the computer vision sees,” Vinciquerra says. “That’s the long-term goal here.”
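The compensation strategy Vinciquerra describes amounts to a feedback rule: a predicted problem on the layer just spread triggers an adjustment of the next layer’s process parameters. A minimal sketch, assuming hypothetical parameter names and correction factors (nothing here reflects GE’s actual machine settings):

```python
# Hedged sketch of closed-loop compensation: adjust the next layer's
# parameters based on what the computer vision predicts. All parameter
# names, units and factors are illustrative assumptions.

def compensate(params, prediction):
    """Return adjusted process parameters for the next layer.
    `params` holds laser power (W), scan speed (mm/s) and layer
    thickness (um); `prediction` is the vision system's verdict."""
    adjusted = dict(params)
    if prediction == "streak_leading_to_cavity":
        # Put more energy into the melt pool to close incipient cavities.
        adjusted["laser_power_w"] *= 1.10
    elif prediction == "uneven_recoat":
        # Spread a thinner layer so the recoater can re-level the bed.
        adjusted["layer_thickness_um"] *= 0.8
    return adjusted

baseline = {"laser_power_w": 200.0, "scan_speed_mm_s": 900.0,
            "layer_thickness_um": 40.0}
```

A clean layer (no prediction) leaves the parameters untouched, so the machine only intervenes when the vision system sees a familiar warning sign.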

One way to get there is to build a “digital twin,” a virtual representation of the production process that can compare ideal conditions with reality in real time and suggest changes to reach a flawless result. Vinciquerra says that the digital twin is “sort of a recipe card” that “represents the gold standard of how the material should be processed” in the 3D printer. “It’s telling me that if I use these parameters to build a part from a certain material, I can expect this kind of behavior in the final product,” Vinciquerra says.
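The “recipe card” idea can be pictured as a lookup of gold-standard process windows that live sensor readings are checked against, flagging any parameter that drifts outside its window. A sketch under assumed names and ranges, purely for illustration:

```python
# Hedged sketch of the digital-twin "recipe card": per-parameter
# (low, high) windows representing how the material should be processed.
# Parameter names and windows are invented for illustration.

RECIPE_CARD = {
    "laser_power_w": (190.0, 210.0),
    "melt_pool_width_um": (90.0, 120.0),
}

def check_against_twin(recipe, observed):
    """Return the parameters whose observed values fall outside the
    recipe's gold-standard window."""
    return [name for name, (lo, hi) in recipe.items()
            if not lo <= observed.get(name, lo) <= hi]
```

An empty result means the build is tracking the recipe; any flagged parameter is a cue to adjust, in the spirit of comparing ideal conditions with reality in real time.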

The next generation of the twin will gather not only computer vision data but also information from other sensors monitoring the printing process, such as the shape of the tiny pool of metal rendered molten by the laser.

Vinciquerra notes that some of GE’s most advanced 3D-printed parts for GE Aviation can take more than a week to complete and require several more hours after that for processing and validation. “If we could recognize a defect very early in a part build, we would have an opportunity to stop and start over,” Vinciquerra added. “On the other hand, if a part is more than halfway through a build, we could evaluate whether the defect could be fixed by adjusting the rest of the layers being printed. We have never had this degree of flexibility in manufacturing.”
