Deep Learning/AI System Provides 100% Inspection for Food Producer
apetito is a leading food producer for the health and social care sector, supplying meals to the elderly and some of the most vulnerable individuals in society. The company was feeding more than one million people each week but was fielding complaints about missing meal components. Ensuring that the customer receives the right meal, with all the elements in the right proportions, is critical. apetito produces these meals on 14 production lines and needed to ensure that each tray meets expectations and is packed properly. The company tried different inspection techniques, such as weighing the trays to confirm all the components were present. The problem was that if one component was heavier than it should be, it could compensate for a missing component’s weight. The company also had human operators on the line performing ad hoc inspection to try to identify problems.
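The weighing shortfall is easy to see with a quick sketch. The component names, weights, and tolerance below are invented for illustration, not apetito's actual targets:

```python
# Illustrative sketch of why a total-weight check can miss a defect.
# Component weights (grams) and tolerance are hypothetical.

TARGET = {"chicken": 120, "potatoes": 150, "peas": 80}  # expected grams per component
TOLERANCE = 10  # acceptable deviation on the tray total, in grams

def tray_passes_weight_check(tray):
    """Pass if the tray's total weight is within tolerance of the target total."""
    return abs(sum(tray.values()) - sum(TARGET.values())) <= TOLERANCE

complete_tray = {"chicken": 120, "potatoes": 150, "peas": 80}
# Peas are missing, but an over-dispensed potato portion hides the shortfall.
defective_tray = {"chicken": 120, "potatoes": 230, "peas": 0}

print(tray_passes_weight_check(complete_tray))   # True
print(tray_passes_weight_check(defective_tray))  # True: passes despite missing peas
```

Both trays weigh 350 g in total, so a scale alone cannot tell them apart.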
apetito also tried its own approach to deep learning on its “sweets line,” which worked but wasn’t scalable. It employed a graduate student to develop a Raspberry Pi-based solution for that line. According to Kevin McDonagh, Operations Manager, apetito UK, this is what got the company on the journey of pursuing AI for the meals line. “We got that working, but we’ve taken it further, where we’re using it now with our meals assembly lines for missing components,” he says. “We had been using what we had, but we were always trying to be better. This all came about from a ‘scrum session’ with IT when I talked to them about the factory.” What is “this”? Neurala’s (Boston, MA, USA; www.neurala.com) Vision Inspection Automation (VIA) solution.
Steve Walsh, Vice President of Sales, Neurala, adds, “They had a graduate student build and train a model with thousands of images, and it worked for one of their products. What they realized is that it’s not scalable given the wide variety of SKUs and products because you would need thousands and thousands of images for every product, including defects or missing components. So, they came to us because part of our core value proposition is we can train models on less data, and we can do that very quickly with an easy-to-use software application where you don’t need specialized expertise.” He explains that food is a good application for deep learning. “Food generally is excellent because of the variability in the product that makes it a challenge for traditional rules-based machine vision,” he says.
The Challenge
There wasn’t a specific problem at apetito that required a solution; the goal is always providing complete meals. Rather, it was about continual improvement, in this case reaching the 100% inspection threshold. The company knew AI worked and wanted to detect errors in products coming off the line without compromising efficiency or cost. “They have a variety of dispensing technologies that dispense components into a tray after they’ve been prepared and cooked,” says Walsh. The trays would pass an inspection point, he continues, but because of a line error, a dispensing error, or the absence of a material, a tray would proceed with a missing component. “If you don’t inspect 100% of those trays, they go on to be sealed, packed, and dispatched,” says Walsh. “So, unless you inspect 100% of the meals at the moment they’re supposed to be filled, you don’t catch those errors, and conceivably you don’t catch that there was a line error downstream that you may need to remediate so you can address that in subsequent meals.”
Before the deep learning/AI system, various depositors, or even humans, would assemble the trays, dispensing components into them. The tray would then run into the tray-sealing unit, after which it is metal detected, checked, weighed, and labeled across the top. “It’s down to skill and the machine dispensing the component, but on some production lines with upwards of 52 packs per minute, sometimes you could miss those,” says McDonagh.
Positioning on the line was key, according to McDonagh. “Having it before the sealer allows us to easily fix any issues,” he says. “We were initially thinking of having two cameras before the sealer, but we’re installing four: two halfway down the line and two closer to the sealer.” Illumination also played a role in the location. “With the LED lighting that we have around the line and the close proximity, we did not have any illumination issues,” says McDonagh. “We did have a line positioning issue. We wanted to look at doing it post film seal, but you’re getting reflection off the clear plastic film, so that’s why we’ve had to keep it before the sealer.”
The system apetito uses includes four GigE Vision cameras with 8-mm lenses in waterproof housings mounted above the production line. The cameras connect to an IoT network via Ethernet, and from there the data feeds into the Neurala software.
Neurala’s VIA consists of two software programs: Inspector and Brain Builder. “Brain Builder allows users to quickly build and manage models,” says Walsh. “Inspector allows you to take those models, connect them to any GigE camera, and output via a communication protocol. When the models see a fault or a defect, we output to that communication protocol so they can code it into their PLC to take action.”
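The handoff Walsh describes can be sketched as a function that turns a model verdict into a command the PLC can act on. The verdict names, command fields, and stop-the-line policy below are assumptions for illustration; Neurala's actual output protocol and apetito's PLC program are not public:

```python
# Hypothetical sketch of the Inspector-to-PLC handoff: the vision model's
# verdict is translated into a command word the line PLC can act on.

from dataclasses import dataclass

@dataclass
class InspectionResult:
    tray_id: int
    verdict: str  # e.g. "ok", "missing_component", "anomaly" (assumed labels)

def plc_command(result: InspectionResult) -> dict:
    """Translate a model verdict into a command for the line PLC."""
    if result.verdict == "ok":
        return {"tray_id": result.tray_id, "stop_line": False, "reject": False}
    # In this sketch, any fault both flags the tray and stops the line.
    return {"tray_id": result.tray_id, "stop_line": True, "reject": True}

print(plc_command(InspectionResult(101, "missing_component")))
```

In a real deployment, the dictionary would be serialized onto whatever fieldbus or industrial protocol the PLC listens on.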
McDonagh adds, “You take the pictures, and you build it in Brain Builder, which is very easy to navigate. Once you’ve got your brain built, you can optimize it and move it across to Inspector. You just switch [Inspector] on, and it can identify that there’s something wrong with a meal. In a classification brain, you could point out that the tray is missing bacon or sausage. Or, it also can determine that it doesn’t look normal and fits into a ‘nothing I know’ category.”
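One common way to get a “nothing I know” fallback from a classifier is a confidence threshold: if no class scores high enough, the tray is flagged as unfamiliar. Neurala's actual mechanism is proprietary; the scores and threshold below are invented:

```python
# Sketch of a classification "brain" with a fallback category, using a
# simple confidence threshold (an assumption, not Neurala's method).

def classify(scores: dict, threshold: float = 0.6) -> str:
    """Return the best-scoring class, or 'nothing I know' when no class
    is confident enough (i.e., the tray doesn't look normal)."""
    best_class = max(scores, key=scores.get)
    if scores[best_class] < threshold:
        return "nothing I know"
    return best_class

# Hypothetical model scores for two tray images:
print(classify({"complete": 0.05, "missing bacon": 0.91, "missing sausage": 0.04}))
# -> missing bacon
print(classify({"complete": 0.40, "missing bacon": 0.35, "missing sausage": 0.25}))
# -> nothing I know
```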
VIA can also inspect multiple regions of interest (ROIs) during surface inspection. With the weight-based system, apetito could flag an incomplete tray, but it did not know what was missing. Inspecting multiple ROIs allows apetito to discover which components are missing and identify trends in missing components to avoid them in the future.
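Per-ROI inspection can be sketched as cropping a known region for each component and testing each crop independently. The tray layout, ROI coordinates, and the mean-intensity presence test below are toy placeholders for the real trained models:

```python
# Sketch of multi-ROI inspection: each meal component occupies a known
# region of the tray image and is checked on its own. Coordinates and
# the presence test (mean pixel intensity) are hypothetical.

import numpy as np

# (row, col, height, width) region per component; layout is invented
ROIS = {
    "main": (0, 0, 50, 60),
    "vegetable": (0, 60, 50, 40),
    "dessert": (50, 0, 50, 100),
}

def missing_components(tray_image: np.ndarray, min_mean: float = 20.0) -> list:
    """Return the names of ROIs that look empty (near-black in this toy test)."""
    missing = []
    for name, (r, c, h, w) in ROIS.items():
        roi = tray_image[r:r + h, c:c + w]
        if roi.mean() < min_mean:
            missing.append(name)
    return missing

tray = np.full((100, 100), 128, dtype=np.uint8)  # all compartments filled
tray[0:50, 60:100] = 0                           # simulate a missing vegetable
print(missing_components(tray))  # ['vegetable']
```

Unlike a single pass/fail verdict, this reports *which* compartment failed, which is what enables the trend analysis described above.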
Walsh explains that apetito can build, train, and deploy a model in 10 to 20 minutes, using very limited data, to identify whether a tray is complete based on whichever meal apetito is producing that day. “How we were able to do that is unlike most approaches to deep learning,” he says. “We don’t have just one algorithm in our product. We have our own patented technology that is designed to learn from less data and on lightweight compute, which is why we only need a CPU-powered machine to train.” Walsh says that because Neurala can train quickly with a limited set of data, it actually trains multiple models to determine programmatically which model best fits the client’s use case based on their data.
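The selection step Walsh describes amounts to fitting several candidate models on the same small dataset and keeping whichever validates best. The toy candidates below (a threshold classifier and a majority-class baseline on 1-D features) are invented stand-ins; Neurala's candidate models and scoring are proprietary:

```python
# Sketch of programmatic model selection over multiple candidates.
# Candidates and data are hypothetical placeholders.

def fit_threshold(train):
    # Place a decision threshold halfway between the two class means.
    xs0 = [x for x, y in train if y == 0]
    xs1 = [x for x, y in train if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def predict_threshold(model, x):
    return int(x > model)

def fit_majority(train):
    # Baseline: always predict the most common training class.
    ones = sum(y for _, y in train)
    return int(ones * 2 >= len(train))

def predict_majority(model, x):
    return model

def select_best_model(candidates, train, validate):
    """Fit every candidate, score it on held-out data, keep the best."""
    best_name, best_acc = None, -1.0
    for name, (fit, predict) in candidates.items():
        model = fit(train)
        correct = sum(predict(model, x) == y for x, y in validate)
        acc = correct / len(validate)
        if acc > best_acc:
            best_name, best_acc = name, acc
    return best_name, best_acc

candidates = {
    "threshold": (fit_threshold, predict_threshold),
    "majority": (fit_majority, predict_majority),
}
train = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
validate = [(1.5, 0), (8.5, 1), (2.5, 0)]
print(select_best_model(candidates, train, validate))  # ('threshold', 1.0)
```

Because each candidate trains quickly on little data, trying all of them and scoring the results stays cheap enough to run per product.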
apetito was able to build a brain in 20 to 30 minutes, says McDonagh. “We would manage it for a period of time, review at the end, and enter the score for how the brain was doing in terms of accuracy.” If the system detects a fault, the line currently stops, McDonagh says. “No one wants to have numerous line stops,” he says. “Line stops can be seen as a good or bad thing. Good: it shows you straightaway, and you’re able to deal with the issue and probably resolve it. Bad as in it affects efficiency.” Ultimately, the plan is for apetito to create a reject system that sends a defective tray off the belt and allows a human to analyze it, fix it, and place it back on the line.
apetito plans to continue expanding VIA’s use at its UK facility. “We are exploring meal rejects or label rejects,” McDonagh says. “So if something doesn’t have a label before it enters the spiral and even before it is boxed up and packaged, that’s the next area we want to explore.” Missing components are found in the tray before it’s sealed, lettered, and labeled, McDonagh explains. apetito is looking into performing inspection after a meal is packaged to make sure the label and all relevant date information are present. “The world is our oyster as for what else we can use this for,” he concludes.
Chris Mc Loone | Editor in Chief
Former Editor in Chief Chris Mc Loone joined the Vision Systems Design team as editor in chief in 2021. Chris has been in B2B media for over 25 years. During his tenure at VSD, he covered machine vision and imaging from numerous angles, including application stories, technology trends, industry news, market updates, and new products.