The days of looking at food on Instagram in ignorant bliss are coming to an end.
According to a new study from MIT's Computer Science and Artificial Intelligence Laboratory, a deep-learning AI algorithm called "Pic2Recipe" is able to retrieve the likely ingredients of a meal based on just a picture.
Researchers gathered 1,029,720 recipes and 887,706 meal images from popular cooking websites such as Allrecipes and Food.com, manually removing duplicate images as well as stray characters such as exclamation points and question marks. The result was a large database of common meals and their ingredients.
When the Pic2Recipe AI was shown an image of a meal, it was able to use the database to identify the correct ingredients 65 percent of the time. That is an improvement of roughly 14 percentage points over the 2014 Food-101 study, in which Swiss researchers created an algorithm that correctly identified a meal's ingredients from an image 50.76 percent of the time.
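At its core, this kind of system maps photos and recipes into a shared vector space and retrieves the recipe closest to a query image. The sketch below illustrates that retrieval step only; the recipe names are invented and random unit vectors stand in for the learned neural embeddings, so this is a toy illustration of the idea, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for learned embeddings: in a real system, a
# trained network maps each recipe (and each photo) into a shared
# vector space; here random unit vectors play that role.
recipe_names = ["pizza margherita", "beef chili", "greek salad"]
recipe_embeddings = rng.normal(size=(3, 16))
recipe_embeddings /= np.linalg.norm(recipe_embeddings, axis=1, keepdims=True)

def retrieve(image_embedding: np.ndarray) -> str:
    """Return the recipe whose embedding is closest by cosine similarity."""
    image_embedding = image_embedding / np.linalg.norm(image_embedding)
    scores = recipe_embeddings @ image_embedding  # cosine similarities
    return recipe_names[int(np.argmax(scores))]

# A query photo embedded near the "beef chili" recipe should retrieve it.
query = recipe_embeddings[1] + 0.05 * rng.normal(size=16)
print(retrieve(query))
```

The 65 percent figure above corresponds to how often such a nearest-neighbor lookup surfaces the right recipe for a real photo.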
Nick Hynes, an MIT CSAIL graduate student and lead author of the study, thinks that Pic2Recipe has the potential to democratize culinary and nutritional knowledge.
"It would be really amazing to someday be able to take a photo of a dish you see in a restaurant and be able to figure out exactly how you can recreate it at home," he told Motherboard. "I also imagine that people could use a tool like this to analyze their meals and determine its nutritional value. This would be particularly useful in restaurants and cafes when you don't really know what you're eating."
However, Pic2Recipe is not without its limits. It has no mechanism for understanding flavor or texture, so ingredients that look similar but taste different, such as hummus and soybean paste, can be confused. It also doesn't indicate how much of each ingredient is needed to make the meal.
Hynes also said that Pic2Recipe struggled to identify ingredients in foods like smoothies and sushi, both because their ingredients are finely chopped or blended and because such dishes appear less frequently in the database.
The CSAIL research team hopes to refine Pic2Recipe so that it can not only parse more complicated meals, but also provide preparation methods for the meals. However, Hynes said that these improvements may not come any time soon.
"Understanding recipes and their images is a tough problem that even Google research teams have difficulty with," he said. "I certainly think that [95-100 percent accuracy] is achievable, but it's going to require a much more in-depth understanding of the input."
Kaleigh Rogers contributed reporting.