One-upping Silicon Valley’s real-life hot-dog identification app, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a neural network called Pic2Recipe that — true to its name — can look at a picture of food (pic) and then work backward to figure out the recipe (that’s the “recipe” part). Basically, it’s Shazam, only instead of telling you that song on the radio is “Despacito,” it can tell you that image is spaghetti Bolognese. Or at least, that is the idea.
The very short version: Researchers combed recipe sites to build “Recipe1M” — a database of more than a million annotated recipes, complete with ingredient information for a wide range of dishes, according to MIT News. They then used that data to train the AI to “find patterns and make connections between the food images and the corresponding ingredients and recipes.” The system has the promise, researchers say, to transform “seemingly useless photos on social media” into “valuable insight into health habits and dietary preferences.”
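For the curious, the matching idea described above — learn to place food photos and recipes in a shared embedding space, then pair a new photo with whichever recipe sits closest — can be sketched in a toy form. Everything below (the recipe names, the three-dimensional vectors, the `match_recipe` helper) is made up for illustration; the actual CSAIL system uses learned neural embeddings over the full Recipe1M corpus.

```python
import numpy as np

# Toy sketch of embedding-based recipe retrieval. In the real system,
# a neural network would produce these vectors; here they are hand-picked
# so that each image lands near its matching recipe.
recipe_names = ["spaghetti bolognese", "sushi roll", "blueberry muffin"]
recipe_vecs = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 0.8, 0.2],
    [0.1, 0.1, 0.9],
])

def match_recipe(image_vec, recipe_vecs, recipe_names):
    """Return the recipe whose embedding is most similar (cosine) to the image's."""
    image_vec = image_vec / np.linalg.norm(image_vec)
    recipes = recipe_vecs / np.linalg.norm(recipe_vecs, axis=1, keepdims=True)
    sims = recipes @ image_vec  # cosine similarity against every recipe
    return recipe_names[int(np.argmax(sims))]

# A (hypothetical) image embedding close to the "spaghetti bolognese" vector:
print(match_recipe(np.array([0.85, 0.2, 0.05]), recipe_vecs, recipe_names))
```

The article's failure cases fall out of this picture naturally: an ambiguous photo produces an embedding roughly equidistant from several recipes, so the nearest neighbor is a coin flip, and near-duplicate recipes crowd the same region of the space.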
So far, though, the results are a little iffy. While the system is apparently very good at identifying baked goods, it has trouble “identifying more ambiguous foods, like sushi rolls and smoothies.” Also, it gets confused when there are many similar recipes for the same dish, but then, who among us does not? When the Verge tested it, Pic2Recipe could not (yet) correctly identify recipes for a number of dishes — ramen, potato chips, and rice and beans were all met with “no matches.” (For what it’s worth, it did correctly identify the Verge’s hot dog.)
As the MIT team is quick to acknowledge, it's still a work in progress. But the hope is that, eventually, a more refined version of the system could have real-world applications, helping home cooks re-create restaurant food, or health-conscious eaters figure out the nutritional value of their dinner.