Researchers in Canada are developing a new device that determines the nutritional content of your food by analyzing video footage of you eating.
Researchers from the University of Waterloo in Canada state that “the current […] food portion estimation algorithms assume that users take images of their meals one or two times.”
“[This] can be inconvenient and fail to capture food items that are not visible from a top-down perspective, such as ingredients submerged in a stew.”
Unlike some existing tools that work only from a single photo of your plate, this AI tracking system analyzes each spoonful as it travels to your mouth, which makes it far more accurate.
It has a 4.4% error margin when calculating the amount of food you are consuming.
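The article does not say how that 4.4 per cent figure is calculated. Purely as an illustration, one common way to report such an error margin is a mean absolute percentage error over per-meal portion estimates; the numbers and the variable names below are hypothetical, not taken from the study.

```python
# Illustrative sketch only: how a portion-size error margin could be reported
# as a mean absolute percentage error (MAPE) over whole-meal estimates built
# by summing each tracked spoonful. All values here are made up.

def mean_absolute_percentage_error(estimated, actual):
    """Average of |estimate - truth| / truth, expressed as a percentage."""
    errors = [abs(e - a) / a for e, a in zip(estimated, actual)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical per-meal totals in millilitres (estimated vs. measured).
estimated_totals = [310.0, 255.0, 402.0]
measured_totals = [300.0, 270.0, 410.0]

print(f"{mean_absolute_percentage_error(estimated_totals, measured_totals):.1f}% error margin")
```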
The system cannot yet recognize the food on your spoon, but it is being trained to do so. Eventually, it should be able to identify a wide variety of foods, including ones it has not seen before.
Yuhao Chen of the University of Waterloo told New Scientist, “We’re moving toward using those big language models like ChatGPT […] to understand what is in the food or maybe ask a basic question [like] ‘is this chicken?’”
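Chen’s remark points at asking a vision-capable language model about each bite directly. As a rough sketch only, and not a description of the researchers’ actual setup, such a question could be posed to a ChatGPT-style vision model through the OpenAI Python SDK; the model name, prompt, and image file are assumptions.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Encode a single video frame (cropped to the spoon) for the API.
with open("spoon_frame.jpg", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Is the food on this spoon chicken? Answer yes or no."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```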
Separately, the World Health Organization released SARAH (Smart AI Resource Assistant for Health), an AI health promoter, just last month. It can respond to users’ questions about nutrition and health care around the clock.
When you ask questions, SARAH appears on screen as a brunette woman who alternates between smiling and frowning.
She appeared realistic when questioned about whether AI could accurately determine caloric intake.