AI & Food'21: Proceedings of the 3rd Workshop on AIxFood
SESSION: Paper Presentations
Analyzing and Recognizing Food in Constrained and Unconstrained Environments
- Marco Buzzelli
- Gianluigi Ciocca
- Paolo Napoletano
- Raimondo Schettini
Recently, Computer Vision based image analysis techniques have attracted a lot of attention because they can be used to develop automatic dietary monitoring applications. Food recognition is quite a challenging task: food is a non-rigid object, characterized by intrinsically high inter- and intra-class variability. The proper design of a food recognition system based on Computer Vision should comprise several analysis stages. This paper reports on the most recent solutions in the field of automatic food recognition using computer vision developed at the Imaging and Vision Laboratory over the last 12 years. We present and discuss the main solutions developed and the results achieved for food localization, segmentation, recognition and analysis. Food localization and segmentation aim at identifying the regions in the image corresponding to food items, food recognition aims at labeling each food region with the identity of the depicted food, and food analysis aims at determining properties of the food such as its quantity or ingredients.
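The multi-stage pipeline described in this abstract (localization, segmentation, recognition, analysis) can be sketched as follows. This is a minimal toy illustration on a nested-list "image", with made-up stage functions and a nearest-prototype classifier; it is an assumption for clarity, not the Imaging and Vision Laboratory's actual models.

```python
# Hedged sketch of a localization -> segmentation -> recognition -> analysis
# pipeline. All names (localize, segment, recognize, analyze) and the
# prototype intensities are hypothetical illustrations.

def localize(image):
    """Return one bounding box (x0, y0, x1, y1) around all non-zero (food) pixels."""
    rows = [r for r, row in enumerate(image) if any(row)]
    cols = [c for c in range(len(image[0])) if any(row[c] for row in image)]
    if not rows:
        return None
    return (min(cols), min(rows), max(cols) + 1, max(rows) + 1)

def segment(image, box):
    """Binary mask of food pixels inside the box."""
    x0, y0, x1, y1 = box
    return [[1 if image[r][c] else 0 for c in range(x0, x1)] for r in range(y0, y1)]

# Hypothetical per-class mean intensities used as prototypes.
PROTOTYPES = {"pasta": 3.0, "salad": 1.0}

def recognize(image, box):
    """Label the region by nearest prototype to its mean food-pixel value."""
    x0, y0, x1, y1 = box
    vals = [image[r][c] for r in range(y0, y1)
            for c in range(x0, x1) if image[r][c]]
    mean = sum(vals) / len(vals)
    return min(PROTOTYPES, key=lambda k: abs(PROTOTYPES[k] - mean))

def analyze(mask):
    """Crude quantity proxy: count of food pixels in the mask."""
    return sum(sum(row) for row in mask)

image = [
    [0, 0, 0, 0],
    [0, 3, 3, 0],
    [0, 3, 3, 0],
    [0, 0, 0, 0],
]
box = localize(image)          # (1, 1, 3, 3)
mask = segment(image, box)
label = recognize(image, box)  # "pasta"
quantity = analyze(mask)       # 4
```

In a real system each stage would be a trained model (e.g. a detector, a segmentation network, a classifier), but the staged structure is the same.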
3D Mesh Reconstruction of Foods from a Single Image
- Shu Naritomi
- Keiji Yanai
Dietary calorie management has been an important topic in recent years, and various methods and applications for image-based food calorie estimation have been published in the multimedia community. Most of the existing methods of estimating food calorie amounts use 2D-based image recognition. In this extended abstract, by contrast, we introduce our work on 3D food volume estimation employing a recent DNN-based 3D mesh reconstruction technique. We performed 3D mesh reconstruction of a dish (food and plate) and a plate (without food) from a single image. We succeeded in restoring the 3D shape with high accuracy while maintaining consistency between the plate part of the estimated 3D dish and the estimated 3D plate. To achieve this, the following contributions were made in our recent work. (1) Proposal of "Hungry Networks", a new network that generates two kinds of 3D volumes from a single image. (2) Introduction of a plate consistency loss that matches the shapes of the plate parts of the two reconstructed models. (3) Creation of a new dataset of 3D food models that are 3D scans of actual foods and plates. We also conducted an experiment to infer the volume of only the food region from the difference of the two reconstructed volumes. As a result, it was shown that the introduced loss function not only matches the 3D shape of the plate, but also contributes to obtaining the volume with higher accuracy. Although there are some existing studies that consider the 3D shapes of foods, this is the first study to generate a 3D mesh volume from a single dish image. In addition, we have implemented a web-based 3D dish reconstruction system, "Pop'n Food", which enables reconstruction of 3D shapes from a single dish image in real time. The demo video of the system is available at https://youtu.be/YyIu8bL65EE.
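The idea of a plate consistency loss, and of recovering food volume as the difference between the two reconstructions, can be sketched on toy voxel occupancy grids. The mask, grids, and mean-squared formulation below are illustrative assumptions for exposition, not the paper's actual formulation over meshes.

```python
# Hedged sketch of a "plate consistency" style loss: penalize disagreement
# between the plate portion of a reconstructed dish volume and a separately
# reconstructed plate volume. Grids are plain nested lists of occupancies
# in [0, 1]; the plate-region mask is a made-up illustration.

def plate_consistency_loss(dish_voxels, plate_voxels, plate_mask):
    """Mean squared occupancy difference over voxels marked as plate."""
    total, n = 0.0, 0
    for d_row, p_row, m_row in zip(dish_voxels, plate_voxels, plate_mask):
        for d, p, m in zip(d_row, p_row, m_row):
            if m:
                total += (d - p) ** 2
                n += 1
    return total / max(n, 1)

def food_volume(dish_voxels, plate_voxels, threshold=0.5):
    """Food volume as the count of voxels occupied in the dish but not the plate."""
    count = 0
    for d_row, p_row in zip(dish_voxels, plate_voxels):
        for d, p in zip(d_row, p_row):
            if d > threshold and p <= threshold:
                count += 1
    return count

dish  = [[0.9, 0.8], [0.9, 0.1]]   # plate part plus one "food" voxel
plate = [[1.0, 0.8], [0.0, 0.0]]
mask  = [[1, 1], [0, 0]]           # which voxels belong to the plate region
loss = plate_consistency_loss(dish, plate, mask)
vol = food_volume(dish, plate)     # 1
```

The intuition carried by the abstract is that enforcing agreement on the shared plate region regularizes both reconstructions, which in turn sharpens the dish-minus-plate difference used for the food volume.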
A Generic Few-Shot Solution for Food Shelf-Life Prediction using Meta-Learning
- Harini S
- Jayita Dutta
- Manasi Patwardhan
- Parijat Deshpande
- Shirish Karande
- Beena Rai
Checking the quality of agricultural produce at every step of its supply chain is the need of the hour to reduce food wastage. Manual checking of food quality at every step can be inconsistent and time consuming. Automating food quality detection using non-invasive, imagery-based techniques requires an ample amount of annotated data to train models. Collecting such data in large quantities in a controlled lab setting is an expensive affair. Moreover, providing a point solution for every individual food item by training food-item-specific models is impractical. Thus, there is a need for a mechanism that captures the common meta-level visual degradation properties across a set of food items belonging to a specific category and uses this meta-knowledge to predict the quality of a new food item in that category with a paucity of training data. To address this challenge, as part of preliminary work, we conduct an initial set of experiments to demonstrate the applicability of the existing Model Agnostic Meta-Learning (MAML) algorithm to the fruit freshness detection task. The results indicate that for such a task, meta-learning can serve as a more generic and efficient solution than few-shot transfer-learning techniques and traditional ML based approaches that require explicit feature engineering.
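The MAML-style inner/outer loop invoked above can be illustrated on a deliberately tiny problem. The sketch below uses a first-order approximation (gradients at the adapted parameters, as in FOMAML) on a one-parameter regression; the tasks, step sizes, and problem itself are illustrative assumptions, not the paper's setup or the full second-order MAML algorithm.

```python
# Minimal first-order MAML-style loop on a toy 1-parameter problem.
# Each "task" stands in for one food item: adapt quickly on a small
# support set, evaluate on a query set, and meta-update the shared init.

def loss(theta, target):
    return (theta - target) ** 2

def grad(theta, target):
    return 2 * (theta - target)

def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.05):
    """One meta-update: a single inner gradient step per task, then an
    outer update from the post-adaptation query gradients (first-order)."""
    meta_grad = 0.0
    for support, query in tasks:                      # (support, query) targets
        adapted = theta - inner_lr * grad(theta, support)   # inner loop
        meta_grad += grad(adapted, query)                   # query gradient
    return theta - outer_lr * meta_grad / len(tasks)

theta = 0.0
# Two hypothetical "food item" tasks from the same category.
tasks = [(1.0, 1.2), (2.0, 1.8)]
for _ in range(100):
    theta = maml_step(theta, tasks)
# theta converges to an initialization from which one inner step
# fits either task's query target well.
```

The point mirrored from the abstract: instead of a per-item model, the meta-learned initialization encodes degradation structure shared across the category, so a new item needs only a few adaptation steps.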
An Integrated System for Mobile Image-Based Dietary Assessment
- Zeman Shao
- Yue Han
- Jiangpeng He
- Runyu Mao
- Janine Wright
- Deborah Kerr
- Carol Jo Boushey
- Fengqing Zhu
Accurate assessment of dietary intake requires improved tools to overcome the limitations of current methods, including user burden and measurement error. Emerging technologies such as image-based approaches using advanced machine learning techniques, coupled with widely available mobile devices, present new opportunities to improve the accuracy of dietary assessment in a way that is cost-effective, convenient and timely. However, the quality and quantity of datasets are essential for achieving good performance in automated image analysis. Building a large image dataset with high-quality groundtruth annotation is a challenging problem, especially for food images, as the associated nutrition information needs to be provided or verified by trained dietitians with domain knowledge. In this paper, we present the design and development of a mobile, image-based dietary assessment system to capture and analyze dietary intake, which has been deployed in both controlled-feeding and community-dwelling dietary studies. Our system is capable of collecting high-quality food images in naturalistic settings and providing groundtruth annotations for developing new computational approaches.