Food Image Dataset and Segmentation Model

Vision-based dietary assessment (VBDA) systems generally consist of three main stages: food image analysis, portion estimation, and nutrient derivation. The effectiveness of the first stage depends heavily on accurate segmentation and image recognition models and on the availability of high-quality training datasets. Food image segmentation still faces various challenges, and most existing research focuses mainly on Asian and Western food images.

Keywords: food segmentation; semantic segmentation; CamerFood10 dataset; CNN

1. Introduction

Despite significant advancements in the medical field, the prevalence of Non-Communicable Diseases (NCDs), such as cardiovascular diseases, cancers, chronic respiratory diseases, obesity, and diabetes, remains alarmingly high. According to a report from the World Health Organization (WHO) [1], in 2022, NCDs were responsible for 41 million deaths, accounting for 74% of all global deaths, with 40% of these occurring prematurely before the age of 70. This NCD epidemic not only has devastating health consequences for individuals, families, and communities, but also poses a significant burden on healthcare systems worldwide. This burden makes their prevention and control a crucial priority for the 21st century.
Diet plays an important role in the prevention and treatment of NCDs [2]. Unhealthy dietary habits and a lack of knowledge about proper nutrition often contribute to poor diet choices. Fortunately, dietary assessment can help monitor daily food intake and promote healthier eating habits. In recent years, researchers in the fields of computer vision and health have shown great interest in dietary assessment [3]. Tools for automating the dietary assessment process have emerged with the widespread use of high-capacity smartphones and advancements in computer vision models. These tools are known as vision-based dietary assessment (VBDA) systems [4][5][6][7][8]. They use computer vision models to directly identify food item categories, evaluate their volume, and estimate nutrient content from smartphone camera pictures. VBDA systems typically involve three stages: food image analysis, portion estimation, and nutrient derivation [4]. The performance of the first two stages relies heavily on the effectiveness of artificial intelligence algorithms and the availability of good food datasets, while the final stage depends on a nutritional composition database. The food image analysis stage entails segmenting food regions from the background and recognizing each type of food present in the image. The next stage involves evaluating the quantity or volume of each detected food item.
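To make the stage decomposition concrete, the following is a minimal sketch of a VBDA pipeline. All names here (segment_and_recognize, estimate_portion, nutrient_db) are hypothetical placeholders for illustration, not an API from any of the surveyed systems.

```python
# Hypothetical sketch of the three-stage VBDA pipeline described above.
# The stage implementations and the nutrient database are passed in as
# arguments; they are placeholders, not a real library API.

def analyze_meal(image, segment_and_recognize, estimate_portion, nutrient_db):
    """Run the three VBDA stages on a single smartphone photo."""
    # Stage 1: food image analysis -- segment each food region and
    # assign it a food class label.
    regions = segment_and_recognize(image)  # -> [(label, mask), ...]

    meal_nutrients = {}
    for label, mask in regions:
        # Stage 2: portion estimation -- infer the weight/volume of
        # the item from its segmented region.
        grams = estimate_portion(image, mask)

        # Stage 3: nutrient derivation -- scale per-100 g values from a
        # nutritional composition database by the estimated portion.
        for nutrient, value in nutrient_db[label].items():
            meal_nutrients[nutrient] = (
                meal_nutrients.get(nutrient, 0.0) + value * grams / 100.0
            )
    return meal_nutrients
```

Passing the stage implementations as arguments keeps the sketch self-contained: any segmentation model, portion estimator, and composition table with these shapes could be plugged in.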
Food image segmentation and recognition pose significant challenges due to various factors. One of the primary challenges is the non-rigid structure of food, which differs from common objects and makes it difficult to use shape as a reliable feature for machine learning models. Additionally, foods usually exhibit high intra-class variation, meaning that the visual characteristics of the same food can differ significantly from one instance to another. This variation is particularly pronounced in African foods, further complicating accurate food recognition. Furthermore, inter-class resemblance is another source of potential recognition issues, as different food items can appear very similar, as illustrated in Figure 1. Examples of generic foods with such resemblances include brownies and chocolate cake, or margarine and butter. Moreover, certain dishes may be prepared with varying ingredients, so the same dish can have distinct visual appearances. Another significant challenge in food image segmentation and recognition is the scarcity of publicly available datasets for model training, which hinders the development of accurate segmentation models.
Figure 1. Different kinds of Cameroonian food with a similar yellow texture.
Current research on food image segmentation and recognition focuses mainly on images of Asian and Western foods. Unfortunately, there are only a few publicly available datasets for image segmentation, and none of them incorporate images of African foods, as shown in Table 1. However, African foods, including Cameroonian foods, present their own unique challenges. African dishes often consist of multiple mixed classes of food, as depicted in Figure 1. This complexity adds significant difficulty when attempting to segment and recognize individual food items. The more food classes mixed together on a plate, the more challenging it is to accurately detect the contours of each food component in the dish.

2. Food Image Dataset

With advancements in deep learning models for computer vision, food segmentation and recognition techniques are rapidly evolving. However, the performance of these techniques relies heavily on the availability of large, diverse, and well-annotated image datasets. Collecting such datasets is a labor-intensive task, and the quality of the annotations directly affects the performance of the models.
While some publicly available food image datasets exist, only a few are annotated for image segmentation and detection tasks [9][10]. The annotation process for segmentation is particularly tedious and error-sensitive. Image segmentation datasets vary in the geographic origin of the food and the methods used for image collection. Some datasets are annotated with only bounding boxes (UECFOOD100 [11], UECFOOD256 [12]), while others include polygon or mask annotations (MyFood Dataset [13], FoodSeg103 [14], UECFoodPixComplete [15]).
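To illustrate the difference between these annotation formats, the sketch below (assuming Pillow and NumPy, with made-up coordinates) rasterizes a polygon annotation into the per-pixel binary mask that segmentation models train on; a bounding box, by contrast, records only the rectangle enclosing the item.

```python
# Minimal sketch: rasterize a polygon annotation into a binary mask,
# the per-pixel format used by segmentation datasets. Uses Pillow only;
# the polygon coordinates below are made up for illustration.
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(polygon, height, width):
    """polygon: [(x0, y0), (x1, y1), ...] in pixel coordinates."""
    mask_img = Image.new("L", (width, height), 0)      # blank canvas of zeros
    ImageDraw.Draw(mask_img).polygon(polygon, fill=1)  # fill the region with 1s
    return np.array(mask_img, dtype=np.uint8)          # HxW binary mask

# Example: a triangular food region in a 100x100 image.
mask = polygon_to_mask([(10, 10), (90, 20), (50, 80)], 100, 100)
print(mask.sum(), "foreground pixels")
```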
There are four main methods for collecting images for food image datasets. First, images can be captured in a standardized laboratory environment (e.g., UNIMIB2016 [16]), which ensures high-resolution, good-quality images. However, this method typically limits the number of collected images. Second, images can be downloaded from the internet, either from social networks [17] or search engines [13]. This approach allows for the collection of large numbers of images, but can also yield many non-food images that need to be sorted out. Downloaded images may also vary in quality, including blurry, low-resolution, text-overlaid, or retouched images. Third, images can be collected directly from users [18], which provides a realistic representation of real-life scenarios. However, implementing this method can be challenging, as it requires a large number of users and an extended period to collect a substantial number of images. Finally, some datasets are built from images in other existing datasets. For instance, the UECFoodPixComplete [15] dataset was built by annotating UECFOOD100 [11] images. Likewise, Food201-Segmented was built from segmented Food-101 [19] images (see Table 1).
Table 1 lists, to the best of the researchers' knowledge, the only publicly available food image datasets for detection and segmentation tasks. These datasets are classified by their main characteristics, such as their usage, number of classes, total number of images, method of image collection, and the origin of the dishes represented in the images. Notably, the available datasets mostly focus on Asian or Western foods, and there is currently no dataset available for African foods. As part of this work, the researchers propose CamerFood10, the first dataset specifically designed for image segmentation of African food.

3. Segmentation Models for Food Images

Food image segmentation occurs in the first stage of a VBDA system. Its purpose is to separate food items from the background and from each other. Food image segmentation is challenging when food items overlap or lack strong visual contrast with the other food items on a plate. Several methods have been proposed to address these issues; they can be classified into three categories [9]: (i) semi-automatic approaches, (ii) automatic approaches involving machine learning (ML) with handcrafted feature extraction, and (iii) automatic approaches with deep learning feature extraction.
In semi-automatic techniques for food segmentation, the user is asked to select regions of interest in the image or to mark some pixels as food or background. A drawback of semi-automatic methods is that they are tedious, adding many extra actions for the user, unlike automatic approaches, where the user only needs to capture the food image. Automatic food image segmentation methods with handcrafted feature extraction (e.g., colour, texture, and shape) rely on traditional image processing techniques [9][25][26], such as region growing and merging [27], Normalized Cuts [28], Simple Linear Iterative Clustering (SLIC), the Deformable Part Model (DPM), the JSEG segmentation algorithm, K-means [29], and GrabCut [30]. One of the most popular works using these techniques was presented by Matsuda and Yanai [11] and addresses the problem of images containing multiple foods. It detects several candidate food regions by fusing the outputs of several region detectors (DPM, a circle detector, and JSEG), then recognizes each candidate region independently using various feature descriptors (bag of SIFT features, HoG, Gabor textures) and a support vector machine (SVM).
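As a concrete instance of these classical techniques, the sketch below applies OpenCV's GrabCut [30] in the semi-automatic setting: the user supplies a rough rectangle around the plate, and the algorithm iteratively separates foreground from background. The image path and rectangle coordinates are placeholders.

```python
# Sketch of semi-automatic food segmentation with GrabCut (OpenCV).
# "meal.jpg" and the rectangle are placeholders for a real user input.
import cv2
import numpy as np

img = cv2.imread("meal.jpg")
mask = np.zeros(img.shape[:2], dtype=np.uint8)
bgd_model = np.zeros((1, 65), dtype=np.float64)  # internal GMM state
fgd_model = np.zeros((1, 65), dtype=np.float64)

rect = (50, 50, 400, 300)  # (x, y, w, h): user-drawn box around the food
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked definite or probable foreground become the food mask.
food_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
segmented = img * food_mask[:, :, np.newaxis].astype(np.uint8)
cv2.imwrite("food_only.png", segmented)
```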
With the introduction of deep learning, deep neural networks automatically extract food image features and perform better than methods using traditional image processing techniques [7][9]. Im2Calories [20] was one of the pioneering works that used deep convolutional neural networks (CNNs) for semantic segmentation of food images. Pouladzadeh et al. [31] combined graph-cut segmentation with CNNs for calorie measurement, although their approach was limited to images with a single food label. Some studies have focused on simultaneous localization and recognition of foods using object detection models, like [32], which uses Fast-RCNN, and [33], which uses YOLO. Chiang et al. [34] proposed a model based on a mask region-based convolutional neural network (Mask-RCNN) with a union post-processing technique.
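For the deep learning family, a minimal sketch of instance segmentation with a pretrained Mask R-CNN from torchvision is shown below. It is analogous in spirit to the Mask-RCNN approach of Chiang et al. [34], but uses generic COCO weights rather than a food-specific model; the image path is a placeholder.

```python
# Sketch: instance segmentation with a pretrained (COCO, not food-specific)
# Mask R-CNN from torchvision. "meal.jpg" is a placeholder path.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = to_tensor(Image.open("meal.jpg").convert("RGB"))
with torch.no_grad():
    out = model([img])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

# Keep confident detections; each mask is a soft 1xHxW probability map
# that is thresholded into a binary instance mask.
keep = out["scores"] > 0.7
masks = (out["masks"][keep] > 0.5).squeeze(1)
print(f"{masks.shape[0]} instances detected")
```

In practice, such a model would be fine-tuned on a food dataset such as those in Table 1 before its class labels are meaningful for dietary assessment.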
In 2021, Wu et al. [14] proposed a semantic segmentation method consisting of a recipe learning (ReLeM) module and an image segmentation module. They used a long short-term memory (LSTM) network as an encoder and a vision transformer architecture as a decoder, achieving 43.9% mIoU on their FoodSeg103 dataset. Okamoto and Yanai [15] used the DeepLabv3+ model on their UECFoodPixComplete dataset and obtained 55.5% mIoU. Liang et al. [18] introduced a model called ChineseFoodSeg to address challenges specific to Chinese food images, such as blurred outlines, rich colors, and varied appearances. Their model outperformed DeepLabv3+, U-Net, and Mask-RCNN on the ChineseDiabetesFood187 dataset, achieving an accuracy of 94% and an mIoU of 79%. However, their method is more complex and less time-efficient than DeepLabv3+. Sharma et al. [35] proposed a novel architecture named GourmetNet, which incorporates both channel and spatial attention in an expanded multi-scale feature representation using an advanced Waterfall Atrous Spatial Pooling (WASPv2) [36] module. GourmetNet achieved state-of-the-art performance on the UNIMIB2016 and UECFoodPix datasets, with mIoU scores of 71.79% and 65.13%, respectively. More recently, [37] proposed Bayesian versions of DeepLabv3+ and GourmetNet [35] to perform multi-class segmentation of foods.
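Since the results above are reported as mean intersection-over-union (mIoU), the short sketch below shows how this metric is computed from predicted and ground-truth label maps: per-class IoU averaged over the classes that actually occur. The toy arrays are made up for illustration.

```python
# Sketch of the mIoU metric quoted in the results above: per-class
# intersection-over-union, averaged over classes present in either map.
import numpy as np

def mean_iou(pred, gt, num_classes):
    """pred, gt: HxW integer label maps with values in [0, num_classes)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: the prediction misses one pixel of class 1.
gt = np.array([[0, 1, 1], [0, 1, 1]])
pred = np.array([[0, 0, 1], [0, 1, 1]])
print(f"mIoU = {mean_iou(pred, gt, 2):.3f}")
```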
It is worth noting that the quality of the image dataset plays a significant role in the performance of these models. Well-arranged food on plates, photographed clearly, often leads to better results. Unfortunately, this is rarely the case in real-world images.

References

  1. World Health Organization. Noncommunicable Diseases: Progress Monitor 2022; World Health Organization: Geneva, Switzerland, 2022.
  2. Iriti, M.; Varoni, E.M.; Vitalini, S. Healthy diets and modifiable risk factors for non-communicable diseases—The European perspective. Foods 2020, 9, 940.
  3. Min, W.; Jiang, S.; Liu, L.; Rui, Y.; Jain, R. A survey on food computing. ACM Comput. Surv. 2019, 52, 1–36.
  4. Wang, W.; Min, W.; Li, T.; Dong, X.; Li, H.; Jiang, S. A review on vision-based analysis for automatic dietary assessment. Trends Food Sci. Technol. 2022, 122, 223–237.
  5. Subhi, M.A.; Ali, S.H.; Mohammed, M.A. Vision-based approaches for automatic food recognition and dietary assessment: A survey. IEEE Access 2019, 7, 35370–35381.
  6. Tay, W.; Kaur, B.; Quek, R.; Lim, J.; Henry, C.J. Current developments in digital quantitative volume estimation for the optimisation of dietary assessment. Nutrients 2020, 12, 1167.
  7. Tahir, G.A.; Loo, C.K. A comprehensive survey of image-based food recognition and volume estimation methods for dietary assessment. Healthcare 2021, 9, 1676.
  8. Lu, Y.; Stathopoulou, T.; Vasiloglou, M.F.; Pinault, L.F.; Kiley, C.; Spanakis, E.K.; Mougiakakou, S. goFOODTM: An artificial intelligence system for dietary assessment. Sensors 2020, 20, 4283.
  9. Konstantakopoulos, F.S.; Georga, E.I.; Fotiadis, D.I. A Review of Image-based Food Recognition and Volume Estimation Artificial Intelligence Systems. IEEE Rev. Biomed. Eng. 2023; online ahead of print.
  10. Park, D.; Lee, J.; Lee, J.; Lee, K. Deep learning based food instance segmentation using synthetic data. In Proceedings of the 18th International Conference on Ubiquitous Robots (UR), Gangneung, Republic of Korea, 12–14 July 2021; pp. 499–505.
  11. Matsuda, Y.; Hoashi, H.; Yanai, K. Recognition of multiple-food images by detecting candidate regions. In Proceedings of the IEEE International Conference on Multimedia and Expo, Melbourne, VIC, Australia, 9–13 July 2012; pp. 25–30.
  12. Kawano, Y.; Yanai, K. Automatic Expansion of a Food Image Dataset Leveraging Existing Categories with Domain Adaptation. In Proceedings of the Computer Vision-ECCV 2014 Workshops, Zurich, Switzerland, 6–7 September 2014; Proceedings, Part III 13. Springer: Berlin/Heidelberg, Germany, 2015; pp. 3–17.
  13. Freitas, C.N.; Cordeiro, F.R.; Macario, V. Myfood: A food segmentation and classification system to aid nutritional monitoring. In Proceedings of the 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Virtual Event, 7–10 September 2020; pp. 234–239.
  14. Wu, X.; Fu, X.; Liu, Y.; Lim, E.P.; Hoi, S.C.; Sun, Q. A large-scale benchmark for food image segmentation. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, 20–24 October 2021; pp. 506–515.
  15. Okamoto, K.; Yanai, K. UEC-FoodPIX Complete: A Large-Scale Food Image Segmentation Dataset. In Proceedings of the Pattern Recognition ICPR International Workshops and Challenges, Virtual Event, 10–15 January 2021; Proceedings, Part V. Springer: Berlin/Heidelberg, Germany, 2021; pp. 647–659.
  16. Ciocca, G.; Napoletano, P.; Schettini, R. Food recognition: A new dataset, experiments, and results. IEEE J. Biomed. Health Inform. 2016, 21, 588–598.
  17. Jalal, M.; Wang, K.; Jefferson, S.; Zheng, Y.; Nsoesie, E.O.; Betke, M. Scraping social media photos posted in Kenya and elsewhere to detect and analyze food types. In Proceedings of the 5th International Workshop on Multimedia Assisted Dietary Management, Nice, France, 21 October 2019; pp. 50–59.
  18. Liang, Y.; Li, J.; Zhao, Q.; Rao, W.; Zhang, C.; Wang, C. Image Segmentation and Recognition for Multi-Class Chinese Food. In Proceedings of the 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16–19 October 2022; pp. 3938–3942.
  19. Bossard, L.; Guillaumin, M.; Van Gool, L. Food-101—Mining Discriminative Components with Random Forests. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part VI 13. Springer: Berlin/Heidelberg, Germany, 2014; pp. 446–461.
  20. Meyers, A.; Johnston, N.; Rathod, V.; Korattikara, A.; Gorban, A.; Silberman, N.; Guadarrama, S.; Papandreou, G.; Huang, J.; Murphy, K.P. Im2Calories: Towards an automated mobile vision food diary. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1233–1241.
  21. Ege, T.; Yanai, K. Estimating food calories for multiple-dish food photos. In Proceedings of the 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR), Nanjing, China, 26–29 November 2017; pp. 646–651.
  22. Gao, J.; Tan, W.; Ma, L.; Wang, Y.; Tang, W. MUSEFood: Multi-Sensor-based food volume estimation on smartphones. In Proceedings of the 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Leicester, UK, 19–23 August 2019; pp. 899–906.
  23. Aslan, S.; Ciocca, G.; Mazzini, D.; Schettini, R. Benchmarking algorithms for food localization and semantic segmentation. Int. J. Mach. Learn. Cybern. 2020, 11, 2827–2847.
  24. Mohanty, S.P.; Singhal, G.; Scuccimarra, E.A.; Kebaili, D.; Héritier, H.; Boulanger, V.; Salathé, M. The food recognition benchmark: Using deep learning to recognize food in images. Front. Nutr. 2022, 9, 875143.
  25. Chopra, M.; Purwar, A. Recent studies on segmentation techniques for food recognition: A survey. Arch. Comput. Methods Eng. 2022, 29, 865–878.
  26. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542.
  27. Dehais, J.; Anthimopoulos, M.; Mougiakakou, S. Food image segmentation for dietary assessment. In Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, Amsterdam, The Netherlands, 16 October 2016; pp. 23–28.
  28. Wang, Y.; Liu, C.; Zhu, F.; Boushey, C.J.; Delp, E.J. Efficient superpixel based segmentation for food image analysis. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2544–2548.
  29. E Silva, B.V.R.; Rad, M.G.; Cui, J.; McCabe, M.; Pan, K. A mobile-based diet monitoring system for obesity management. J. Health Med. Inform. 2018, 9, 307.
  30. Kawano, Y.; Yanai, K. Foodcam: A real-time food recognition system on a smartphone. Multimed. Tools Appl. 2015, 74, 5263–5287.
  31. Pouladzadeh, P.; Kuhad, P.; Peddi, S.V.B.; Yassine, A.; Shirmohammadi, S. Food calorie measurement using deep learning neural network. In Proceedings of the 2016 IEEE International Instrumentation and Measurement Technology Conference Proceedings, Taipei, Taiwan, 23–26 May 2016; pp. 1–6.
  32. Bolanos, M.; Radeva, P. Simultaneous food localization and recognition. In Proceedings of the 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 3140–3145.
  33. Sun, J.; Radecka, K.; Zilic, Z. Foodtracker: A real-time food detection mobile application by deep convolutional neural networks. arXiv 2019, arXiv:1909.05994.
  34. Chiang, M.L.; Wu, C.A.; Feng, J.K.; Fang, C.Y.; Chen, S.W. Food calorie and nutrition analysis system based on mask R-CNN. In Proceedings of the IEEE 5th International Conference on Computer and Communications (ICCC), Chengdu, China, 6–9 December 2019; pp. 1721–1728.
  35. Sharma, U.; Artacho, B.; Savakis, A. Gourmetnet: Food segmentation using multi-scale waterfall features with spatial and channel attention. Sensors 2021, 21, 7504.
  36. Artacho, B.; Savakis, A. Omnipose: A multi-scale framework for multi-person pose estimation. arXiv 2021, arXiv:2103.10180.
  37. Aguilar, E.; Nagarajan, B.; Remeseiro, B.; Radeva, P. Bayesian deep learning for semantic segmentation of food images. Comput. Electr. Eng. 2022, 103, 108380.