CookGAN: Meal Image Synthesis from Ingredients

Paper
Code
Demo

Abstract

In this work we propose a new computational framework, based on generative deep models, for the synthesis of photo-realistic meal images from a textual list of ingredients. Previous works on synthesizing images from text typically rely on pre-trained text models to extract text features, followed by a generative adversarial network (GAN) that generates realistic images conditioned on those features. These works mainly focus on spatially compact, well-defined categories of objects, such as birds or flowers, whereas meal images are significantly more complex: they consist of multiple ingredients whose appearance and spatial arrangement are further modified by cooking methods. To generate realistic meal images from ingredients, we propose the Cook Generative Adversarial Network (CookGAN). CookGAN first builds an attention-based ingredient-image association model, which is then used to condition a generative neural network tasked with synthesizing meal images. A cycle-consistent constraint is further added to improve image quality and control appearance. Experiments show our model is able to generate meal images corresponding to the ingredients.
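To make the three pieces the abstract names concrete, below is a minimal PyTorch sketch of (1) an attention-based ingredient encoder, (2) a generator conditioned on the pooled ingredient feature, and (3) a cycle-consistency term that re-encodes the generated image and pulls its feature back toward the conditioning feature. All class names, dimensions, and the cosine-similarity loss here are illustrative assumptions, not the repository's actual API; see the Code link for the real implementation.

```python
# Hypothetical sketch of the CookGAN ideas; names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IngredientEncoder(nn.Module):
    """Embeds an ingredient list and pools it with learned attention."""
    def __init__(self, vocab_size, embed_dim=300, out_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.attn = nn.Linear(embed_dim, 1)        # scores each ingredient
        self.proj = nn.Linear(embed_dim, out_dim)

    def forward(self, ingredient_ids):             # (B, num_ingredients)
        e = self.embed(ingredient_ids)              # (B, N, embed_dim)
        mask = (ingredient_ids != 0).unsqueeze(-1)  # ignore padding tokens
        scores = self.attn(e).masked_fill(~mask, float('-inf'))
        alpha = torch.softmax(scores, dim=1)        # attention over ingredients
        pooled = (alpha * e).sum(dim=1)             # (B, embed_dim)
        return self.proj(pooled)                    # ingredient feature

class Generator(nn.Module):
    """Maps noise + ingredient feature to a small RGB image."""
    def __init__(self, noise_dim=100, cond_dim=1024):
        super().__init__()
        self.fc = nn.Linear(noise_dim + cond_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z, cond):
        h = self.fc(torch.cat([z, cond], dim=1)).view(-1, 128, 8, 8)
        return self.net(h)                          # (B, 3, 32, 32)

def cycle_loss(image_encoder, fake_images, cond):
    """Cycle-consistency: the generated image, re-encoded, should match
    the ingredient feature it was conditioned on (cosine form assumed)."""
    recovered = image_encoder(fake_images)
    return 1.0 - F.cosine_similarity(recovered, cond, dim=1).mean()
```

In this sketch the attention weights let salient ingredients dominate the conditioning vector, and the cycle term penalizes generated images whose content drifts away from the input ingredient list; both would be trained jointly with a standard adversarial loss.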

Figure: the CookGAN framework (model structure).

Citation

@inproceedings{han2020cookgan,
  title={{CookGAN}: Meal Image Synthesis from Ingredients},
  author={Han, Fangda and Guerrero, Ricardo and Pavlovic, Vladimir},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1450--1458},
  year={2020}
}

License

Attribution 4.0 International (CC BY 4.0)