
Machine Learning Can Turn Photos Into Anime-Style Background Art

posted by Kim Morrissy
Study shows 'AnimeGAN' framework rendering photos in Hayao Miyazaki, Paprika, Makoto Shinkai styles

A Chinese research team from Wuhan University and Hubei University of Technology has developed a machine learning framework that can turn photographs into high-quality anime-style background art. Their study includes an example of a photo rendered in "Hayao Miyazaki," "Paprika" (a film directed by Satoshi Kon), and "Makoto Shinkai" styles.

The paper, titled "AnimeGAN: A Novel Lightweight GAN for Photo Animation," combines neural style transfer with generative adversarial networks (GANs) to achieve fast, high-quality results from a lightweight network. The framework is intended to help artists save time when creating lines, textures, colors, and shadows.
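The adversarial setup behind this kind of photo-to-animation model pairs a generator, which maps a photo to a stylized image, against a discriminator, which learns to tell generated images apart from real anime artwork. The minimal TensorFlow sketch below illustrates that wiring only; the layer sizes, losses, and names are illustrative assumptions and do not reproduce the paper's actual architecture or its specialized style and color losses.

import tensorflow as tf

def build_generator():
    # Toy encoder-decoder mapping a 256x256 photo to a stylized image.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 7, padding="same", activation="relu",
                               input_shape=(256, 256, 3)),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(3, 7, padding="same", activation="tanh"),
    ])

def build_discriminator():
    # Toy patch classifier judging whether an image looks like anime art.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 4, strides=2, padding="same",
                               input_shape=(256, 256, 3)),
        tf.keras.layers.LeakyReLU(0.2),
        tf.keras.layers.Conv2D(1, 4, padding="same"),  # per-patch real/fake logits
    ])

generator = build_generator()
discriminator = build_discriminator()
g_opt = tf.keras.optimizers.Adam(2e-4)
d_opt = tf.keras.optimizers.Adam(2e-4)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(photos, anime_images):
    # Both batches are float32 tensors scaled to [-1, 1].
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(photos, training=True)
        real_logits = discriminator(anime_images, training=True)
        fake_logits = discriminator(fakes, training=True)
        # Discriminator: separate real anime art from generated images.
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator: fool the discriminator while staying close to the
        # input photo (a crude stand-in for the paper's content losses).
        g_loss = (bce(tf.ones_like(fake_logits), fake_logits)
                  + 10.0 * tf.reduce_mean(tf.abs(fakes - photos)))
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    return g_loss, d_loss

In the published framework, the plain pixel-difference term used above is replaced with more elaborate content and style losses, which is what lets the output adopt anime textures while staying faithful to the photo.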

The study's authors Jie Chen, Gang Liu, and Xin Chen first submitted the paper to the International Symposium on Intelligence Computation and Applications 2019, and it became available through the online academic literature hub Springer Link on May 26. Japanese tech news outlet ITMedia highlighted the research in an article published on Tuesday.

A GitHub user named "TachibanaYoshino" has published an open-source implementation of AnimeGAN built on the machine learning platform TensorFlow. Anyone can download the code and use it to create their own anime-style images.
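For a sense of what running such a trained generator looks like, here is a hypothetical Python inference sketch. It is not the repository's actual entry point (the real scripts and checkpoint formats are documented in the AnimeGAN README), and the model and file names are placeholders.

import tensorflow as tf

def stylize(model_path: str, photo_path: str, out_path: str) -> None:
    # Assumes a trained generator saved in Keras format (placeholder name).
    generator = tf.keras.models.load_model(model_path)
    raw = tf.io.read_file(photo_path)
    image = tf.io.decode_image(raw, channels=3, expand_animations=False)
    image = tf.image.resize(image, (256, 256)) / 127.5 - 1.0  # scale to [-1, 1]
    styled = generator(image[tf.newaxis, ...], training=False)[0]
    styled = tf.cast((styled + 1.0) * 127.5, tf.uint8)  # back to [0, 255]
    tf.io.write_file(out_path, tf.io.encode_jpeg(styled))

stylize("animegan_generator.keras", "photo.jpg", "anime_style.jpg")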

The tech industry continues to make strides in developing artificial intelligence programs that ease the anime production workload. In 2017, Dwango's Yuichi Yagi debuted an AI program that creates in-between animation, and Dwango later announced that the program was used for some parts of FLCL Progressive. Imagica Group and OLM Digital joined forces with the Nara Institute of Science and Technology (NAIST) to develop an automatic coloring technique, further expanding the AI options available to studios.

Last year, video game company NCSoft (Guild Wars 2) researchers Jun-Ho Kim, Minjae Kim, Hyeonwoo Kang, and Kwanghee Lee introduced a program that uses GANs to transform images of real-life people into anime characters. AlgoAge Co.'s DeepAnime artificial intelligence engine can automatically generate talking animation from a single image and a voice recording.

Manga publisher Hakusensha has begun using the PaintsChainer automatic coloring program for some of its online manga releases.

Source: ITMedia (Yūki Yamashita)

