NEWS: CyberAgent Reveals Guidelines for In-House Creators Using Generative AI for Art




Note: this is the discussion thread for this article

maximilianjenus



Joined: 29 Apr 2013
Posts: 2911
Posted: Sat Apr 13, 2024 8:10 am
Yup, these are the common-sense guidelines we would have gotten in less than a month if luddites hadn't panicked and had their brains stop working at the very mention of AI.
gordonfreeman1



Joined: 13 Nov 2023
Posts: 14
Posted: Sat Apr 13, 2024 3:22 pm
Finally, a company putting sensible guidelines in place. Although to be honest, the guidelines simply prove that the current crop of gen "AI" (which is just brute-forced ML rather than anything approaching true intelligence) is incomplete and not usable for serious, high-quality production work, as it has tonnes of copyright theft involved.

Last edited by gordonfreeman1 on Sat Apr 13, 2024 4:00 pm; edited 1 time in total
NeverConvex
Subscriber



Joined: 08 Jun 2013
Posts: 2580
Posted: Sat Apr 13, 2024 3:44 pm
ANN Article wrote:
and be aware of how the AI generates works that may be too similar to other works.


Considering how large the store of content behind most artificial neural networks worth trying to use is/was (you know, somehow it never occurred to me that they share an acronym with, uh, ANN Laughing ), I'm having trouble seeing how staff are supposed to do this kind of check manually with any real confidence. Maybe you can quickly rule out the output's similarity to commercially well-known works, but can you imagine being on the hook for declaring that an image the network generated isn't overly similar to anything anywhere on DeviantArt? It would be nice if the models were trained in a way that guaranteed they'd reproduce any individual existing work only with exceedingly low probability. Or maybe this will inspire work on still more ANNs for automatically detecting whether a specific piece matches an existing work very closely. Machine learning turtles all the way down...
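For what it's worth, here is a minimal sketch of one building block such an automated check might use: comparing a generated image against a single known work with a crude perceptual "average hash". The filenames and the distance threshold are made up for illustration; a real system would search learned embeddings over a huge index rather than do one pairwise comparison.

Code:
# Hypothetical sketch: flag a generated image that is visually very close to a
# known reference image using an 8x8 "average hash".
from PIL import Image
import numpy as np

def average_hash(path, size=8):
    # Downscale to size x size grayscale, then threshold each pixel against the mean.
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming_distance(a, b):
    # Number of differing bits; lower means the two images look more alike.
    return int(np.count_nonzero(a != b))

generated = average_hash("generated.png")       # hypothetical generated output
reference = average_hash("existing_work.png")   # hypothetical existing work
if hamming_distance(generated, reference) <= 5:  # threshold chosen for illustration
    print("Generated image is suspiciously close to the existing work.")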
TheBossman



Joined: 23 Jun 2015
Posts: 8
Posted: Sat Apr 13, 2024 5:18 pm
ANN Article wrote:
The guidelines also forbid creators from uploading copyrighted works as part of the learning model without express permission from the copyright holder.


Aren't stolen images still used for the models' base data?

Anyways, with all of these (necessary) guidelines, wouldn't it just be better to... I don't know... forgo generative AI altogether?
Romuska
Subscriber



Joined: 02 Mar 2004
Posts: 814
Posted: Sat Apr 13, 2024 5:28 pm
I think there may be one extra use of the word "using" in the headline.
Vanadise



Joined: 06 Apr 2015
Posts: 535
Posted: Sat Apr 13, 2024 7:52 pm
Yeah, on their face these guidelines seem fine, but the problem is that every single dataset out there used to train generative AI was built on stolen data. Even Adobe Firefly, which Adobe insisted was ethically sourced and commercially safe, turned out to have been trained on stolen data.

These guidelines are still worthless if your entire technology is fundamentally unethical.
Tenebrae



Joined: 26 Apr 2008
Posts: 492
Posted: Sun Apr 14, 2024 7:51 am
Ha, good luck trying to prove that an image was generated using a model that was trained on existing art. Spoilers: that won't be possible.

Besides, the models don't contain copyrighted works in any case; some may just have been trained by having them look at those works, much like real people train their skills by looking at existing art. Their brains are not in breach of copyright, as far as I'm aware, even if they have perfect recall of the images.

The actual point under debate is whether it is morally proper to have an AI look at other people's work while its neural network is being trained. Of course, if the trainer is the copyright holder, the debate can be skipped over. I did mention at one point that one way forward would be for the large companies to buy in bulk, like buying out Getty. Well, Getty has since begun to provide its own model (models?) created from its photography vault.

So would it be OK to run an anime-styled checkpoint based on that? Or will people admit it is not the thought of training a model on people's works that they hate, it is the idea of AI-generated imagery itself? I mean, it's not going to go away; the same economics that made the internet ubiquitous will make this ubiquitous too.

(And you can try it at home; Stable Diffusion is an open-source generator that can be hosted locally, just choose a UI to go with it like ComfyUI or A1111.)
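For anyone who would rather skip the UI entirely, here is a minimal sketch of the same idea through Hugging Face's diffusers library (ComfyUI and A1111 are front ends around this kind of pipeline). The checkpoint name, prompt, and step count are just examples, and a CUDA-capable GPU is assumed.

Code:
# Minimal local Stable Diffusion run via the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; any compatible one works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU; use "cpu" (slowly) otherwise

image = pipe("an anime-styled landscape at dusk", num_inference_steps=30).images[0]
image.save("output.png")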
Egan Loo



Joined: 25 Feb 2005
Posts: 1363
Posted: Sun Apr 14, 2024 10:13 am
Tenebrae wrote:
Ha, good luck trying to prove that an image was generated using a model that was trained on existing art. Spoilers: that won't be possible.


If AI guidelines become the center of a legal case, the party behind the generative AI software could be forced to make its source code and training data available via discovery. Then, yes, that will be quite possible — and it could be why some parties might settle to avoid revealing their trade secrets.

Tenebrae wrote:

Besides, the models don't contain copyrighted works in any case; some may just have been trained by having them look at those works, much like real people train their skills by looking at existing art. Their brains are not in breach of copyright, as far as I'm aware, even if they have perfect recall of the images.


"That's how humans do it!" is not a defense against copyright and trademark infringement. Humans are also perfectly capable of breaking intellectual property laws after just looking at other people's works and regurgitating. After all, that's how humans have made forgeries for centuries.
Rob J.



Joined: 26 Apr 2023
Posts: 62
Posted: Thu Apr 18, 2024 6:51 pm
AI "art" isn't art -- it's the visual equivalent of Mad Libs (plug in random words to see how nonsensical the result is) when it isn't outright plagiarism.

If its algorithm is based on scraping anything that isn't in the public domain, without licensing and paying for it, it's plagiarism, forgery, and theft.