Forum - View topic
NEWS: CyberAgent Reveals Guidelines for In-House Creators Using Generative AI for Art
Note: this is the discussion thread for this article
maximilianjenus
Posts: 2906
Yup, these are the common-sense guidelines we would have gotten in less than a month if luddites hadn't panicked and had their brains stop working at the very mention of AI.
gordonfreeman1
Posts: 13
Finally, a company putting sensible guidelines in place. Although, to be honest, the guidelines simply prove that the current crop of generative "AI" (which is just brute-forced machine learning rather than anything approaching true intelligence) is incomplete and not usable for serious, high-quality production work, as it has tonnes of copyright theft involved.
Last edited by gordonfreeman1 on Sat Apr 13, 2024 4:00 pm; edited 1 time in total
NeverConvex
Subscriber
Posts: 2542
Considering how large the store of training content is (or was) for most artificial neural networks worth using (you know, it somehow never occurred to me that that shares an acronym with, uh, ANN), I'm having trouble seeing how staff are supposed to do this kind of check manually with any real confidence. Maybe you can quickly rule out an output's similarity to commercially well-known works, but can you imagine being on the hook for declaring that an image the network generated isn't overly similar to anything anywhere on DeviantArt?

It would be nice if the models were trained in a way that guaranteed they'd reproduce any individual existing work only with exceedingly low probability. Or maybe this will inspire work on still more ANNs for automatically detecting whether a specific piece matches an existing work very closely. Machine learning turtles all the way down.
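[Editor's note: the automated "does this generation closely match an existing work?" check imagined above is roughly what perceptual hashing does today. A minimal sketch in pure Python, assuming images have already been converted to tiny 8×9 grayscale grids; the dHash algorithm and the distance threshold are illustrative choices, not anything CyberAgent's guidelines describe.]

```python
# Minimal near-duplicate check via a difference hash (dHash).
# An "image" here is a 2D grayscale grid (lists of 0-255 ints),
# assumed already downscaled to 8 rows x 9 columns, so each row
# yields 8 left-vs-right brightness comparisons (64 bits total).

def dhash_bits(grid):
    """Flatten row-wise 'is the left pixel brighter?' comparisons into bits."""
    return [1 if row[x] > row[x + 1] else 0
            for row in grid
            for x in range(len(row) - 1)]

def hamming(bits_a, bits_b):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(bits_a, bits_b))

def looks_like_copy(grid_a, grid_b, threshold=10):
    """Heuristic: a small Hamming distance suggests near-duplicate images."""
    return hamming(dhash_bits(grid_a), dhash_bits(grid_b)) <= threshold
```

A real pipeline would hash every generation, compare against an index of hashes of the training corpus (or of known copyrighted works), and tune the threshold empirically; this only shows the shape of the check, and it would still miss stylistic copying that survives heavy pixel-level change.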
TheBossman
Posts: 8
Aren't stolen images still used for the models' base data? Anyways, with all of these (necessary) guidelines, wouldn't it just be better to... I don't know... forgo generative AI altogether?
Romuska
Subscriber
Posts: 813
I think there may be one extra use of the word "using" in the headline.
Vanadise
Posts: 531
Yeah, on their face these guidelines seem fine, but the problem is that every single model out there used for generative AI was trained on stolen data. Even Adobe Firefly, which Adobe insisted was ethically sourced and commercially safe, turned out to have been trained on stolen data.

These guidelines are still worthless if your entire technology is fundamentally unethical.
Tenebrae
Posts: 490
Ha, good luck trying to prove that an image was generated with a model that was trained on existing art. Spoiler: that won't be possible.

Besides, the models don't contain copyrighted works in any case; some may just have been trained by being shown them, the way real people train their skills by looking at existing works. Their brains are not in breach of copyright, as far as I'm aware, even if they have perfect recall of the images. The actual point under debate is whether it is morally proper to have AI look at other people's work while training a neural network. Of course, if the trainer is the copyright holder, the debate can be skipped entirely.

I did mention at one point that one way forward would be for the large companies to buy in bulk, like buying out Getty. Well, Getty has since begun to provide its own model (models?) created from its photography vault. So would it be OK to run an anime-styled checkpoint based on that? Or will people admit that it is not the thought of training a model on people's works they hate, but the idea of AI-generated imagery itself? I mean, it's not going to go away; the same economics that made the internet ubiquitous will make it ubiquitous too. (And you can try it at home: Stable Diffusion is an open-source generator that can be hosted locally; just choose a UI to go with it, like ComfyUI or A1111.)
Egan Loo
Posts: 1356
If AI guidelines become the center of a legal case, the party behind the generative AI software could be forced to make its source code and training data available via discovery. Then, yes, that will be quite possible, and it could be why some parties might settle to avoid revealing their trade secrets.

"That's how humans do it!" is not a defense against copyright and trademark infringement. Humans are also perfectly capable of breaking intellectual property laws after just looking at other people's works and regurgitating them. After all, that's how humans have made forgeries for centuries.
Rob J.
Posts: 62
AI "art" isn't art -- it's the visual equivalent of Mad Libs (plug in random words to see how nonsensical the result is) when it isn't outright plagiarism.

If its algorithm is based on scraping anything that isn't in the public domain, without licensing and paying for it, it's plagiarism, forgery, and theft.
All times are GMT - 5 Hours
Powered by phpBB © 2001, 2005 phpBB Group