Humans learn from other creative works, just like AI. AI can generate original content too if asked.
AI creates output from a stochastic model of its training data. That’s not a creative process.
What does that mean, and isn’t that still something people can employ for their creative process?
A person sees a piece of art and is inspired. They understand what they see, be it a rose bush to paint or a story beat to work on. This inspiration leads to actual decisions being made with a conscious aim to create art.
An AI, on the other hand, sees a rose bush and adds it to its rose bush catalog, reads a story beat and adds it to its story database. These databases are then shuffled and things are picked out, with no mind involved whatsoever.
A person knows why a rose bush is beautiful, and internalises that thought to create art. They know why a story beat is moving, and can draw out emotional connections. An AI can’t do either of these.
LLMs analyse their inputs and create a stochastic model (i.e. an estimate of how randomness is distributed in a domain) of which word comes next.
Yes, it can help in a creative process, but so can literal noise. It can’t “be creative” in itself.
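For what it’s worth, here’s a minimal sketch of what “a stochastic model of which word comes next” means, using a toy bigram model. The corpus is made up for illustration, and a real LLM learns a vastly richer model, but the sampling step is the same in spirit:

```python
import random
from collections import defaultdict

# Toy bigram model (nothing like a real LLM in scale, but the same idea):
# learn a stochastic model of which word comes next, then sample from it.
corpus = ("the rose is red the rose is beautiful "
          "the story is moving the story is told").split()

# Count which words follow each word in the training text.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in followers:
            break  # dead end: the word never appeared mid-corpus
        word = random.choice(followers[word])  # sample in proportion to counts
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the story is red the rose is moving the"
```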
How does that preclude these models from being creative? Randomness within rules can be pretty creative: all life on Earth is the result of selection acting on random mutations (see the sketch below). And a model’s output is way more structured and coherent than random noise, so that’s not a good comparison at all.
Either way, generative tools are a great way for the people using them to create with; no model has to be creative on its own.
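To make the “selection on random mutations” point concrete, here’s a minimal sketch in the style of Dawkins’ “weasel” program (the target string and mutation rate are arbitrary choices for the demo): starting from pure noise, mutation plus selection reaches a structured target in a few dozen generations, which noise alone essentially never would.

```python
import random
import string

# Random mutation plus selection, starting from pure noise.
# The target string and parameters are arbitrary choices for the demo.
TARGET = "randomness within rules"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # Number of characters that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # pure noise
generations = 0
while parent != TARGET:
    # Selection: keep the fittest of the parent and 100 random mutants.
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)
    generations += 1

print(f"reached '{TARGET}' in {generations} generations")
```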
They lack intentionality, simple as that.
Yup, my original point still stands.
How is intentionality integral to creativity?
Are you serious?
Intentionality is integral to communication. Creative art is a subset of communication.
I was asking about creativity, not art. It’s possible for something to be creative and not be art.
LLM AI doesn’t learn. It doesn’t conceptualise. It mimics, iterates and loops. AI cannot generate original content with LLM approaches.
Interesting take on LLMs. How are you so sure about that?
I mean I get it, current image gen models seem clearly uncreative, but at least the unrestricted versions of Bing Chat/ChatGPT leave some room for the possibility of creativity/general intelligence in future sufficiently large LLMs, at least to me.
So the question (again: to me) is not only “will LLMs scale to (human-level) general intelligence”, but also “will we find something better than RLHF/LLMs/etc. before then?”.
I’m not sure about either, but I’d assign roughly a 2/3 probability to the first, and, given the first event and AGI being in reach within the next 8 years, a comparatively small probability to the second.