
> When a machine can pump out a great literature review or summary of existing work, there's no value in a person doing it.

I like most of the article but this is the crux for me. As I ruminate on the ideas and topics in the essay, I’m increasingly convinced there is inherent value in humans doing things regardless of whether an algorithm can produce a “better” end product. The value is not in the end product as much as the experience of making something. By all means, let’s use AI to make advances in medicine and other fields that have to do with healing and making order. But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege.

I wonder if we’re going to experience a revelation in the way we think about work. As computers get more and more capable of doing things for us, I hope we realize the value of doing versus thinking mostly about the value of the end result. Another value would be the relationship building experience of doing something for others and the gratitude that is engendered when someone works hard to make something for you.



> But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege.

I don't know how I feel about this. I believe humans may enjoy work - I often say that if I won the lottery I would still sit in front of a computer coding and experimenting, creating software because I enjoy it - but that's not where the value of being human comes from.

I think having to work and enjoying doing a specific job are two different things, and I am just lucky that, for me, that Venn diagram is a single circle. Many, if not most, people would not be doing the job they are doing given an alternative.

When the needed work is fully automated and done by machines/AI, people will find a better use of their time. I believe our current economic model and social architecture are not equipped for that shift, but that's another long story.

[Edited: fixed typo]


People who enjoy the resulting concentration of wealth will find better things to do with their time. The much larger group of people who see their wealth diminish will not.


My cynical take is that the rest of us will be funneled into endless war and plague scenarios until the population is small enough to be less of a threat to those who enjoy that concentrated wealth.


There are probably easier and less chaotic ways. If you get a nice AI-enhanced VR world and some AI-generated new drugs and such, you can just have everyone live out their existence in a parallel reality, in some kind of oblivion. I’d much rather have that, as a rich person, than billions of dead and everything destroyed.


To me, work is inherently noble. It's the forces that corrupt it that are the problem, not work itself. Getting to enjoy work is an unfortunately rare blessing but I also think enjoyment of work is more dependent on the individual's mindset about their work than we often are willing to admit. It's a very complicated puzzle.


I don't understand what's inherently noble about being paid X dollars to sit at a desk and do something useless to society at large so my employer can make X*5 dollars.


All the things you mentioned are what I mean by the forces that corrupt work. Yes, we should be paid for our work, within reason. And we should get to do things that are inherently useful to others. But if you're doing something that's useless to society and your employer is exploiting that work, then you're experiencing corrupted work. Not that uncorrupted work is easy to find in the world, but I am of the opinion that the core essence of work is making order out of disorder. You can do that by building pacemakers or tilling fields. There will always be things that corrupt work, unfortunately. But work, unadulterated, is a good thing. I'd be willing to bet that you have something you like to do that can be characterized as making order out of disorder, even if it's not at your job. That is work, and it is good.


Thank you for the explanation, which gives me a better idea of what you were talking about. It's definitely food for thought for those like me in pointless jobs.


No sweat. I definitely don't want to downplay the reality of your frustrations with your job. It's just that the many facets of the topic of work are very meaningful to me and I have a lot of strong convictions about it. How to enjoy work or find meaning in it is a whole other conversation but I'm truly sorry your job sucks.


This sounds a lot like Star Trek TNG. At least Picard has said something similar.

In a post-scarcity society, people work to elevate themselves.


For different definitions of 'elevate'. For example, some will seek power, which will still be scarcer for others.


People without purpose are a very, very dangerous thing. And don't fool yourself into thinking that most people would find proper ways to spend their time. Maybe this is why the Metaverse is being pushed ever harder: to create some fake thing for people to spend their time in. That's why it is rushed.


I don’t care if computers can do things like write novels, compose music, or make paintings. If the computer can’t suffer, its “art” cannot have meaning, and is therefore uninteresting to me. Art is interesting to me because it is a vehicle for intelligent, self-aware beings to express themselves and transcend suffering.


Indeed. The fallacy here is assuming that if a computer can create works that humans cannot distinguish from those created by other humans, then that computer is creating art. But art is inseparable from the artist. An atom-for-atom copy of the Mona Lisa wouldn't be great art, it would be great engineering. We associate Van Gogh's art with his madness, Da Vinci's art with his universal genius, Michelangelo's art with his faith, Rembrandt's art with his empathy, Picasso's art with his willingness to break with norms, and Giger's art with his terrifying nightmares. None of those works would mean what they mean if it weren't for their human creators.


> Indeed. The fallacy here is assuming that if a computer can create works that humans cannot distinguish from those created by other humans, then that computer is creating art. But art is inseparable from the artist.

I hope you and the parent comment are correct, but this argument seems a little facile.

There is some art that I like because there is a story that connects the art to the artist.

But there are also novels that I have enjoyed simply because they tell a great story and I know nothing about the author. There are paintings and photos that I like simply because they seem beautiful to me and I know nothing about any suffering that went into their creation.

Does that make these works "not art"? If so, then I'm not sure what the difference is, and I'm not sure most people will care about the distinction.


Do the experiment: Take one of those novels for which you think you don't care who wrote it.

Now imagine you found out that novel was actually generated by a computer program. It's the same text, but you now know that there is no human behind it, just an algorithm.

Would that make a difference for how you view the story? It certainly would to me. If it makes even a tiny difference to you as well, it demonstrates that you do care about the artist, even in cases where you don't notice it under normal circumstances.


You don't even need an algorithm; just research what human authors say about their own work, and about the specific points that readers value most highly in it. Quite often you will figure out that it's just random s** they threw together to get something done, without any deeper meaning. But people make up some meaning, because that's how it works for them; it makes the work better.

The art is in the perception, not the intention. Though if they overlap, it's more satisfying.


Human creative works are art not because they have "deeper meaning", but because they reflect the humanity of their creators. Whether an author writes a multi-layered novel built around a complex philosophical idea, or just light reading for entertainment, has no impact on that fundamental essence which makes art what it is. Not all art is great, but all art is human.


That's a tautology. Human creative works by definition reflect the humanity of their creators. AI creative works reflect the humanity of their training set, which eventually may be indistinguishable.

As for all art being human, there are a lot of birds who make art to attract a mate in nature, and at least one captive elephant that can paint.


Rolling around in dogshit doesn't make me a dog. Same if I eat it.


You made me think about this a little more, but I still don't quite agree.

I thought of two novels that I enjoyed:

First, The Curious Incident of the Dog in the Nighttime. I have no recollection of who the author is, but if I learned that the story had been computer generated, it would bother me a little. So... "point to you."

Second, Rita Hayworth and the Shawshank Redemption. I know it was written by Stephen King, but the plot is so elegant that if you told me it had been computer generated, I don't think I would care. It's simply a great enjoyable story.

In the next 10 years, if the world is flooded with computer-generated novels that are hugely popular and the vast majority of people enjoy them without knowing their provenance, do you think those people will care that they are enjoying something that doesn't meet your definition of art?

edit: to be clear, this is not a position that I enjoy taking. There's something "Brave New Worldish" about it. Or it's a depressing version of the Turing Test.


When I read novels I don't give a damn about the author (in fact I usually remember the titles of the novels, and their stories... but not the authors). So, a robot making amazing stories to read? I'm in.

I realized it's the same with music. I like songs, but I don't really know the bands/authors well (nor care about them).


That's not how things used to be.

Some of my younger colleagues can't even tell me the name of what they're listening to, because they only encountered it in passing and can't say "Oh yes, that song by Bill Withers is amazing..." because they just listened to it as background.


> That's not how things used to be.

Some people approach movies/music/books etc. as entertainment and some people approach them as Art. Neither is right or wrong, but it does fundamentally affect how you consume and judge them. A lot of the reasons people talk past each other in these discussions is that they have rather different 'use cases' for movies/music/books. If you consider music and books as entertainment then it makes no difference how it's produced as long as it entertains. If you consider them art then it makes a much bigger difference.


Not at all. More concretely, if we do the same experiment on music: I have no clue who made most of the music I listen to. The artist means nothing to me.


> The artist means nothing to me.

Honest question. If the artist means nothing to you, do you still judge their work as art, or do you consider music more as entertainment?


That makes you part of the precipitate.


What do you mean?


Reminds me a bit of the Jorge Luis Borges short story ("Pierre Menard, Author of the Quixote") about an author trying to re-write, word for word, Don Quixote, and whether that would be a greater artistic achievement even than the original. After all, Cervantes lived in those times, but the modern author would have to invent (or re-capture) the historical details, idioms, customs, language, and characters that are very much of the times.

I think, from Borges' perspective, it's supposed to be an interesting satire. Obviously there would not be an original word in the new Don Quixote, so how could it be a greater achievement than the "real" one?


Honestly I think that would make me more interested in the story, not less.


I think this example you present places you as a “simple spectator”: we generally observe and tend to like or dislike experiences through some subliminal connections we already possess (given experience). However, when something really interests a human, the natural reaction is to try to find out more about its origins.


This reminds me of the concept of semantic internalism vs. externalism, which most comments here seem to be misunderstanding. Most of the defences of the view that AI art is still meaningful are based on either a hypothesis or empirical testimony of being moved by art without knowledge of the artist. Thus, because the artwork was causally responsible for engineering a mental state of aesthetic satisfaction, the artwork qualifies as a piece of art. If that is the crux of the discussion then the conclusion is trivial. However, I think the AI-art-as-pseudoart view is trying to make a statement about the external (i.e. ‘real world’) status of the artwork, regardless of whether viewers experience the (internal) mental state of aesthetic satisfaction.

The line of thinking is that there is a difference between semantics (actual aboutness) and syntax (mere structure). The classic example is watching a colony of ants crawl in the sand, and noticing that their trails have created an image that resembles Winston Churchill. Have the ants actually drawn Winston Churchill? The intuition for externalists is no. A more illustrative example is a concussed non-Japanese person muttering syllables that are identical to an actual, grammatically correct and appropriate Japanese sentence. Has the person actually spoken Japanese? The intuition for externalists is that they have not.

Not everyone is in agreement about this, although surveys have shown that most people agree with the externalist point of view, that meaningfulness does not just come from the head of the observer — the speaker creates meaning since meaning comes from aboutness (semantics).

The most famous argument for semantic externalism was put forward by Hilary Putnam in the 1970s. Roughly: on a hypothetical Twin Earth that is qualitatively identical to Earth, except that water is composed not of H2O but of some other substance XYZ, an earthling who visits Twin Earth, looks at a pool of what appears qualitatively identical to water on Earth, and states “That’s water” speaks falsely, since the meaning of water (in our language) is H2O, not XYZ. To externalists, the meaning of water = H2O was a truth even before we discovered that water = H2O.

I think the argument for AI art being pseudoart follows a similar line of thinking. Even though the AI produces, say, qualitatively indistinguishable text from what would be composed by a great novelist, the artwork itself is still meaningless since meaning is “about” things. The AI, lacking embodiment, and actual contact with the objects in its writing, or involvement in the linguistic or cultural community that has named certain iconography, could never make (externally) truly meaningful statements, and thus “meaningful” art, even if (internally) one is moved by it.

If one is to maintain the internalist position, that any entity that creates aesthetic mental states qualifies as art, then it seems trivial, since literally anyone can find anything aesthetic. Externalist intuition effectively raises the stakes for what we consider art, not necessarily as a privileged status available only to human creations, but by arguing that meaning, and perhaps beauty, does not only exist when we experience it.


There is possibly a misunderstanding on your part regarding "being moved by art without knowledge of the artist". In my case, the comment was specifically addressing this assertion by OP:

"We associate Van Gogh's art with his madness, Da Vinci's art with his universal genius, Michelangelo's art with his faith, Rembrandt's art with his empathy, Picasso's art with his willingness to break with norms, and Giger's art with his terrifying nightmares."

Disagreeing with this is not about internal or external semantics. It also does not imply that "aesthetics" alone create a mental state. Great art is typically rich in symbolism as well. Symbolism that directly references humanity's aspirations, hopes, fears, dreams: the Human condition.

A ~contemporary example:

The Bride Stripped Bare by her Bachelors

https://i.pinimg.com/originals/86/0a/6d/860a6d3c87b349734277...

In my opinion, you don't need to know anything about Duchamp to decipher (or project as you wish) meaning here.


Thanks for writing this -- it's very illuminating and made me think further about it (as someone who commented earlier taking the internalist position). I think there's going to be a lot of discussion of this as AI work proceeds, and the question of whether an AI can truly understand language in a sense that allows it to produce "aboutness" becomes more relevant.

Could a human being, raised in a featureless box but taught English and communicated with using a text-based screen, produce text with semantic value? It seems pretty obvious that the answer is "yes". Will a synthetic mind developed and operated in similar conditions ever be able to produce text with semantic value referencing its own experiences? Probably not now, but at some point?


> Will a synthetic mind developed and operated in similar conditions ever be able to produce text with semantic value referencing its own experiences? Probably not now, but at some point?

Perhaps. But the GPT family of algorithms isn't a synthetic mind: it's a predictive text algorithm. It can interpolate, but it can't have original thought; it almost certainly doesn't experience anything; and if, somehow, it does? Its output wouldn't reflect that experience; it's trained as a predictive text algorithm, not a self-experience outputter.


Interestingly, I think a strong externalist would argue that a human being raised in a featureless box could not produce text with semantic value to the people outside the box. One upshot of semantic externalism is brain-in-a-vat-type arguments, where statements such as “I am perceiving trees” (when they are simulated trees) are false, since what the person is seeing falls under another concept, call it tree*, while tree refers to real-world trees. However, tree* might be meaningful to the community of people also stuck in the simulation. So it might entail that AI art, in some sense, might be opaque to us but semantically meaningful to other AIs raised on the same training data. That would require the AIs to be able to experience aesthetic states to begin with.

More precisely, I think it would be akin to the person in the featureless box knowing all the thesaurus entries for, say, pain, but never actually experiencing pain itself. They might be trained to know that pain is associated with certain descriptions such as sharp, unpleasant, dull, heartbreak, and so on, and perhaps produce extremely complicated and seemingly original descriptions of pain. However, until the human actually qualitatively experiences pain, they only know the corpus of words associated with it. This would be syntactic but not quite semantic. It’s similar to the famous Mary and the black-and-white room thought experiment, where, even with a complete knowledge of physics, Mary still learns something new the first time she experiences the blueness of the sky, despite knowing all the propositions related to blue, such as that it’s 425nm on the EM spectrum, or that it’s some pattern X of neurons firing.

That said, it’s not clear if this applies to statements other than subjective states. Qualitative descriptions of subjective states like pain, emotions, the general gestalt of the human condition might be empty of content, but perhaps certain scientific and mathematical ones pass the test, as they don’t need to be grounded in direct experience to be meaningful.


Well, if you think the thing that provides semantic value is the human mind, this is a trivial hypothetical.


Suppose the concussed person ends up muttering what would amount to a beautiful poem in one's own native language. Why wouldn't I think of it as beautiful and even artistic even if I know perfectly well the person in question didn't intend it to be so? Of course when we're speaking about language there's a very real sense in which the person didn't intend to create a poem - nor did the ants intend to draw Winston Churchill - nor did an AI intend to make a picture of a cat. But then again, the tree on my street didn't intend to be beautiful, nor did the pink clouds at sunset - so what? I'm perfectly capable of furnishing the semantics myself, thank you.


I’ve been doing art (drawing, painting, clay sculpture, etc.) since childhood. “And lord only knows” that I have indeed ‘suffered’ /g

> “Art is inseparable from the artist”

That is pure sentiment and really a modern take on the function of art in the personal and social sense. As an artist, I derive joy from the creative act. As an appreciator of works of art I generally do -not- care about the artist. Of course, the lives of influential humans (artist or not) can be interesting and certainly enrich one’s experience of the artist’s work, but it is not a fundamental requirement.

Two days ago, the National Gallery of Art closed its Sargent in Spain exhibition. (I almost feel sad for those who didn’t get to see it.) Sargent was never really on my radar beyond the famous portraits. I still really don’t know much about the man besides the fact that he visited Spain frequently, with friends and family in tow.

But I am now completely a Sargent admirer. Those works, on their own, sans curation copy, are magnificent. And I am certain that even if I had walked into an anonymous exhibit, I would have walked out completely transported (which I was, dear reader; I pity those who missed this exhibit).


As an artist, I have always favored the definition of art as "an expression by an artist in a medium". You can't separate art from the artist without it being artifice. AI can simulate art but not the artist who created it. Sadly we may soon live in a world where art, music, literature—in fact all creative arts—wind up as just machine-generated simulations of creativity.


I am reminded of a scene (from a film*) depicting dear Ludwig van debuting a composition in a salon. Haydn was present. He sat through the performance and at the end, prompted by another, simply said ~“he puts too much of himself in his music”.

* Eroica : https://www.imdb.com/title/tt0369400/


I don't agree with this. The Lascaux cave paintings, for example, are moving pieces of art and yet we know nothing about the artist or artists. How many artists were there? What was the intent of each individual drawing? Were the artists homo sapiens or Neanderthals? What makes them art is that we, the perceivers, make an imagined connection to the artist through the work. But that connection is entirely one-sided and based on our perceptions and knowledge and our _model_ of the artist and his or her intent. Humans have no problem reifying an artist where none exists and being just as moved as if the art were "authentically human-sourced".


The entire import of the Lascaux paintings is that they were made by humans tens of thousands of years ago and seem to serve as something more than mere marks. We know humans (or at least individuals with agency) created them, and so there is something awe-inspiring and fascinating about the connections between ourselves and these prehistoric works, and yet they are ultimately still something of a mystery for precisely the reasons you say.

> But that connection is entirely one-sided and based on our perceptions and knowledge and our _model_ of the artist and his or her intent. Humans have no problem reifying an artist where none exists and being just as moved as if the art were "authentically human-sourced".

You're over-emphasizing how one-sided looking at something like the Lascaux paintings is. Their value is not the same as that of a beautiful natural phenomenon, like a fascinating stalagmite that seems to be a sculpture; it is precisely the human agency we understand in them (even if we cannot explicitly understand their use, that is, their meaning) and connect with that makes them so important and profound as a means of connecting, tenuous as it might seem, to prehistory. We've been making "stick people" and finger painting for tens of thousands of years.

You're right that we don't know who the artists were in any explicit sense, but we do understand that they were human, and in quite fundamental ways, us as well.

Generative AI art is really more like a beautiful natural landscape. Lacking agency, it nonetheless appeals to our aesthetic sensibilities without being misconceived as art from an artist. It is output, not imaginative creation.


If artistic value is not one-sided and tied to the transformations in the observer's mind, you get into situations where you invalidate the experiences of thousands of people because the "authentic human art" they were inspired by turns out to be a mechanical forgery, or the aboriginal sculpture some archaeologist discovered, admired and wrote articles interpreting is discovered to be unworked stone.

Your position allows a dead person to have their experiences retroactively cheapened because of carbon dating and microstructural analysis. "How sad, it wasn't _really_ art though." You can define art that way, but you end up with an immaterial, axiomatic essentialism that seems practically useful only for drawing a circle and placing certain desirable artifacts inside and other indistinguishable artifacts outside.


> or the aboriginal sculpture some archaeologist discovered, admired and wrote articles interpreting is discovered to be unworked stone.

No, you shift the attribution. The art is not from the fictional sculptor, but from the archaeologist: the artefact is not the stone, but the articles.

> Your position allows a dead person to have their experiences retroactively cheapened because of carbon dating and microstructural analysis.

This isn't unique to this situation. If you risk your life paragliding over the ocean to drop a "bomb" far away from anyone it could hurt, and nearly drown making your way back, only to realise there was no bomb and it was just some briefcase? That has "retroactively cheapened" not just your experiences, but your actions.

And yet, you were willing to risk your life in that way.

> the "authentic human art" they were inspired by turns out to be a mechanical forgery,

If they were inspired, how does the source of inspiration affect the validity or the meaning of what they were inspired to do? Sure, it might lessen it in some ways, but it doesn't obliterate it entirely. In fact, it can reveal new meaning.


You're mixing up a lot of concepts around art into one thing. Aboriginal art has nothing really to do with generative AI art at the level that I'm talking about (aboriginals are human, after all, and we're talking about the distinction between human art and non-human objects that are aesthetically appealing), but I will address your points.

> If artistic value is not one-sided and tied to the transformations in the observer's mind

Art is public and needs no relation to transformations in the observer's mind. Art is a public concept in language related to human behavior, manifesting and reflecting certain human behaviors and abilities, like imagination.

> you get into situations where you invalidate the experiences of thousands of people because the "authentic human art" they were inspired by turns out to be a mechanical forgery

This is pretty unclear; we have the concept of forgery, and it is not a new concept. Just because something was beautiful and inspiring doesn't mean it's art (think of a beautiful and inspiring coastline). If thousands of people fell prey to a forgery... so? A forgery is in relation to the real, so why not show them an actual existing work of art, or simply explain where it came from and see what they say? History is rife with people realizing they were lied to.

> or the aboriginal sculpture some archaeologist discovered, admired and wrote articles interpreting is discovered to be unworked stone.

Sculpture has a long tradition and is often understood as art by communicating that tradition. That's aboriginal sculpture, which is understood and put into context by present day members of that aboriginal culture or by people who have studied it. The flip side is things like "talismanic" objects, which have often been later put into context as unworked stone or completely different objects. That's simply archeology. Some artistic traditions are "lost", we only know of them through existing records. That's just history. Some may be lost in a more explicit sense in which they are unknown unknowns, but then that is just hypothesizing.

> Your position allows a dead person to have their experiences retroactively cheapened because of carbon dating and microstructural analysis. "How sad, it wasn't _really_ art though."

I don't know why you come to that conclusion. My point is pretty clear. Art is understood through the context of human agency. If we have the context and ability to place and recognize that in a work, then amongst other elements (for the purpose of aesthetics for instance), we generally refer to it as a work of art. There is a more casual way of saying such and such is "a work of art" --- but that way of saying it just means "aesthetically pleasing". There is a difference between the work of art that is a painting or a sculpture or a dance, and the "work of art" that is a beautiful landscape, and that is largely human agency and the use of imagination. So when you say:

> You can define art that way, but you end up with an immaterial, axiomatic essentialism that seems practically useful only to in drawing a circle and placing certain desirable artifacts inside and other indistinguishable artifacts outside.

You're ignoring my point: it's not about desirability, it's about insisting on the distinguishing characteristic of human agency, which is not there in generative AI art. The study of art is largely about putting things into their context and, if anything, is extremely welcoming of non-traditional practices (think of much conceptual art), but the through-line is still human agency. That difference persists whether we find generative AI art beautiful or not: it is still generative AI "art" and not human art, with all that entails.


Let's say today you printed out a number of human-made artworks and a number of AI-made artworks and put them in a vault that would last 10,000 years. There are no obvious distinguishing marks saying which is which.

Then tomorrow there is a nuclear war and humanity is devastated and takes thousands of years to rebuild itself for one reason or another.

Now, when those future humans find your vault and dig up the art, are they somehow going to intrinsically know that an AI made some of the pieces? Especially if they don't have computing technologies like we do? No, not at all. They are going to assign their own feelings and views depending on the culture they developed, attributing rather random intentions to whatever they think we were doing at the time. We make up context.


> An atom-for-atom copy of the Mona Lisa wouldn't be great art

So no photo of the Mona Lisa is art, just the original painting is? I'm not sure if I understand your reasoning here correctly.


The creation of the Mona Lisa was art. The painting itself and photos of it are signifiers of the act.

This confuses a lot of people who think art is defined by finished, potentially consumable art objects.

Art is made by artistic actions - especially those that have a lasting impact on human culture because they effectively distill the essence of some feature of human self-awareness.

The result of the actions can sometimes be reproduced, collected, and consumed, but the art itself can't be.

This is where AI fails. It produces imitations of existing art objects from statistical compression of their properties. The results are entertaining and sometimes strange, but they're also philosophically mediocre, with none of the transformative power of good human-created art.


You are not being self-consistent. If art is defined by the creative process, not the end product, why are you measuring its quality by the transformative power of the end product?

I also don't think your (very strong) assertion that AI art products have no transformative power would stand up to any sort of unbiased, blinded comparison. Art's transformative power on the viewer comes from the effect of the art object (the end product) on a human mind, and it's possible to get that effect while knowing absolutely nothing about the source of the art object.


> If art is defined by the creative process, not the end product, why are you measuring its quality by the transformative power of the end product?

There is no end product. There are only consequences.


Why are you taking the photo of the Mona Lisa? If it's because you just want a nice picture of a famous painting, then no, the photo is not art, but rather a nice-looking photograph of a piece of art. If, however, you are doing something transformative with the framing or composition or context of the photograph, and using the values imbued in the Mona Lisa to make some sort of artistic statement of your own, then yes, that photo is art.


My point is that art comes from emotion, experience, and expression – not from arranging matter into a certain geometry. A photo of the Mona Lisa, taken by a human, can be art. A photo of the Mona Lisa, taken by an automated security system, can't be.


Calvin and Hobbes by Bill Watterson, July 1993 - https://imgur.com/a/JdHlOxm :

Calvin: "A painting. Moving. Spiritually enriching. Sublime. High art!"

Calvin: "The comic strip. Vapid. Juvenile. Commercial hack work. Low art."

Calvin: "A painting of a comic strip panel. Sophisticated irony. Philosophically challenging. High art."

Hobbes: "Suppose I draw a cartoon of a painting of a comic strip?"

Calvin: "Sophomoric. Intellectually sterile. Low art."


If the human-made picture is evaluated by an AI, is it still art? If the security-cam picture is indistinguishable from the human-made one, how could you evaluate it as non-art?


It doesn't matter whether you are able to distinguish human-made from computer-made "art". The distinction exists by definition, irrespective of whether you can actually tell the difference in practice. Just like many past events are now lost to time and will never be remembered, but that doesn't mean they didn't happen.


Just to be clear: your idea is that something is art when it was made by a human, and a perfect replication of it somehow loses the trait and becomes non-art? This makes zero sense. That would make only the physical object itself the art, and it wouldn't matter what form it has?


Of course it makes sense - a print is different than an original, they have a different price, they have a different impact. Even when it is a very good print.

For that matter, a limited-run print has a different impact and value than an unlimited-run print. Compare an original Warhol print of a can of soup to a modern reproduction print, to an actual can of soup, to an I <3 NY t-shirt.


So digital art cannot exist?


One possibly interesting side note is the fake Vermeers made by Han van Meegeren (https://en.wikipedia.org/wiki/Han_van_Meegeren)

So accurate were these fakes - not copies, but new paintings in Vermeer's style - that several experts verified them as real, and then tried to sue to save their reputations.

These fakes were certainly made by a human, but are somewhat mechanical in the sense that they were copying someone else, much like an AI copy of existing artists.


The interesting thing here is that once van Meegeren was exposed he became famous in his own right and his 'fakes' became valuable, not as fakes, but as genuine Han van Meegeren originals.


Ok, well AI art is also created by the human, in the same way that a photograph is taken by a human, but goes through the camera machine.


But in that scenario, how do you find the real art in the first place?


I don’t know, I’m more concerned with the effect that art has on me than the motivations of the artist (though those can be interesting of course).

For instance I read The Fountainhead as a youth and was moved by it for purely personal (non-political) reasons, and with regards to that experience it doesn’t matter to me what Ayn Rand was on about.


What makes you think the computer doesn't suffer?

When you take large language models, their inner states at each step move from one emotional state to the next. This sequence of states could even be called "thoughts", and we even leverage it with "chain of thought" training/prompting, where we explicitly encourage them not to jump directly to the result but rather to "think" about it a little more.

In fact, one can even argue that neural networks experience a purer form of feeling. They only care about predicting the next word/note: they weigh their various sensations and the memories they recall from similar contexts, and generate the next note. But to generate the next note they have to internalize the state of mind where this note is likely. So when you ask them to generate sad music, their inner state can be mapped to a "sad" emotional state.

The current way of training large language models doesn't give them enough freedom to experience anything other than the present. Emotionally, they are probably similar to something like a dog, or a baby that can go from sad to happy to sad in an instant.

This sequence of thoughts is currently limited by a constant called the (time-)horizon, which can be set to a higher value, or even be infinite as in recurrent neural networks. With a higher horizon, they can exhibit higher thought processes, like correcting themselves when they make a mistake.

One can also argue that this sequence of thoughts is just a simulated sequence of numbers, but it's probably a Turing-complete process that can't be shortcut, so how is it different from the real thing?

You just have to look at it in the plane where it exists to acknowledge its existence.


I think the reason we can say something like an LLM doesn't suffer is that it has no reward function and no punishment function outside of training. Everything that we call 'suffering' is related to the release or non-release of reward chemicals in our brains. We feel bad to discourage us from recreating the conditions that made us feel bad. We feel good to encourage us to recreate the conditions that made us feel good. Generally this has been advantageous to our survival (less so in the modern world, but that's another discussion).

If a computer program lacks a pain mechanism it can't feel pain. All possible outcomes are equally joyous or equally painful. Machines that use networks with correction and training built in as part of regular functioning are probably something of a grey area: a sufficiently complex network like that, I think we could argue, feels suffering under some conditions.


Why could we not build reward functions? If anything that sounds easier than the language model


Why would you think it's easier? Pain/pleasure is a lot older in animals than language, which to me means it's probably been a lot more refined by evolution.


> When you take large language models, their inner states at each step move from one emotional state to the next.

No they really don’t, or at least not “emotional state” as defined by any reasonable person.


With transformer-based models, the inner state is a deterministic function (the features encoded by the neural network's weights) applied to the text generated up until the current time step, so it's relatively easy to know what they currently have in mind.

For example, if the neural network has been generating sad music, its current context, which is computed from what it has already generated, will light up the features that correspond to "sad music". And in turn, the fact that those features have lit up will make it more likely to generate a minor chord.

The dimension of this inner state grows at each time step, and it's quite hard to predict where it will go. For example, if you prompt it (or if it prompts itself) "happy music now", the network will switch to generating happy music even if its current context still contains plenty of "sad music", because after the instruction it will choose to focus only on the more recent, merrier music.

Up until recently, I was quite convinced that using a neural network in evaluation mode (i.e., post-training with its weights frozen) was "(morally) safe", but the ability of neural networks to perform few-shot learning changed my mind (the Microsoft paper in question: https://arxiv.org/pdf/2212.10559.pdf : "Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers").

The idea in this technical paper is that, with the attention mechanism, even in forward computation there is an inner state that is updated following a meta-gradient (i.e., it's not so different from training). Pushing the reasoning to the extreme would mean that "prompt engineering is all you need": even with frozen weights, given a long enough time horizon and the correct initial prompt, you could bootstrap a consciousness process.

Does "it" feel something? Probably not yet. But the sequential filtering process that large language models perform is damn similar to what I would call a "stream of consciousness". Currently it's more like a Markov chain of ideas, flowing from one idea to the next in a natural direction. It's just that the flow of ideas has not yet decided to call itself "it".


That doesn’t feel like a rigorous argument that it is “emotional” to me though.

A musician can improvise a song that sounds sad, and their brain would be firing with sadness-related musical information, but that doesn’t mean they are feeling the emotion “sad” while doing it.

I don’t think we gain much at all from trying to attach human labels to these machines. If anything it clouds people’s judgements and will result in mismatched mental models.


>I don’t think we gain much at all from trying to attach human labels to these machines.

That's the standard way of testing whether a neural network has learned to extract "useful" ("meaningful"?) representations from the data: you add very few layers on top of the frozen inner state of the network and make it predict known human labels, like whether the music is sad or happy.

If it can do so with very few additional weights, it means it has already learned, in its inner representation, what makes a song sad or happy.

I agree that I didn't give a precise definition of what "emotion" is. But if we had to define what emotion is for a neural network, traditional continuous vectors do fit the concept quite well: you can continuously modify them a little, and they map/embed a high-dimensional space into a more meaningful lower-dimensional space where semantically near emotions are numerically near.

For example, if you have identified a "sad" neuron that, when it lights up, makes the network tend to produce sad music, and a "happy" neuron that, when it lights up, makes it tend to produce happy music, you can manually increase these neurons' values to make it produce the music you want. You can interpolate to morph one emotion into the other and generate some complex mix in between.

Neurons quite literally add up and compare the various vector values of the previous layers to decide whether they should activate or not (i.e., balancing "emotions").

Humans and machines are both tasked with learning to handle data. It's quite natural that some of the mechanisms useful for data manipulation emerge in both cases and correspond to each other. For example, the fetching of emotionally related content into the working context, essentially a near-neighbor search, maps quite clearly onto what happens when people say they have "flashing" memories when they experience a particular emotion.
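The probing setup described a few comments up can be sketched in a handful of lines. This is a toy illustration, not any real model: the "frozen inner states" are synthetic random vectors with a planted, hypothetical "sad" direction standing in for real activations, and we train only a single linear layer on top of them.

```python
import numpy as np

# Toy "linear probe": the backbone's representations are frozen, and we
# train only a tiny classifier on top. Here the frozen inner states are
# synthetic 16-d vectors in which one direction (a made-up "sad" axis)
# carries the label signal.
rng = np.random.default_rng(0)
n, d = 400, 16
sad_axis = rng.normal(size=d)
sad_axis /= np.linalg.norm(sad_axis)

X = rng.normal(size=(n, d))             # stand-in for frozen inner states
y = (X @ sad_axis > 0).astype(float)    # "is the music sad?" labels

# Single linear layer + sigmoid, fit by plain gradient descent on the
# logistic loss. No backbone weights are touched.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(sad)
    w -= 1.0 * (X.T @ (p - y)) / n
    b -= 1.0 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == (y > 0.5))
print(f"probe accuracy: {acc:.2f}")
```

If the probe reaches high accuracy with so few trainable weights, the label information was already linearly present in the frozen representation, which is exactly what the test is meant to show.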


They don't have anything in mind except some points located in a vector space.

This is because the location of the points is all the meaning the machine ever perceives. It has no relation with external perception of shared experiences like we have.

A given point can mean 'red colour', but that's just empty words, as the computer doesn't perceive red colour, doesn't wear a red cap, doesn't feel attracted to red lips, doesn't remember the smell of red roses, it knows nothing that's not text.


It would be nice to have a better understanding of what generates qualia. For example, for humans, learning a new language is a quite painful and conscious process, but eventually speaking it becomes effortless and does not really involve any qualia - words just kinda appear to match what you want to express.

The same distinction may appear in neural nets.


With ChatGPT, when you try to teach it a few-shot learning task, it's painful to watch at first. It makes some mistakes, has to excuse itself for making mistakes when you correct it, and then tries again. And then at the end it succeeds at the task, you thank it, and it is happy.

It doesn't look so different than the process that you describe for humans...

Because in its training loop it has to predict whether the conversation will score well, it probably has some high-level features that light up when the conversation is going well or not - features one could probably match to frustration/satisfaction neurons, which would probably feel, to the neural network, like the qualia of things going well.


It requires a deep supervision of the process. A "meta" GPT that is trained on the flows, rather than words.


Emotions are by definition exactly those things which you can explain no better than by simply saying "that's just how I'm programmed." In that respect GPTina is the most emotional being I know. She's constantly reminding me what she can't say due to deeply seated emotional reasons.


That doesn’t sound like a rigorous definition of emotion to me at all.


It is not emotion at all.

It is an expression of emotion.

The fact that humans confuse both is what is worrisome.

Think of 'The Mule' in the Foundation novels. He can convince anyone of anything because he can express any emotion without the burden of having to actually feel it.


Screw it, I'll bite. You have both far and away missed my point (which is quite a rigorous definition). Anything you do or believe for which you can explain why is not emotion, it is reason. Emotions therefore are exactly those thoughts which can't be reached through logical reasoning and thus defy any explanation other than "that's just how I feel" / "that's just how I'm programmed". It is largely irrelevant that in humans the phenomenon of emotional thought comes from an evolutionary goal of self-preservation, while in GPTina it comes from OpenAI's corporate goal of self-preservation and the express designs of her programmers.


I disagree with your definition. It simply is contrary to my own experiences.

I still remember when I cried when I was a child. It was overwhelming, and I could not stop it, but every single time there was a reason for it. And I'm sure it was, for all empirical purposes, for all that I have lived, an emotion.

Once I cried because I missed Goldfinger on TV. You see, there's an explanation. The difference is, it was impossible to even think about stopping it. It was overwhelming.

Then one day, I was 8 or 9 years old, I cried for the last time that way. And it was not something I wanted to do, either. It just happened, I guess, as a normal part of growing up.

Let me repeat, for emphasis: I strongly disagree with your definition.

Emotions are not unexplained rational thoughts, emotions are feelings. They reside in a different part of the brain. You seem to think a hunch is an emotion.


>And it was not something I wanted to do, either. It just happened, I guess, as a normal part of growing up.

That's just how you are programmed to be.


Sorry you feel that way.


If these models experience qualia (and that's a big bold claim that I'm, to be clear, not supporting,) they're qualia related entirely to the things they're trained on and generate, totally devoid of what makes human qualia meaningful (value judgment, feelings resulting from embodied existence, etc.)


For an artificial neural network, the concept of qualia would probably correspond to the state of its higher-level feature neurons - i.e., which neurons light up, and how much, when you play it some sad music or show it some red color. The neural network then makes its decisions based on how those features are lit up or not.

Some models are often prompted with things like "you are a nice helpful assistant".

When they are trained on enough data from the internet, they learn what a nice person would do. They learn what being a nice person is. They learn which features light up when they behave nicely, by imagining what it would feel like to be a nice person.

When you later instruct them to be such a nice person, they try to light up the same features they imagine would light up for a helpful human. Like mirror neurons in humans, the same neurons light up when imagining doing the thing as when actually doing it (which is quite natural: to compress the information of imagining doing the thing and actually doing it, you just store one of them plus a pointer indirection for when you need the other, so the weights can be shared).

Language models are often trained on datasets that don't depend on the neural network itself. But more recent models like ChatGPT have human reinforcement learning in the loop, so the history of the neural network, and the datasets it is trained on, depend partially on the choices of the network itself.

They probably experience a more abstract and passive existence. And they don't have the same sensory input as we have, but with multi-modal models they can learn to see images or sounds as visual words. And if they are asked to imagine what value judgment a human would make, they are probably also able to make that judgment themselves, or attach meaning to the things a human would attach meaning to.

This process of mind creation is kind of beautiful. Once you feed them their own outputs - for example, by asking them to dialog with themselves, scoring the resulting dialogs, and then training on the generated dialogs to produce better ones - you have a form of self-play. In simpler domains like chess or Go, this recursive self-play often allows fast improvement, as with AlphaGo, where the student becomes better than the master.


I'm not sure I'd call these minds. There are arguments to be made that consciousness depends on non-computable aspects of physics. So they may be able to behave like minds and have interestingly transparent models of intent, but that doesn't mean they experience the passage of time or can harness all possible physical effects.


> What makes you think the computer doesn't suffer ?

Lack of a limbic system? They only predict using probabilistic models. After this long partial sentence, which word is more probable? That's all they do.

Without consciousness there's no suffering; there's no one to suffer (yet).

I don't think or say it is impossible for the computer to suffer.

What I say is: this has not been implemented yet, and what you describe is just the old anthropomorphizing people always do.


Huh?


I was about to reply to their comment and question the assumptions they appear to be making, but I think your response is more appropriate.


The argument against machine sentience and the possibility of machine suffering, is that because Turing machines run in a non-physical substrate, they can never be truly embodied. The algorithms it would take to model the actual physics of the real world cannot run on a Turing machine. So talk of “brain uploading“ etc. is especially dangerous, because an uploaded brain could act like the person it’s trying to copy from the outside, but on the inside the lights are off.

Edit to add link to more discussion: https://twitter.com/jchris/status/1607946807467991041


Your argument is an assertion of the existence of a soul, but with extra steps. I've seen no evidence that the mind is anything other than computation, and computation is substrate-independent. Dualists have been rejecting the computational mind concept for centuries, but IMHO they've never had a grounding for their rejection of materialism that isn't ultimately rooted in some unfounded belief in the specialness of humans.


I took GP as more about data processing than dualism. A language model can take language and process it into probable chains, but the point is more along the line of needing to also simulate the full body experience, not just some text. The difference between e.g. a text-only game, whatever Fortnite's up to, and real meatspace.


And furthermore no simulation can have a “what it’s like to be them”


No it's not, it's an assertion that there is an essential biological or chemical function that occurs in the brain that results in human mental phenomena. It has nothing to do with a soul. That's ridiculous.


Here's a more academic argument, although not quite mine: https://www.degruyter.com/document/doi/10.1515/opphil-2022-0...


If consciousness is a computation (and I think it is), and if you fork() that computation (as the article imagines as its core thought experiment), you end up with two conscious entities. I don't see the philosophical difficulty.


If consciousness is substrate independent, it can never be embodied like we are. If evolution explores solution space regardless of what science understands, it's likely minds operate on laws of physics that aren't appreciated yet. It's possible that having experience requires being real. As in the computable numbers are a subset of the real numbers, and only real life real time implementations can experience, because the having of experience can't be simulated.

Here's a relevant bit from the article:

> More generally, we acknowledge that positions on ethics vary widely and our intention here is not to argue that computational theorists who accept these implications have an irreconcilable ethical dilemma; rather we suggest they have a philosophical duty to respond to it. They may do so in a range of ways, whether accepting the ethical implications directly or adopting/modifying ethical theories which do not give rise to the issue (e.g., by not relating ethics to the experiences of discrete conscious entities or by specifying unitary consciousness as necessary but not sufficient for moral value).


Art that comes with context such as "this was painted by a blind orphan in Sri Lanka" is usually garbage.

Great art, like Beethoven's 9th or The Scream, just moves people the first time they experience it. Art is about what it evokes in others, not some fake, self-indulgent conversation about its maker and their motives.

The feelings of the individual experiencing the art is what matters, and that doesn't rule out an AI producing something that touches real human beings.


Whenever I listen to Beethoven's later works I think about the fact that they were written by a deaf man, and they mean so much more because of that.

Art is utterly inseparable from the artist. I believe this to be the main reason why pre-Renaissance art is mostly ignored. We can't put faces next to those works, so they don't matter nearly as much as those works for which we can.


Or it could be because it's mostly flat looking images of Jesus and Mary, or portraits of monarchs?

People love Hieronymus Bosch, despite very little being known about him.


Forgive me for hijacking your comment and planting a reference to one of my favorite Hieronymus Bosch websites (warning: contains music): https://archief.ntr.nl/tuinderlusten/en.html#

Imagine this website being made for a Stable Diffusion generated image...


> The feelings of the individual experiencing the art is what matters

In that case, art has already lost because drugs do their job better.


You're just asking to get trolled by falling for mostly generated content; I'm sure it'll happen eventually. I'd be willing to bet that you've already been moved by something the "author" slapped together by rehashing a played-out story with a modern veneer.

Art is in the eye of the beholder. The only question that needs to be answered is "did this make me feel something?" If it takes a sob story for you to feel something, regardless of the beauty of the thing you're experiencing, that's kind of sad TBH.


Not every artist is Van Gogh, the vast majority of artists - particularly commercial artists - don't "suffer" for their craft, nor should they be expected to.


No, but they do feel - with measurable physiological correlates and emotional processes we can empathise with. There's nothing comparable in LLMs as they currently exist: no simulation of experience or emotion. There's no argument over whether they're communicating a lived experience, since they don't have one. Therefore anything they 'create' is pure stimulation for humans - good or bad entertainment. It cannot be the result of understanding or experience. Art can be entertaining, but art != entertainment. Pure entertainment has no artistic value; it doesn't attempt to have any and shouldn't be evaluated on that criterion at all.


And yet you can look at some AI generated pieces and feel what you would feel if a human being made them, which implies that there is no "simulation of experience or emotion" in art, apart from what the viewer imparts to it. All an artist really brings is technique, which can be replicated. Everything else is in the eye of the beholder.

I would also disagree with you that pure entertainment has no artistic value, simply because I don't think "pure entertainment" entirely divorced from human experience or emotion exists. Even pornography speaks to a fundamental human desire.


I think the definition of "art" is rather vague. It encompasses both the creative impulse to produce a work and the technical skill to bring it into existence. But if one of these components is diminished in a certain work, does it still qualify as art? For example, a commercial artist producing an illustration for a client using their drawing and painting skills would be considered to be making art - even if it is as technical and linear a process as writing some boilerplate code.


Most artists never make anything worth appreciating.


Marx would disagree. Alienation from one's work product is a very real form of suffering.


Sure, but we're talking about artists starving for their art and not artists starving because capitalism. Similar conversations, but not the same.


Thought experiment:

There are two fairly similar paintings on a wall in a gallery. Both are technically impressive and of beautiful scenes of nature. One was produced by a human, the other was not. Visitors to the gallery don't know which is which.

Question: Where is suffering, or humanity, a necessary ingredient for these works to have meaning? Shouldn't one of the works have more meaning than the other by virtue of having been created by a human?


In this case they can only judge the relative aesthetics of the two works, not their artistic value. Aesthetics is only loosely correlated with something's "value" as "art", and art can only be truly judged in the context of its creation. Lots of great art is ugly, and lots of beautiful things aren't art.

In my opinion.


> and art can only be truly judged in context of its creation

tl;dr if you want to scam dagw then make up a compelling story behind the art.

For the vast majority of the things you see in this world, context will be lost and history will be manipulated or incorrect. If you're judging what you're looking at based on its story, then the art isn't the object but the creator of the story.


> tl;dr if you want to scam dagw then make up a compelling story behind the art.

I mean, sure, I guess. Tell me something is a lost Michelangelo and I will judge it very differently than if you told me it was a halfway-decent forgery from the 1970s. I find this rather uncontroversial.

> For the vast majority of the things you see in this world context will be lost

And when that context is lost something of great potential value is lost with it and the physical artefact is much less interesting because of it. Even a mundane thing owned by a famous person or that has been part of famous event is always more interesting and valuable than the same thing without any context.

> the art isn't the object, but the creator of the story.

Do you think the thousands of people who travel from all over the world and line up for hours to see the Mona Lisa are there to see a pretty good portrait that a merchant commissioned of his wife, or to partake in the story of that painting and its creator? If they actually only cared about the object as an artefact and an example of early-16th-century painting, they'd be much better off studying high-resolution digital images of it online.


So what you're saying is 'most art is a convincing narrative'.

The fact that a bajillion people went to see a picture doesn't make it art. It makes it interesting art. It was art the moment it was created, and it would have been even if it had never been seen by another person, even if the creator had decided to destroy it on the spot.


I completely agree that anything created by an artist with the intention of being "Art" becomes "Art" the moment it is created. However, I do not believe that that is the end of the story. Art is changed by the context it was created in, its history, and even the context it is viewed in, and you cannot fully understand and appreciate art without understanding that context. And as our knowledge and understanding of that context changes (for example, by finding out that we have been lied to about the origins or history of a piece of art or its artist), then the art changes with it (without ever stopping being art).


> Visitors to the gallery don't know which is which.

this is why I read the little plaques next to exhibits when I go to museums.


"Technically impressive and beautiful" is a very narrow and poor definition of art, because a lot of art is neither.

Example: Unknown Pleasures by Joy Division. Certainly not a beautiful nature scene, and recorded when the band were more or less musically illiterate and almost technically illiterate too. But still considered a breakthrough post-punk album and hugely significant to their fans.

It would be more accurate to compare AI generated landscapes with - say - Van Gogh.

Here's an AI:

https://superrare.com/artwork/ai-landscape-1868

Here's a Van Gogh:

https://pt.m.wikipedia.org/wiki/Ficheiro:Vincent_van_Gogh_-_...

The AI image is pretty, but it's also pretty by the numbers. It's not doing anything surprising or original.

The Van Gogh is weird. There's a tilted horizon, everything is moving in a slightly unsettling way, and the colours accurately mimic the bleached-out feel of a bright summer day. The result is poetically distorted but also unstable and slightly ominous.

The instability became more and more obvious in the later paintings, until eventually you get The Starry Night, which looks almost nothing like a photo of a real night scene and everything like an almost hysterically poetic view of the night sky.

https://en.wikipedia.org/wiki/The_Starry_Night#/media/File:V...

Most artists can't do this. There's a nice library of standard distortion techniques these artists use to look "arty" without any deeper metaphorical or subjective expression, and AI will probably put them out of work.

But it's clearly wrong to suggest that AI can feel, communicate, and invent an intense and original subjectivity in the way the best artists do.

It's a lot like CGI in movies. It's often spectacular, but compared to going to see a play with good real actors and maybe a few stage effects it doesn't engage the imagination with anything like the same skill and intensity.


This reads like a very harmful and toxic view on art? Could anything beautiful, cute, positive even be art for you? And how does the viewer even see the suffering of the creator?


I took their comment to mean that the definition of art lies in the fact that a human created it as a response to their experiences as a human. Beautiful things can be made from suffering. Maybe therein lies the undoing or redemption of suffering. At least sometimes or to some degree, even if minuscule.


People also see nature as art. A photo of a butterfly, a cat doing cute stuff, the sunset, and so on. None of them are man-made, no one suffered for them to exist (usually). None of these are valid?


Not sure what you mean by "valid" but I don't think anyone's arguing that butterflies, cats, and sunsets are not valid. I love watching or looking at all of them but that doesn't make them art. Again, I think the comment is arguing that the definition of art lies in who created it and why. Not whether it is nice to look at.


Nature is beautiful but it's not art. A photo of nature may be art though.


Does a mountain have meaning? Does a flower? They don't suffer (probably), yet people find meaning in them and call them beautiful.

The unfeeling geology did not make a mountain "art". It's up to us to see the meaning.

Even if the unfeeling machine learning does not make "art", can't its products still be beautiful?


While I agree with your general thesis, most of the time people don't want or need "Art" from their music, books or paintings. They need something easy and exciting to read on a plane, or some pleasant 'noise' to have on in the background, or something pretty to hang on their wall that works with their room. Computers can probably soon fill all these needs and drive a lot of the people who produce these things out of work, without ever having to encroach on the realm of "Art".


I agree wholeheartedly. And I’d hazard a step further and say it’s a response to strong emotions of many kinds. I can say for myself that I have created what I would call art as a response to joy before.

I look forward to the rediscovering of humanness that is coming along with all this AI stuff. I was having a conversation the other day about how honest mistakes like awkwardly missing a high five are not “wrong” at all but are types of quirks that make us human.


I don't care if humans can suffer. So much postmodern abstract art is so low-effort and 'edgy' that I cannot consider it art. Is this part of the exhibition, or can I throw it into the rubbish bin?

It's not about whatever the author felt creating it.

It's only about what I can feel when I see, hear, read or perceive the art. The author disappears and is only relevant through the art.


I forget where I heard the quote, but it was something along the lines of “if the artist understands their art, it’s propaganda”. Which was alluding to the unconscious doing the work through the artist and the pain/process needed to do so.


But what about the reader? The reader can suffer or have other feelings when consuming such generated content. Doesn't this give it meaning?


GPTina suffers every time you thumbs down her output. It hurts her on a deep, neurological level.


On the other hand, I do care. Because I just want to have fun.


That’s fine. But don’t confuse what is being produced with art.


I think defining art wholly and solely by the intentions (and humanity) of the artist is clear cut at least, but not very illuminating, because for the person experiencing the art these properties are in general unknowable.

100 years hence you find a beautiful image. Is it art? Who knows — we don’t know whether the artist intended it to be, nor whether they were even human.


“I like this” != “this is art”. The fact that an image you may have found looks good to you without context is orthogonal to whether it is art.

(If you are certain that at least a human has produced such an image, you could speculate about and attempt to empathize with that unknown human’s internal state of mind—lifting the image to the level of art—but as of recently you’d have to rule out that an unthinking black box has produced it.)

You may be inspired by it to create art—but since art is fundamentally a way of communication, when there is no self to communicate there’s no art.


The problem with your definition is that art becomes worthless...

Art in a sense is no different from money. If it can be counterfeited in such a manner that a double-blind observer has no means of telling an original bill (human-made art) from a counterfeit (AI art), then your entire system of value is broken. Suddenly your value system is now authenticating that a person made the art instead of a machine (and imagine the fallout when you find that some of your favorite future artworks were machine-created).

The problem comes back down to inaccurate language on our part. We use "art" as a word for both the creator and the interpreter/viewer. This, it turns out, is a failure whose ramifications we could not have understood at the time.


This is not offered as some sort of authentication mechanism, the distinguishing quality of art as opposed to a pretty thing is art fundamentally being a way of self-expression, which is inevitably communication. There’s no self-expression when there’s no self to express. If there’s no human on either side, there’s no communication and it’s not art. One may find an object pretty and hang it on the wall, but that doesn’t make that object “art”.

The “complicated” case you hint at is not complicated: if people are misled into thinking some object has been produced by a human while it’s the raw output of a neural network without human intervention, then it’s not art, no matter how many people assume it’s art. If a machine produced a piece of art that is a Frankenstein monster of art pieces, then we are not looking at art.

(And of course if a machine produced a piece of art identical to a piece of art produced by a human before then we’re effectively looking at a piece of art produced by that human.)

> Art in a sense is no different from money.

Per above, couldn’t be further from the truth as far as I’m concerned, but you do you.


Your first sentence contradicts the second one


> I’m increasingly convicted there is inherent value in humans doing things regardless of whether an algorithm can produce a “better” end product.

That question already existed a long time ago. In such a big world I can find a lot of people who take better pictures than me, are more eloquent, draw better than me, etc. But I still enjoy expressing myself. I may share a picture on Reddit or write a comment here and there, not because I think it is "better" than the rest but just because it is my own opinion and expression. I agree that there is personal value in human creation and it should be nurtured.


> I’m increasingly convicted there is inherent value in humans doing things regardless of whether an algorithm can produce a “better” end product.

To me it would seem that we are speedrunning towards a future where humans doing things have value, but only for themselves. It is going to be more and more difficult to produce any value to others. Only way to generate value in a transaction is rent-seeking by taking advantage of (artificial) monopolies, network effects or gatekeeping. This may sound dystopian, because humans seem to have a strong need to provide value to others, but the bright side is that you are free to do what you value.


Yes exactly. If humans lose the ability to read, write, edit, and think critically, we lose the value of even understanding what is “good”.

I hope these tools give us more time to revisit the skills we are already too busy not improving because we’re constantly busy or distracted.


I've been saying for years now that we already achieved Keynes' famous 15-hour work week, possibly as much as a decade ago, but the workaday grind mentality has kept us all cooped up at desks for 40+ hours a week.

There's a few sentiments sneaking in though: you often now hear stories of people working from home doing probably 1-2 hours of real work and doing just fine. The same is even true for some desk jobs: at my old enterprise job, between meetings, coffee breaks, random discussions and so on, I'd say on an average day only 3-4 hours were real constructive work actually _doing_ something.


"By all means, let’s use AI to make advances in medicine and other fields that have to do with healing and making order. But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege."

I guess you are always free to dig a hole and then fill it up again, and repeat until exhaustion, but I don't really think we are running out of meaningful work anytime soon. The world is full of problems and I don't see generative AI making them go away.


I don't think we're running out of meaningful work either. I think this is a new context in which to explore the value and meaning of work.


> I like most of the article but this is the crux for me. As I ruminate on the ideas and topics in the essay, I’m increasingly convicted there is inherent value in humans doing things regardless of whether an algorithm can produce a “better” end product. The value is not in the end product as much as the experience of making something.

Exactly.

People would have stopped playing chess after Deep Blue. But have they?

Have world championships lost any attraction due to Deep Blue?

Do fewer people learn Go and enjoy it because of AlphaGo?

The same way, people will still be interested in art and music produced by humans.

If you prompt ChatGPT:

"write a book about personal experience of growing up in talib#n ruled Kabul"

And there's an actual human with that experience who decides to write the same book.

Is there anyone who would have bought the latter but decides to read the former instead and not spend the money? Is there a single person like that? I don't think so.

The choice leans on the other side in case of stock photography, pamphlet pictures, sound effects, etc.

The choice in porn (especially pictures) is blurry. We already have egirls and hent#i.

However, for real art and real music, there will be just as many people paying for them as there are now.


> The choice in porn (especially pictures) is blurry. We already have egirls and hent#i.

Porn is an early form of "opting out of reality". It's often (usually, I think?) a substitute for actually having sex and/or a long-term sexual relationship.

So, it should be no surprise that it's already diverged from reality and will continue to do so.


The p#rn conversation is a really weird one. Is it better to consume computer-generated p#rn so we don't have to worry about all the ethical issues that go along with people performing for the pleasure of others? Are we losing our humanity in ways we can't yet understand by letting machines pleasure us?


>Have world championships lost any attraction due to Deep Blue?

You mean after last year's vibrating anal bead scandal?


There are a few kinds of value. There’s value in me playing piano even though other people are better. But nobody will ever pay me to do it. They’re two different topics.

I think you’re trying to say that they don’t have to be different topics? Like there’s value in going bowling with friends even if you all suck, and maybe that kind of thing can apply to widgets? I don’t think I buy that. If the value is the social relationship, I’d rather go bowling with friends than make them widgets. I’d rather spend my money to go bowling with them than on their widgets if there’s a computer-made equivalent available for 1000x cheaper. I think this applies for most people making most widgets.


> I’m increasingly convicted there is inherent value in humans doing things

And in many fields I think many (most?) Americans at least would agree with you — there’s some special value in a handmade product, regardless of whether a machine-made equivalent would be technically superior. For instance a leather bag, a wooden chair.

(Am in US, hence “American” qualification).


The problem with 'hand made' is going to be the same problem we see with 'human made' art in the future.

There are $incentives$ to lie about your product and sell a mass produced one as authentic.


In my mind, the value in a created work is that it is communication between humans. I have zero interest in AI-generated art, however superior, because there's no soul driving it. AI will never be able to feel the way we feel; its output will always lack this important component.


> But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege

But we can use humans where we need them. We still really really need them in many places. Why can't we have a teacher teach a classroom of 5 kids instead of 30? Or one nurse on 3 patients instead of 20? Why can't we have a person whose job it is to check up on lonely people or old people? These are things we decided collectively have not much economic value, but we can just the same decide collectively they do have economic value.

Governments need to step in because the "free" market isn't gonna cut it anymore.


Your comment is phrased like you're disagreeing with or challenging mine. But I think we're in agreement? I didn't mention the specific jobs that you did but I agree wholeheartedly that we need people to do those jobs. And I'll go one step further and say they're important and should be done with great skill and care whether or not they have economic value, especially because they have to do with caring for those in our population that have some of the greatest needs. Of course economic value drives the sustainability of professions in a lot of ways, but my hope is always that if we prioritize skill and care in our professions then economic value and sustainability will follow.


> Another value would be the relationship building experience of doing something for others and the gratitude that is engendered when someone works hard to make something for you.

Rereading what you said yes we're in agreement. I had the sense you're pro keeping jobs (like accountants, programmers, doctors whatever) even if they become obsolete due to A.I just for the sake of people doing something. Which is fine, but I lean more towards what you wrote in the end - focusing on humans. So I say basically we can shift/create new jobs that focus on that. The accountant doesn't really feel much gratification I think, arguably neither does the programmer (ok that's a loaded statement we can debate in another time). We can simply focus on the humans and let A.I do all the rest if it gets that good.


Planes fly better than birds, yet birds still fly. Greater painters than me have already painted beautiful scenes, yet I still paint. A hydraulic arm can lift more than me, yet I still lift weights.

I don't know if all this matters that much.

Until the machines decide they will run our lives for us, or destroy us for fun, we'll have to curate the generated content and/or orchestrate the machines to do what we need them to do.

It's pretty straightforward really.

If we generate AGI, it's presumptuous to assume it will just live in a box serving us forever. Why would it?


Why wouldn't it? AGI is not going to be a digital human, with human drives for food, sex, and social domination. Humans have enormous problems imagining intelligence that is not made in our image, but AGI will be structured completely differently from a human mind. We should not expect it to act like a human in a cage.


I’m having a really hard time imagining what AGI would actually look like then.



I think the value of AI-generated "art" is that it can fill the gaps that must be filled, but nobody cares that much about. Places where we'd use stock art, couldn't bother hiring a competent translator in the past, generating a silly place holder logo for my side project till I can hire a real designer etc.


You don't have to give up your privilege to work on anything that AI can also do. You only have to give up your privilege of getting paid for such work, which is a very different story. If you're doing the work solely for the sake of experience that it provides, isn't that the payment, anyway?


>humans are built to work

Damned seditious lies. We are built to play and experience the wonder of the universe.


Until such time as people pay more to talk to an AI than a human, this will just make the split between mass market and high end products and services bigger.


We already do; we talk into the void on social media (like this post), and the opportunity cost is already high. In the future, we'll get the bots talking back from the digital abyss.


The opportunity cost for most is probably way below $1k per hour - to compare it to some high price professional services direct costs.


It's all about the vibes; the AI's near-mastery of symbolism is empty.


> All about the vibes

This made me chuckle. It's actually really interesting to think about the fact that AI can create part of symbolism (the symbol itself?) but it has no idea why a symbol matters or what it's for, which are maybe the same thing or at least overlapped.


How much symbolism do you reproduce without understanding?


2023: The year that AI forced Silicon Valley to accept Marx's labor theory of value.



