digitaltrees's comments | Hacker News

Exactly this

But the brick-and-mortar distribution is hard to build and gives a structural advantage. As another poster pointed out, you could sell your items locally and not need to deal with shipping, communication with buyers, and other stuff. It would reduce friction, which might actually expand the marketplace significantly. I personally have multiple family members who throw stuff away because listing on eBay is too hard and pawn shops are too sketchy.

Why does it give a structural advantage to own a bunch of dead-mall and strip-mall brick-and-mortar stores that have been on the verge of bankruptcy for well over a decade?

They haven't been on the verge of bankruptcy since they paid off their own loans. This deal will actually be the closest they've been to bankruptcy since like 2022, because they'll actually have interest payments to make.

That's an accident of becoming a meme stock. That's not a business model.

Revenue continues to decline year over year. Nothing about the business has materially changed the trajectory they have been on throughout all of this. https://www.macrotrends.net/stocks/charts/GME/gamestop/reven...


https://www.macrotrends.net/stocks/charts/GME/gamestop/eps-e...

There's something else going on. The other companies known for being meme stocks are doing substantially worse in terms of share price. AMC is below what it was before early 2021, down 90% from its high. That holds for most of them. Bed Bath & Beyond famously went bankrupt.

Meanwhile, yeah, Gamestop is down about 75% from its high. But it's also 2.5x its post-top low, and... um, about 24x its 2020 low. Go ahead and check. Makes at least some sense, when you understand that they stemmed an EPS bleed and turned it into a profitable company.


A sinking company buying a healthy company several times larger and more profitable doesn't make sense. The eBay board and shareholders would be crazy to participate in this fantasy. GME shareholders are already known to be of questionable judgment, so whatever they do is SNAFU. So it's not surprising GameStop would try something crazy; what's surprising is that anyone is taking it seriously.

I don't know what your issue with Gamestop is, but calling it "a sinking company" is wildly inaccurate, bordering on delusion.

What would you call a company with revenue that declines every year for a decade?

Shrinking. Which is smart, if you see a recession on the horizon or in-progress. If profit is increasing anyway? Well, that's either a miracle or someone who really knows what they're doing. You understand that profit is supposed to go DOWN when revenue declines, right? Which is bad. But Gamestop is doing the opposite. Which is...?

Not smart. If a recession is coming, the last thing you want is to reduce your revenue in advance. Or during! That's like saying, better to take some poison now if there's a flu going around I might catch!

And yes their profitability has gone up. Due to massive cuts. Slashing stores, slashing normal investment in stores, and laying off everyone they can.

This is what you do when your business model is completely failing -- you stop all normal investment that sustains the business for the long term, so you can make more profit in the short term. But it's not sustainable. It's what happens when you realize you're going out of business, and you want to take out as much money as you can, while you still can.

So the increased profit isn't a good sign in this case. It's precisely the opposite -- the end is near for this business model. It is indeed sinking.

And their desperate and nonsensical bid for eBay is another sign of this -- a kind of Hail Mary pass since their original business model is busted.


GameStop in Canada owns stores in active malls and in standalone locations. It's hard to find the store empty, even at 10:30 am on a weekday in November.

Gamestop in Canada isn't owned by Gamestop in the US. They were also historically unprofitable so people might be in the stores but they aren't buying enough.

Now that you mentioned it, I seem to recall that news. The point stands that in some areas, that type of store can still perform.

Perhaps they would come to an agreement.


Reminds me of S.P.A.T. from About a Boy.

Feels like watching an esteemed scientist fall in love with a bot that’s telling him what he wants to hear because the system prompt said “be helpful”

I've begun to wonder if narcissism predisposes one to AI psychosis. It's probably not the only thing that leads there, I've seen normal seeming folks get there, too. But, a lot of the most unhinged takes I've seen thus far have been from people that are publicly very impressed with themselves.

I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coalmine, falling prey to this kind of thing very early. I would have guessed they would have had enough understanding to know that there isn't a real girl in the computer, it's just matrix math and randomness. But, the first couple/few public bouts of AI psychosis were in nerds who work for AI companies.


Evidence for that? I remember there was a guy who worked for google that quit because he thought an LLM was conscious and we needed to talk about its rights, but that's the only example I am aware of.

I came here to say this. But your neurons are faster than mine.

But why? A roomba has senses, and can access them when it has power and respond to stimulation. When it runs out of power it no longer experiences this sensation and no longer responds to stimulus.

How is that different than a cell?


Wrong based on what criteria? Or are we just moving the goal post because we are uncomfortable with the idea that neural networks might be conscious?

If a single-cell organism moves towards light and away from a rock, we say it’s aware. When a roomba vacuum does the same we try to create alternate explanations. Why? Based on the criteria applied to one, it’s aware. If there is some other criterion, say we find out the roomba doesn’t sense the wall but has a map of the room and follows a programmed route via GPS, then a criterion of “no fixed programs that relate to data outside of the system” would justify saying the roomba isn’t “aware”.


I'm mainly saying it's impossible to know, at least without a theory of consciousness that doesn't exist. Do we consider bacteria to be conscious, though? Is there something it is like to be a single cell? I can easily believe there is something it is like to be an insect.

I’d argue it’s a spectrum, with awareness as simple response to stimuli at one end and self-awareness of, and reflection on, a subjective experience across time at the other.

Why would current AI be an argument for panpsychism? I don’t understand the connection.

AI is stochastic, not static and deterministic.

As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimulus, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and others. Rocks, just like most of nature, lack both sensory and language systems.


> AI is stochastic, not static and deterministic.

LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers arbitrarily insert a randomised seed into the inference stack so that the sampled output differs every time, because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but it is not an inherent property of the software.
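
A minimal sketch of that determinism claim, assuming the Hugging Face transformers library and the small gpt2 checkpoint (both illustrative choices, not anything mentioned in the thread): with greedy decoding there is no randomness at all, and even sampling is reproducible once the seed is pinned.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    inputs = tok("The capital of France is", return_tensors="pt")

    # Greedy decoding: no sampling, so the same input on the same hardware
    # yields the same tokens every run.
    a = model.generate(**inputs, do_sample=False, max_new_tokens=5)
    b = model.generate(**inputs, do_sample=False, max_new_tokens=5)
    assert torch.equal(a, b)

    # Sampling is still reproducible once the seed is pinned: the "randomness"
    # is just a pseudo-random number generator we control.
    torch.manual_seed(42)
    c = model.generate(**inputs, do_sample=True, max_new_tokens=5)
    torch.manual_seed(42)
    d = model.generate(**inputs, do_sample=True, max_new_tokens=5)
    assert torch.equal(c, d)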


The same argument is made about the human neural network

1. That is not the claim you originally made.

2. Not provably so.

3. Even if it were so, it is self-evident that the human brain's programming is infinitely more complex than an LLM's. I am not, in principle, in opposition to the idea that a sufficiently advanced computer program would be indistinguishable from human consciousness. But it is evidence of psychosis to suggest that the trivially simple programs we've created today are even remotely close, when this field of software specifically skips anything that programming a real intelligence would look like and instead engages in superficial, statistics-based mimicry of intelligent output.


Trivially simple programs (rule sets) can give rise to wildly complex systems.

Fractals, the Game of Life, the emergent abilities of highly scaled generative pre-trained transformers.

Consciousness appears to be an emergent property of (relatively) simple matter.

70kg of rocks will struggle to do anything that might look like consciousness, but when a handful of minerals and three buckets of water get together they can do the weirdest things, like wondering why there is anything at all rather than nothing.
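
The Game of Life point above is easy to make concrete: the entire rule set fits in a few lines of Python, yet the dynamics it produces are famously rich (gliders, oscillators, even universal computation). A toy sketch, purely for illustration:

    from collections import Counter

    def step(live_cells):
        """One Game of Life generation; live_cells is a set of (x, y) tuples."""
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 live neighbours, survival on 2 or 3.
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

    # A glider: five cells that endlessly copy themselves across the grid.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # same shape, shifted one cell diagonally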


AI models are routinely referred to as stochastic. Their mere existence popularized that term.

The human brain being infinitely more complex is a difference of degree, not of kind. That means that if you scale an AI up to the functional size of the human brain, it would be just as worthy of the label conscious.


I think it's the opposite argument

IF current AI is conscious, so are trees, rocks, turbulent flows, etc.

The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.


But I listed a specific difference: sensation and response. Trees have that. Rocks do not.

I believe you're using the scientific definition of "sentience", while everyone else is using the common understanding of the word (which should really be called "sapience", but thanks to sci-fi's usage of "sentience" largely isn't).

I was a philosophy and cognitive science major in undergrad decades ago, so I am definitely guilty of using more jargon-based concepts than I should.

There is evidence that awareness is an emergent property of sensory experience. And consciousness is an emergent property of language that has grammatical meaning for self and other.

These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.

I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.


How do you know the sensation of a red photon hitting a cone cell, transduced to the optic nerve through ion junctions and processed by pyramidal neurons, is any more or less real than the excitation of electrons in a doped silicon junction activating the latent space of the "red" thought vector? Cause we are made of meat?

You’re arguing against the opposite of my position. I am arguing that LLMs have a reasonable basis to be seen as conscious because there is nothing special about biological neural networks.

Ya, I seem to largely agree with your comments on this article. I was replying to brookst; did you mean to reply on a different thread?

I have a 7 month old so my neural network is running on one gpu in a manner of speaking.

Sensory input is nothing but data.

That's just reductive semantics. Anything can be described as "nothing but data".

Sensory data is a specific data set that corresponds to phenomena in the world. But to say that LLMs don’t have senses merely because they are linguistic or computational doesn’t follow when they can take in data from the world that similarly reflects something about the world.

They don't have senses because they don't have a body. It's just a program. Do weights on a hard drive have consciousness? Does my installation of starcraft have consciousness? It doesn't make any sense.

There are robots with AI controlling them, so it doesn't hold that they don't all have bodies. They can see, they can move.

(I'm still not sure that that makes them conscious, or if we can even determine that at all, but I don't think that's a fair argument.)


Bodies aren’t necessary for senses. I can send a picture to Claude. I can send a series of pictures. That’s usually called a sense of vision. I could connect it to a pressure sensor and that would be touch.

> They don't have senses because they don't have a body

Surely "having senses" is predicated more on "being able to sense the world around you" than "having a body."

> Does my installation of starcraft have consciousness?

Can your installation of StarCraft take in information about the world and then reason about its own place in that world?


The weights on your hard drive might have consciousness if they can respond to stimuli in ways other conscious brains do. That’s the whole point of the Turing test: it’s a criterion for when the threshold of reasonable interpretation is crossed.

How do you measure this consciousness?

How do you imagine a brain can distinguish data from a real sense and data from another source?

Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.

No, it will respond to tokens telling it about a temperature change. It has no sense of warmth. It cannot be burned.

Conflating senses with cognitive awareness of sensory input is a mistake.


I’m not sure I fully understand the distinction you’re making, or if I do, I’m not sure I agree. Concretely, I agree that these are very different mechanisms. Abstractly… I agree that an LLM cannot be burned. I’m not sure I agree, though, that thermoreceptors in the skin causing action potentials to make their way up the spinal cord to the brain is all that different from reading a temperature sensor over I2C and turning it into input tokens.

Edit: what they don’t have, obviously, is a hard-coded twitch response, where the brain itself is largely bypassed and muscles react to massive temperature differentials independently of conscious thought. But I don’t think that defines consciousness either. Ants instinctively run away from flames too.
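
For what it's worth, here is a hedged sketch of the thermometer-to-LLM loop being described. The sensor function is a stand-in (a real deployment would read over I2C or similar), and the model name and prompt are assumptions for illustration; the point is that by the time the model sees anything, the temperature is just a string of tokens.

    import time
    from openai import OpenAI

    client = OpenAI()

    def read_temperature_c() -> float:
        """Stand-in for a real sensor read (e.g. over I2C); returns degrees C."""
        return 21.5  # replace with an actual hardware read

    def react_to_reading(temp_c: float) -> str:
        # The model never feels heat; it only ever receives this string of tokens.
        prompt = f"The thermometer attached to you reads {temp_c:.1f} C. React briefly."
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name, purely illustrative
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    while True:
        print(react_to_reading(read_temperature_c()))
        time.sleep(60)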


The human brain is a neural network. Your sense of “knowing what warmth is” reduces down to the weights of connections between neurons, an analog of an LLM's weights. What is different about the human brain that warrants saying that the same emergent characteristics for one network are inaccessible to another?

You really don't think there's an experiential difference between putting your hand on a hot stove, versus reading the text "the stove is 200c, and will hurt if you touch it"?

That’s not a charitable reading of my argument. All senses are of different “kind”. That doesn’t mean that the lack of a sense of touch means that LLMs can never have senses. That is a very broad claim.

We don't have a way of measuring "cognitive awareness" though. We have a way of measuring electrical impulses, and how they behave in response to various treatments (eg anaesthetics or magnetic fields), but we can't objectively measure whether the system is aware at all.

We can measure electrical spikes, and we can ask the system to reply what it experiences when various spikes occur. Guess what: we can do that with ANNs now too.

It'd be one thing if this were all a philosophical discussion, but in this thread so many folks are making very firm statements about the nature of reality we have no means to back up.


LLMs have no self, sensory experience, or experience of any kind. The idea doesn't even really make sense. Even if it did, the closest analogy to biological "experience" for an LLM would be the training process, since training at least vaguely resembles an environment where the model is receiving stimuli and reacting to it (i.e. human lived experience) - inference is just using the freeze-dried weights as a lookup table for token statistics. It's absurd to think that such a thing is conscious.

What is different about the human neural network? People have given LLMs sensors and they respond to stimuli. The sense of self can be expressed as a linguistic artifact that results in an emergent pattern recognition of distinct entities. For example, merely by saying “I am sitting under the tree with a friend,” I have encountered the self as a pointer to me as the speaker. There is evidence from early childhood development that language acquisition correlates to awareness of the self as distinct from other. And there is evidence from anthropology indicating that language structures shape exactly what the self is perceived to be.

Your best argument is that the weights are set, because that means it’s not a system that can self-reflect and alter the experience. But I don’t see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn’t mean neither was an experience.


What you’re missing is a “self” to have the “experience”.

LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.


The sense of self may be an emergent property of the grammatical structure of language and the operations of memory. If an LLM, by necessity, operates with the linguistics of “you” and “me” and “others”, documents that in a memory system, and can reliably identify itself as an entity discrete from you and others, then on what basis would we say it doesn’t have a sense of self?

> the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

Can such an algorithm reason about itself in relation to others?


> Can such an algorithm reason about itself in relation to others?

No, but an LLM doesn't do that either. An LLM is an algorithm to generate text output which can simulate how humans describe reasoning about themselves in relation to others. Humans do that by using words to describe what they internally experienced. LLMs do it by calculating the statistical weight of linguistic symbols based on a composite of human-generated text samples in its training data.

LLMs never experienced what their textual output is describing. It's more similar to a pocket calculator calculating symbols in relation to other symbols, except scaled up massively.
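
A toy illustration of "calculating the statistical weight of linguistic symbols": a bigram counter over a tiny made-up corpus. Real LLMs learn continuous representations at vastly larger scale, but the output is still a probable next symbol given the preceding symbols.

    import random
    from collections import Counter, defaultdict

    corpus = "i think therefore i am . i think i can ."
    tokens = corpus.split()

    # Count how often each token follows each other token.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

    def next_token(prev):
        choices, weights = zip(*bigrams[prev].items())
        return random.choices(choices, weights=weights)[0]

    word, out = "i", ["i"]
    for _ in range(6):
        word = next_token(word)
        out.append(word)
    print(" ".join(out))  # e.g. "i think i am . i think"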


> LLMs do it by ...

That they do it at all is the point, and is what separates them from MP3 encoding algorithms. The "how" doesn't seem to me to be as important as you're suggesting.

You asked a hypothetical above about a different algorithm and now we've ascertained the reasons why that was reductive.

> LLMs never experienced ...

What is experience beyond taking input from the world around you and holding an understanding of it?


> The "how" doesn't seem to me to be as important as you're suggesting.

When the question is understanding the true nature of what is occurring (eg "is an LLM conscious"), the "how it works" is critical. For example, the 1700s "Mechanical Turk" automaton which appeared to play chess (https://en.wikipedia.org/wiki/Mechanical_Turk). Royal courts and their advisors accepted that it played chess after glancing at the complex gearing inside the cabinet. Had they taken the time to examine how the internal gearing worked in greater detail, they would have arrived at a more accurate understanding of the device's true nature.

> That they do it at all is the point

True in some cases but not others, especially when external appearances can be deceiving. The Mechanical Turk was: 1. Designed to deceive, and 2. Not able to mechanistically play chess. Conversely, LLMs were not intentionally designed to deceive but they can still be misleading because they're a novel system which: 1. Manipulates linguistic symbols in highly complex ways, and 2. Can instantly access vast quantities of detailed information pre-trained into its relational database that's been indexed across thousands of dimensions. And these abilities are not only novel but can be highly useful for some real-world tasks. This makes LLMs uniquely challenging for humans to reason about because LLMs are specifically tuned to generate output which closely mirrors the exact ways humans assess intelligence (and consciousness). We couldn't have designed a system to be more ideal at playing Turing's 'Imitation Game' and convincing humans they are human-like if we'd intentionally tried to.

In fact, I've previously described LLMs as accidentally being "the most perfectly deceptive magic trick ever" (while I'm a technologist professionally, I've spent quite a few years designing actual magic tricks as a hobby). Designers of magical illusions joke that "the perfect floating lady trick" would actually be able to do useful things like replace a forklift, since it could float anything, anywhere instead of just appearing to violate physics. LLMs actually can really do useful things and replace some human labor but that fact doesn't mean they have all the abilities and traits of humans nor that they internally function in similar ways.

> You asked a hypothetical above...

That wasn't me, it was another poster.

> What is experience beyond taking input from the world around you and holding an understanding of it?

In the view of many leading philosophers of mind (Dennett, Chalmers, Nagel, etc.) "Experiencing" is quite a bit more than just sensing, processing and recording. They use the term "Qualia" (https://plato.stanford.edu/entries/qualia/) which is what they're talking about when they ask "what is it like to be a..." (wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat)? While the philosophical debate around why material reductionism can't explain human consciousness is fascinating, we don't need to go there to understand "what it is like to be an LLM" because we already know the answer: it's not like anything - there is no there there.

First, it's obvious we can't trust what the LLM's textual output says when it's asked "what is it like to be you" because it's an 'imitation machine' trained on 100% human sample text. The algorithms were designed, tuned and tested to generate text output which most plausibly simulates how a composite human would respond to the input (including the invisible system prompt instructing: "You are a Large Language Model, not a human"). We even added a tiny degree of random variability to the processing of the statistical weights because we found that makes the simulation seem a bit more plausibly like what a composite human would say. In short, the 'self-reported' textual output of a system purpose-built to generate plausible human-like textual output can't be trusted any more than a study of pathological liars can trust self-reported data from their study population.

Fortunately, with LLMs we can directly look under the hood at how it works and the entire specialty of Mechanistic Interpretability exists to do exactly that (https://towardsdatascience.com/mechanistic-interpretability-...). So we know with certainty that, despite what they may say, LLMs do not experience qualia in the way that humans and even other mammals do (which we have insight on from 'looking under the biological hood' with fMRI, surgical and brain injury studies).

Then the only question left is whether to redefine "consciousness" in some new way very different from "human consciousness" or "consciousness in mammals" (the only examples we've had until very recently). Personally, I think it makes little sense to radically redefine consciousness to include statistical algorithms running billions of matrix multiplications on a massive database of human-generated text. The term "consciousness", as vague and poorly defined as it is, was created to refer to human, or at least biological, consciousness. I'd be fine with creating a new term to refer to whatever novel traits of LLMs someone wants to quantify but they should leave the term "consciousness" out of it because the poor thing's already barely useful and stretching it further will just leave it broken and devoid of any meaning.


Toddlers learn over the course of several years of observing training data and for the first few years misspeak about themselves and others. What’s the difference?

How are you sure it doesn’t reason about itself? The grammar of languages encode the concepts of self and others. LLMs operate with those grammar structures and do so in increasingly accurate ways. Why would we say humans that exhibit the same behavior are inherently more likely to be conscious?

I answered your questions in two other fairly detailed responses, so I'll link to them instead of re-posting much of the same text again: https://news.ycombinator.com/item?id=48000647 https://news.ycombinator.com/item?id=48002211

How do I know you have this "self"?

How do you know other humans do?


By the laws of physics, it's pretty clear we don't. The same chemical and electromagnetic interactions that drive everything around us are active in our brains, causing us to do things and feel things. We feel like we're in control of it, we feel like there's something there riding around inside. We grant that other people have the same magic, because I clearly do. But rocks, trees, LLMs, those are not people and clearly, clearly not conscious because they don't have our magic.

Hard disagree. We reliably operate with the concept of a self that’s distinct from others. The chemical and physical processes change in response to stimulus.

Indeed. We assume a lot, because we don't know. We don't have settled, universal definitions of what consciousness means. But that also means that while we like to rule out consciousness in other things, we don't have a clear basis for doing so.

Based on that reasoning anything could be conscious. If that's a bullet you want to bite, fair enough.

I'll bite that bullet. In fact I contend the idea that "humans and maybe some animals are conscious, but other things are not" is the special-pleading stance. Why are the oscillating fundamental fields over here (brains) special, but the oscillations over there (computers, oceans, rocks) not? If they are, where do you draw the line? It smacks of "babies don't feel pain" (widely believed until the 80s! the 1980s!) sort of reasoning.

https://en.wikipedia.org/wiki/Panpsychism


Actually I don't really have any problems with panpsychism. It's a pretty uncommon perspective, but when discussing conscious machines, it at least presents a consistent criterion for consciousness.

I do not know, because we have no known way of measuring consciousness.

I merely object to the notion that we know how to tell who or what has a consciousness.


[flagged]


Ad hominems are always a nice way of getting out of answering something you have no answer to.

It's not an ad hominem. In fact, it's perhaps the most good faith interpretation of your words possible. Ad hominem would be calling you stupid because you obviously know that you have a self and only your own stupidity could explain your inability to see how your self is generalisable. When you go around pretending you genuinely think maybe humans don't have selves, really the only way to take you seriously is to think that maybe you're a p-zombie.

It was an ad hominem, and so is this.

I do not pretend. I asked honest questions that clearly neither you nor the previous person are able to answer.


In other words, you don't think it's nice at all.

Yes, it is rather overt sarcasm.

Bring on the feature creep and epic downtime
