Hacker News | brookst's comments

Not sure I agree it’s “most likely” when the linked article presents no evidence of LEA awareness or complicity, just one person speculating.

I know firsthand what can be done with a hardhat, clipboard, and high-viz vest. IMO it is far more likely that Banksy is just really good at social engineering in ways that other street artists are not.


The difference is that you'd get a police visit and your artwork torn down if you're not Banksy.

mainly because it's worth a lot of money...

That doesn't mean it was coordinated.

I imagine this just isn't that difficult to get away with. Most areas are basically empty in the early hours of the morning (even in the middle of the city). And people doing some kind of engineering or installation work at that time would also not be that unusual.

Plus this is pretty much the only street artist with worldwide name recognition; of course things are going to be different.

This is the age-old music parochialism thing. "Oh, he's just in a cover band, he doesn't write anything" / "Oh, she's just a composer, she can't even play the stuff she writes" / "Oh, he writes and plays his own stuff but knows fuck all about theory so it's not real music" / etc.

Me, I'm having a blast with claude code, MCP, and Ableton. I'm directing harmony and asking for arrangements and variations in rhythm, mixing, and production. Don't know if that counts as "making it myself", but then I was writing music before I could actually play any instrument at all, so :shrug:


I built https://github.com/brookstalley/cordyceps to do CAD work using claude code.

It's not perfect by any stretch, but it is surprisingly strong. It was able to create and debug some pretty complicated geometry by iterating with screenshots, adjusting view angle and zoom and rendering mode, updating parametric geometry generation, and working to fairly complex goals.
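The iterate-with-screenshots workflow described above can be sketched as a simple feedback loop. None of these function names come from the cordyceps repo itself; they are hypothetical stand-ins for "regenerate parametric geometry, render it, let the model critique the result":

```python
def iterate_cad(generate, render, critique, max_rounds=5):
    """Regenerate parametric geometry until the rendered result passes critique.

    generate(params) -> geometry, render(geometry) -> screenshot,
    critique(screenshot) -> (ok, adjustments) are all supplied by the caller;
    in practice the critique step is where the LLM looks at the screenshot.
    """
    params = {}
    geometry = None
    for _ in range(max_rounds):
        geometry = generate(params)
        screenshot = render(geometry)
        ok, adjustments = critique(screenshot)
        if ok:
            return geometry
        params.update(adjustments)  # fold the model's feedback into the next round
    return geometry
```

The real tool also varies view angle, zoom, and render mode between rounds; this sketch collapses all of that into the `render` callback.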


Isn't that conflating diagnosis and treatment plan?

Sure, but my anecdotal experience is that doctors do this regularly in real life, especially when choosing to diagnose or ignore problems that are unlikely to kill an aging patient before some other larger issue does.

Gotcha, I was thinking more about radiologists than patient-facing doctors.

Radiologists do it too.

They don’t even have the resources to test the most common browsers on every scenario of every page of every application, let alone fix every issue such testing would find.

Big if true

Plenty of people doubt evolution.

Doubt without evidence is just noise.


Think of the poor Xerox machines.

Obligatory Blightsight recommendation for intelligence != consciousness.

That book is badass on so many levels. I'd just started it again yesterday.

that book messes with my head every time I read it, it's like I go through life in a detached way for several weeks. I need to read it again!

I read it once, was immensely impressed, can't bear to read it again. In fact I find most of what I have read from Peter Watts to be brilliant but disconcerting and uncomfortable.

Blindsight

argh, and too late to edit. But thank you for the correction!

These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.

I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.


How do you know the sensation of a red photon hitting a cone cell, transduced to the optic nerve through ion junctions and processed by pyramidal neurons, is any more or less real than the excitation of electrons in a doped silicon junction activating the latent space of the "red" thought vector? Cause we are made of meat?

You’re arguing against the opposite of my position. I am arguing that LLMs have a reasonable basis to be seen as conscious because there is nothing special about biological neural networks.

Ya, I seem to largely agree with your comments on this article. I was replying to brookst, did you mean to reply on a different thread?

Sensory input is nothing but data.

That's just reductive semantics. Anything can be described as "nothing but data".

Sensory data is a specific data set that corresponds to phenomena in the world. But to say that LLMs don’t have senses merely because they are linguistic or computational doesn’t follow when they can take in data from the world that similarly reflects something about the world.

They don't have senses because they don't have a body. It's just a program. Do weights on a hard drive have consciousness? Does my installation of starcraft have consciousness? It doesn't make any sense.

There are robots with AI controlling them, so it doesn't hold that they don't all have bodies. They can see, they can move.

(I'm still not sure that that makes them conscious, or if we can even determine that at all, but I don't think that's a fair argument.)


Bodies aren’t necessary for senses. I can send a picture to Claude. I can send a series of pictures. That’s usually called a sense of vision. I could connect it to a pressure sensor and that would be touch.

> They don't have senses because they don't have a body

Surely "having senses" is predicated more on "being able to sense the world around you" than "having a body."

> Does my installation of starcraft have consciousness?

Can your installation of StarCraft take in information about the world and then reason about its own place in that world?


The weights on your hard drive might have consciousness if they can respond to stimuli in ways other conscious brains do. That’s the whole point of the Turing test, it’s a criterion for when the threshold of reasonable interpretation is crossed.

How do you measure this consciousness?

How do you imagine a brain can distinguish data from a real sense and data from another source?

Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.

No, it will respond to tokens telling it about a temperature change. It has no sense of warmth. It cannot be burned.

Conflating senses with cognitive awareness of sensory input is a mistake.


We don't have a way of measuring "cognitive awareness" though. We have a way of measuring electrical impulses, and how they behave in response to various treatments (eg anaesthetics or magnetic fields), but we can't objectively measure whether the system is aware at all.

We can measure electrical spikes, and we can ask the system to reply what it experiences when various spikes occur. Guess what: we can do that with ANNs now too.

It'd be one thing if this were all a philosophical discussion, but in this thread so many folks are making very firm statements about the nature of reality we have no means to back up.


I’m not sure I fully understand the distinction you’re making, or if I do I’m not sure I agree. Concretely, I agree that these are very different mechanisms. Abstractly… I agree that an LLM cannot be burned. But I’m not sure that thermoreceptors in the skin causing action potentials to travel up the spinal cord to the brain are all that conceptually different from reading a temperature sensor over I2C and turning it into input tokens.

Edit: what they don’t have, obviously, is a hard-coded twitch response, where the brain itself is largely bypassed and muscles react to massive temperature differentials independently of conscious thought. But I don’t think that defines consciousness either. Ants instinctively run away from flames too.
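For concreteness, the "temperature sensor over I2C into input tokens" path is a few lines of plumbing. The register layout below is hypothetical (loosely modeled on an LM75-style sensor: two bytes, 0.5 °C resolution in the top 9 bits), and the prompt format is made up:

```python
def decode_lm75(raw: bytes) -> float:
    """Decode a two-byte LM75-style I2C reading into degrees Celsius."""
    value = int.from_bytes(raw, "big", signed=True) >> 7  # keep the top 9 bits
    return value * 0.5

def to_prompt(celsius: float) -> str:
    """Serialize the reading as text an LLM can tokenize and react to."""
    return f"[sensor] temperature={celsius:.1f} C"

# Example with a simulated raw reading (0x19 0x80 decodes to 25.5 C):
print(to_prompt(decode_lm75(b"\x19\x80")))
```

Everything past `to_prompt` is ordinary token input, which is exactly the point of contention: whether that plumbing difference amounts to a conceptual one.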


The human brain is a neural network. Your sense of “knowing what warmth is” reduces down to the weights of connections between neurons, analogous to an LLM's weights. What is different about the human brain that warrants saying that the same emergent characteristics for one network are inaccessible to another?

You really don't think there's an experiential difference between putting your hand on a hot stove, versus reading the text "the stove is 200c, and will hurt if you touch it"?

Sure, and at the same time we need a more efficient way to ensure big companies can’t just take what they want and bury anyone who complains.

It’s not an easy problem.


Stop big companies from ever forming. They are not a natural force that cannot be reckoned with. We allow them to exist. Revoke the charters of any business over 500 employees.

I can see a number of ways to work around that limitation, without even lobbying and bribing. And I'm not even a lawyer or an accountant.

Eventually all the money and power will converge in a few sub 500, or sub 50, companies and nothing will change.

