Hacker Newsnew | past | comments | ask | show | jobs | submit | oofbey's commentslogin

Fun fact about laser discs. They are analogue not digital. CD’s store digital information with the presence or absence of pits. Fairly ancient but still fundamentally feels like a very old version of a thumb drive.

Laser discs are not digital. They encode the analogue video signal’s value as the length of the pit. It is digitized in the time domain - sampled at some frequency, but the “vertical” signal value is stored entirely analogue. In terms of encoding it’s more similar to a VHS tape than a CD. Kinda crazy.


> Laser discs are not digital... It is digitized in the time domain

Laser disks are 100% digital (as you said, they store digits in the time domain).

They don't encode their data using binary like a CD does.

"Binary" and "digital" are two separate and unrelated concepts.


Um... I think they store "PCM encoded-ish" but the length of the pits are not discrete on / off like on a CD but various arbitrary lengths, so analog.

The sound was also analog to begin with, then the same encoding as CDs, then after that AC-3 and DTS.


yeah i remember learning this as a kid and being surprised. i originally thought laserdiscs were modern high tech, but then they turned out to actually be from the late 70s/early 80s with the primitive analog video encoding where red book audio cds of the mid to late 80s were actually digital.

BUT... Pioneer put AC-3 (Dolby Digital) surround on LaserDiscs before DVDs came out. So LaserDiscs were the first video medium to offer digital sound at home.

And at that point, most players sold were combo players that could also play CDs.

And there was one more disc format: CD Video. It was a CD-sized digital single that also had a LaserDisc section for the (analog) music video. I have a couple; one is Bon Jovi.


Was CD video compressed? I thought it existed at the same time as DVD but cheaper.

That's Video CD. It existed before DVD but survived alongside it (mainly in Asia) as a cheaper alternative.

no, apparently there was both. i was familiar with video cd which was mpeg-1 on a cd-rom (with some weird partitioning scheme). cd video is apparently a very obscure hybrid format with an analog video section and a digital audio section. https://en.wikipedia.org/wiki/CD_Video

I just learned this in my 40s and am surprised. Very cool.

So how does writes work? Does an analog signal translate into pit-lengths with absolute precision?

Everything is analog when it gets to real world

I think yours might be extreme. But I think the anonymity here is widely appreciated. And frankly necessarily relies on easy creation of accounts.

People share things that they often wouldn’t. And somehow the culture remains mostly civil. It’s a pretty fantastic forum IMHO.

Changing the rules would surely change the vibe, so to speak.


I appreciate the anonymity. Posting as throwaway is often useful to distance the poster from $work or $ex or other situations yet contribute to a conversation.

But will it continue under all the login id surveillance laws coming up?


What I find really sad is how slowly this science moves. As I read the article, the “a ha!” breakthrough was in the 1990s. Why did it take them 15-20 years to publish anything??? Coming from the tech industry this is mind boggling.

If you thought there was even a chance you were sitting on a realization that could literally “cure cancer” wouldn’t you want to scream it from the rooftops as loud and fast as possible?


every researcher who believes in their idea shouts (to grant organizations) that their thing is the best. but gathering actual data is difficult.

usually research is super inefficient, academia is full of people who are ... true believers. (sometimes only in their own greatness, and that leads to fraud.) which is amazing, but it's not really conductive to figuring out what's the best way to set up a high-throughput process to generate data.

in many cases it would require them to stop most of what they do and let a specialist team build said pipeline for them. (but that runs into cost problems. and that immediately leads us back to the grant organizations allocating resources.)

for example see how many problems the paper from 2014 had: https://pubpeer.com/publications/A32D7989007655CBF8D9DB2A250...

also see how fiendishly difficult it was initially to create transgenic mice embryos: https://www.astralcodexten.com/i/167092138/prerequisites-dex...

and just an anecdote, I have a friend who worked at a brain research group (they implanted electrodes into rodents, put them into mazes, and then see whether they dream about the maze) and ... it was cool, but IMHO that public money was mostly wasted.


It’s easy to get AI to write bad code. Turns out you still need coding skills to get AI to write good code. But those who have figured it out can crank out working systems at a shocking pace.

Agreed 100%. I'd add that it's the knowledge of architecture and scaling that you got from writing all that good code, shipping it, and then having to scale it. It gives you the vocabulary and broad and deep knowledge base to innovate at lightning speeds and shocking levels of complexity.

I am sorry for asking, but... is there guide even on how to "figure it out"? Otherwise, how are you so sure about it?

Right here: https://codemanship.wordpress.com/2025/10/30/the-ai-ready-so...

This series of articles is gold.

Unsurprisingly, writing good software with AI follows the same principles as writing it without AI. Keep scopes small. Ship, refactor, optimize, and write tests as you go.


When a new technology emerges we typically see some people who embrace it and "figure it out".

Electronic synthesisers went from "it's a piano, but expensive and sounds worse" to every weird preset creating a whole new genre of electronic music.

So it seems plausible, like Claude's code, that our complaints about unmaintainable code are from trying to use it like a piano, and the rave kids will find a better use for it.



That's actually a great question. Truth be told the best way right now is to grab Codex CLI or Claude CLI (I strongly prefer Codex, but Claude has its fans), and just start. Immediately. Then go hard for a few months and you'll develop the skills you need.

A few tips for a quickstart:

Give yourself permission to play.

Understand basic concepts like context window, compaction, tokens, chain of thought and reasoning, and so on. Use AI to teach you this stuff, and read every blog post OpenAI and Anthropic put out and research what you don't understand.

Pick a hard coding problem in Python or Typescript and take a leap of faith and ask the agent to code it for you.

My favorite phrase when planning is: "Don't change anything. Just tell me.". Save this as a tmux shortcut and use it at the end of every prompt when planning something out.

Use markdown .md docs to create a planning doc and keep chatting to the agent about it and have it update the plan until you're super happy, always using the magic phrase "Don't change anything. Just tell me." (I should get myself a patent on that little number. Best trick I know)

Every time you see an anti-AI post, just move on. It's lazy people making lazy assumptions. Approach agentic coding with a sense of love, excitement, optimism, and take massive leaps of faith and you'll be very very surprised at what you find.

Best of luck Serious Angel.


You're not really answering the question are you?

Your answer is to play with it. Cool. But why cant you and others put together a proper guide lol? It cant be that hard.

Go ahead and do it - it'll challenge the Anti-AI posters you are referencing. I and others want to see that debate.


Don't worry we'll all be taking the Claude certification courses soon enough

Here’s some practical tips:

Start small. Figure out what it (whatever tool you’re using) can do reliably at a quality level you’re comfortable with. Try other tools. There are tons. If it doesn’t get it right with the first prompt, iterate. Refine. Keep at it until you get there.

When you have seen some pattern work, do that a bunch. It won’t always work. Write rules / prompts / skills to try to get it to avoid making the mistakes you see. Keep doing this for a while and you’ll get into a groove.

Then try taking on bigger chunks of work at a time. Break apart a problem the same way you’d do it yourself first. Write a framework first. Build hello world. Write tests. Build the happy path. Add features. Don’t forget to make it write lots of tests. And run them. It’ll be lazy if you let it, so don’t let it. Each architectural step is not just a single prompt but a conversation with the output being a commit or a PR.

Also, use specs or plans heavily. Have a conversation with it about what you’re trying to do and different ways to do it. Their bias is to just code first and ask questions later. Fight that. Make it write a spec doc first and read it carefully. Tell it “don’t code anything but first ask me clarifying questions about the problem.” Works wonders.

As for convincing the AI haters they’re wrong? I seriously do. Not. Care. They’ll catch up. Or be out of a job. Not my problem.


I’m not a SWE by trade so I could care less about your last comment.

But again this is all… vague. I’m personally not convinced at all.

I’ll be hiring for a large project soon, so I’ll see for myself what benefits (well I care about net benefits) these tools are providing in the workplace.


If it wasn’t clear, I don’t have any desire to convince anybody of anything. You don’t believe the future is here yet? Good luck holding on to that position. Not my problem. I was taking time to try to help somebody who sounded genuinely curious and seeking help. That I’m happy to do.

You’re writing novels when if you had something compelling to show it’d be simple and easy.

If you can’t make it simple and easy… then you haven’t understood it at all. All geniuses refer to this as the standard by which one understands something. Whether it’s Steve Jobs or Einstein. So don’t get mad. Show us all how simple and easy it is. If you can’t.. then accept you’re full of it and don’t quite get it as well as you claim. Not rocket science is it?

But here we are. And actually my project is going to create the future. You’re a bozo programmer who creates the future that others already see. Know your role and don’t speak for others like me who are in the position of choosing who gets hired.


Ah - I know! Seriously I know. There's such a bad need for this right now. The problem is that the folks who are great at agentic coding are coding their asses off 16 to 20 hours a day and don't have a minute they want to spend on writing guides because of the opportunity cost.

One of the rare resources I found recently was the OpenClaw guys interview on Lex. He drops a few bangers that are really valuable and will save you having to spend a long time figuring it out.

Also there's a very strong disincentive for anyone to write right now because we're competing against the noise and the slop in the space. So best to just shut the fuck up and create as fast as we can, and let the outcome speak for itself. You're going to see a lot more products like OpenClaw where the pace of innovation is rapid, and the author freely admits that they're coding agentically and not writing a single line.

I think the advantage that Peter has (openclaw author) is that he has enough money and success to not give a fuck about what people say re him writing purely agentically, so he's been very open about it which has been great for others who are considering doing the same.

But if you have a software engineering career or are a public figure with something to lose, you tend to STFU if you're doing pure agentic coding on a project.

But that'll change. Probably over the next few months. OpenClaw broke the ice.


How do you figure anything out? You go use it, a lot.

Actually “last year: this”. It was published in early 2025.

First mover disadvantage: you need to invent a way of doing things (ANN) before any decent tools have been built. 10 years later the world is laughing at you for using Java to do vector math.

Not true. The system prompt is clearly different and special. They are definitely trained to differentiate it.

Trained != Guaranteed. It's best effort

....and there are plenty of attacks to circumvent it

What was the injected title? Why was Claude acting on these messages anyway? This seems to be the key part of the attack and isn’t discussed in the first article.

> Why was Claude acting on these messages anyway?

Because that's how LLMs work. The prompt template for the triage bot contained the issue title. If your issue title looks like an instruction for the bot, it cheerfully obeys that instruction because it's not possible to sanitize LLM input.


No doubt! I’ve measured literal 5 minute ping times on airplanes. 300,000ms. Where are the buffering the packets!?

My guess is that you're getting retransmissions because of dropped frames, not because there's some huge buffer in the sky.

Indicated airspeed 280kts, ground speed 470kts, FL410, the packets are trying to catch up…

I like "huge buffer in the sky".

That's where I imagine all my deleted data goes.


we're all just riding the ring buffer of samsara, maaan

There’s one huge buffer in the sky!

The huge buffers are at the two endpoints (:->


Yet another way Google might decide you’ve violated their ToS and cut you off from your entire digital life without warning or recourse.

Sounds handy. But use at your own risk.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: