"If you believe the AI researchers–who have been spot-on accurate for literally"
I do not understand what has happened to him here... there was an entire "AI winter" from the '90s into the 2000s because of how wrong researchers were. Has he gone completely delusional? My PhD supervisor has been in AI for 30 years and talks about how impossible it was to get any grant money then because of how catastrophically wrong the predictions had been.
Like, honest question. I know he's super smart, but this reads like the kind of ramblings you get from religious zealots or scientologists, just complete revisions of known, documented history, and bizarre beliefs in the inevitability of their vision.
It really makes me wonder what such heavy LLM coding use does to one's brain. Is this going to be like the 90's crack wave?
Yeah, I full-stopped on that sentence because it was just so bizarre. I can understand making a counter-to-reality claim and then supporting the claim with context and interpretation to build toward a deeper point. But he just asserts something obviously false and moves on with no comment.
Even if he believes that statement is true, it still means he has no ability to model where his reader is coming from (or simply doesn't care).
I wound up going back to C in a big way about five years ago when I embarked on Scheme for Max, an extension to Max/MSP that lets you use s7 Scheme in the Max environment. Max has a C SDK, and s7 is written in 100% ANSI C. (Max also has a C++ SDK, but it is far less comprehensive, with far fewer examples and docs.)
Coming from being mostly a high-level language coder, I was surprised at how much I like working in this combo.
Low-level stuff -> raw C. High-level stuff -> Scheme, but written such that I can drop into C or move functions into C very easily. (The s7 FFI is dead simple.)
It's just really nice in ways that are hard to articulate. They are both so minimal that I know what's going on all the time. I now use the combo in other places too (ie WASM). It really forces one to think about architecture in what I think is a good way. YMMV of course!
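To give a sense of how minimal the s7 FFI is, here is a rough sketch of embedding s7 and exposing a C function to Scheme. The function names are from the s7 C API as documented in s7.h; the exact signatures can vary between s7 versions, and `add1` is just a made-up example function. It assumes s7.h and s7.c from the s7 distribution are available (compile with something like `cc main.c s7.c -lm -o main`).

```c
#include <stdio.h>
#include "s7.h"  /* from the s7 distribution */

/* A C function callable from Scheme as (add1 n). s7 hands us the
   argument list as a Scheme list; we unpack, compute, and re-wrap. */
static s7_pointer add1(s7_scheme *sc, s7_pointer args)
{
    s7_int n = s7_integer(s7_car(args));
    return s7_make_integer(sc, n + 1);
}

int main(void)
{
    s7_scheme *sc = s7_init();

    /* name, C function, required args, optional args, rest arg?, doc string */
    s7_define_function(sc, "add1", add1, 1, 0, false,
                       "(add1 n) adds 1 to n");

    /* Call back into Scheme from C */
    s7_pointer result = s7_eval_c_string(sc, "(add1 41)");
    printf("%lld\n", (long long)s7_integer(result));
    return 0;
}
```

That round trip (define in C, call from Scheme, evaluate from C) is essentially the whole extension story, which is what makes moving hot functions from Scheme into C so painless.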
One interesting reason this happens, at least in the music field, is the adult disadvantages that often go along with various forms of savantism. I have spoken with a number of fellow music academics about this, and it's not uncommon that the things that make one a young prodigy are the same things that give one real obstacles to making it in the regular world, and this can impose a ceiling on where they get to. For example, many music prodigies have never "really had to work," and once they have to shoulder the boring responsibilities that go with building a career, they instead alienate people, or just can't do things that are hard for them because it's always been easy. Maybe they can't play off a chart unless they've heard it, or aren't used to following instructions/cues/being the lead, etc. And unless they are truly, truly rare air, real career gigs have boring work elements too.
Savantism can be pretty damned weird. I've known a few, including a couple who will never have an adult career beyond local gigs because of their mental disabilities in other, non-music areas. The Oliver Sacks book "Musicophilia" has fascinating case stories about it.
People into this sort of stuff might be interested to know that s7 Scheme also runs really well in WASM. It's 100% ANSI C and uses its own GC, so getting it compiled and running in WASM is very simple. I use it in both my audio project (Scheme for Max, an s7 interpreter in Max/MSP) and in a browser-based set of music practice tools I am working on as a solopreneur. It's fantastic to be able to write the music theory engine in Scheme instead of JS.
I really hope the Racket effort gains traction too! Excited to see this. In comparison, s7 is much more minimal. Though this also means the FFI is dead simple too, so extending it and bridging to JS functions is much easier, and everything in s7 is available now in WASM: CL-style macros, keywords, hash tables, first-class environments, continuations, etc.
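For anyone curious what the WASM build looks like, a plausible sketch with Emscripten follows. The `emcc` flags shown (`-sEXPORTED_FUNCTIONS`, `-sEXPORTED_RUNTIME_METHODS`) are real Emscripten options, but the particular export list and the wrapper file name `main.c` are assumptions for illustration, not taken from Scheme for Max.

```shell
# Hypothetical build: compile s7 plus your own C wrapper to WASM,
# exporting the handful of s7 entry points JS needs to call.
emcc s7.c main.c -O2 \
  -sEXPORTED_FUNCTIONS=_s7_init,_s7_eval_c_string,_s7_object_to_c_string \
  -sEXPORTED_RUNTIME_METHODS=ccall,cwrap \
  -o s7.js

# From JS you can then bridge with cwrap, e.g.:
#   const s7_init = Module.cwrap('s7_init', 'number', []);
#   const s7_eval = Module.cwrap('s7_eval_c_string', 'number', ['number', 'string']);
```

Because s7 has no OS dependencies beyond libc and manages its own GC, there is no special porting work; the FFI bridge to JS is just a few `cwrap` calls.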
From the article: "And yet these tools have opened a world of creative potential in software that was previously closed to me, and they feel personally empowering."
I keep seeing things like this, about "democratization" etc. But coding has been essentially free to learn for about 25 years now, with all the online resources and open source tools anyone can use.
Is this just a huge boon to those too lazy to learn? And what does that mean for later security and tech debt issues?
You, like most hacker news folks, likely are too rich to imagine the type of person who can’t spend hours every day learning a billion different difficult things. Some folks have to work menial jobs, take care of their kids, and fix their leaky roof themselves.
as a software developer with a life after work, I don't have time to learn to be a mechanic either. So I don't claim to be one without having done the work.
Bad assumption. I taught myself to code in the early 2000s while scraping by, earning just above minimum wage in Vancouver, one of the most expensive cities in the world.
So uh, you would be hard pressed to be more wrong. And I used free resources.
Think about having to assemble a car (you can find specs and tutorials online, say) and then drive it, or asking engineers and mechanics to assemble it, and then using the car assembled by others to go for a drive.
Those are all false equivalences. The GP speaks of "democratization of learning," which had already happened. It's more akin to saying "now people can finally vote" when remote voting expanded to civilians. It's not like people couldn't have voted before, and in fact it had only a modest impact on turnout.
Then people would ask "is this just a huge boon to those too lazy to vote?", and the answer would be "no actually, voting is still a thing where one must do their own thinking."
If anything, it's a boon to people too lazy to drive, similar to LLMs being a boon for those too lazy to type.
Too lazy to learn is a bit harsh and your statement lacks empathy.
Coding has been free to learn for a long time, and the quality of education resources has only improved over time. But that does not mean it's easy, and it doesn't decrease the time it takes to learn.
I’ll use myself as an example. I’m pretty creative, I have a lot of ideas and interests, but I struggle a lot with the logic and syntax of coding. I find it interesting at the surface level but every time I’ve tried to learn I just can’t get it to click. And, to be frank, I don’t find it very enjoyable.
But at the same time I have random website and app ideas quite frequently. I’ll use apps that have terrible UI/UX and imagine ways it could be better or maybe even design something in Figma if I’m feeling frisky. But actually making an app? Always just way too out of my wheelhouse. Plus I work 40-50 hours a week and prioritize socializing on the weekends, a lot of those ideas have to be relegated to just ideas on a list in Obsidian. Does that make me a lazy person? Maybe to you but I don’t think of myself that way.
The tools available now have unlocked something new for me. My ideas can start to come to life because the coding part doesn’t hold me back anymore. I’ve made silly websites with domains I’ve owned for years. I’ve made apps that solve an annoying issue I’ve had forever like a file media viewer app for my iPhone since file viewing sucks with Files/Preview and every app on the AppStore is infested with ads and didn’t fit my use case. I just for fun made an app that can play against me in MTG by using the continuity camera from my iPhone to my Mac to read the playing field.
I get where you’re coming from but you probably think every vibe coder is lazy because you’re good at coding. Not everybody has the talent/time/desire to learn how to code. Does that mean we can’t let our ideas come to life?
Honestly, stories like yours are the best part of this whole AI revolution. It's genuinely cool that the technical barrier is no longer killing creative ideas. You've essentially skipped the "coder" stage and jumped straight to orchestrator (Product Owner + QA rolled into one), with AI acting as the diligent junior dev. That is a totally valid model.
The only downside is that sooner or later, you hit the scaling trap. When a project grows from a silly website into a real product with users, not understanding how exactly the AI stitched those code blocks together becomes a critical risk. AI is great at putting up walls and painting facades, but the foundation (architecture and security) still needs human verification, especially once other people's data gets involved.
Trying to fix broken architecture with the same AI that wrote it leads to recursive technical debt. AI can rewrite code, but it cannot make strategic decisions if the operator doesn't understand the problem. In the end, instead of a fix, you get a Big Ball of Mud, just built 10x faster.
Anyone have experience with the author's book? I am just getting into this world right now, as it happens, and am working on Art of Prolog, Simply Logical, and the Reasoned Schemer, but other suggestions for resources that are particularly good would be welcome!
Best HN site today, IMHO.