Hacker News | drdaeman's comments

It sure does, a little bit, but a) quality varies a lot - some courses can get you from zero to "dos cervezas, por favor," while some are just poorly structured noise that has no chance of sticking; and b) it doesn't explain grammar (when it does, it's the exception), so results vary greatly depending on preconditions like which languages you're already familiar with and can relate to - anything too foreign and you'll have a hard time understanding how the examples generalize.

Duolingo got me just enough Spanish (with zero prior knowledge) to get around, communicate basic needs (like a caveman, sure) and understand simple instructions, all without putting serious effort into learning the language properly - only casually, as a side task.


The missing bit is that compliance is for governments and business partners, not for any end-users. For the purposes of KYC/AML process, end-users are objects, not subjects.

Your coins get frozen with no reason given, even internally, beyond "machine said no" - and no one gets so much as a slap on the wrist unless you sue real hard and happen to win, and even then it'll most likely be a scratch too small to change any attitudes.

The Man seeing that someone they don't like is transferring coins through the fintech company - that's what these companies are really concerned about, because that would be a punch in the gut the company actually feels.

Thus, the incentives. Current social design doesn't punish for false positives (until they hit really high levels), only false negatives.


Maybe I won't have to be concerned about job security some years from now, when everything becomes FUBAR and companies need a legacy systems expert/software necromancer to a) discover, spec, and re-formalize what their machine-generated black boxes are doing; b) build comprehensible and maintainable systems; and c) be responsible for what happens in the process, aka swear by my work. While (a) probably can be done by a machine alone, and (b) can be done by a machine-and-human tandem, (c) absolutely requires a human.

But the few years to come are going to be wild for a lot of folks out there.

I don't expect Coinbase to publish a "we're hiring everyone back" post 5 years from now, but I hope at some point the media will spot those trends as they - I have no doubt - happen, and propagate that tune.


> so confusing why PMs/PMMs

Because their goal metric is the number of tasks closed/features delivered (and this counts as one), not customers satisfied.

Plus, social parroting - the misconception that if it's popular and everyone does it, it "can't be wrong."


The way I see it, it's purely a terminology/nomenclature problem. Consciousness is whatever the speaker decides to call that. When the listener has a different notion, communication hits a barrier. Language works only because most people have roughly similar perceptions of semantics, and that's easy for everyday stuff (and even then people can easily miscommunicate, e.g. about colors). Not many think about the nature of consciousness in any fine detail, so for most folks it's just… a hand-wavy something humans have, related to thinking and awareness.

And overhauling the language to match scientific understanding requires getting everyone onboard with that scientific understanding. Good luck with that, given that we have plenty of people who believe in the weirdest nonsense.

The brain is not the only thing that makes us conscious; the whole human is a super-weird collection of highly intertwined systems that work together and produce whatever we call "human." As I get it, it's huge complexity all the way down to the gut bacteria that somehow affect our thinking too. And I don't think we have a vocabulary for all that - we mostly think of "self" as a single entity.


A way around the nomenclature problem is to look at experiments instead. Turing, for example, seemed frustrated by similar discussions of whether machines will be able to think - discussions that turned on how people defined 'think' - and instead proposed his experiment, where you see whether people can tell a computer and a human apart by asking questions over text.

I'm not quite sure how you'd apply that to consciousness vs. thinking, say.


Thinking about it, consciousness no doubt got there through evolution, as did other aspects of animal life. Functionally, if you think about something like a deer, being aware of what's going on - where it is, whether there's a predator near, whether it should run away - increases its survivability. The inputs are billions of sensory and memory neurons, and that information has to be filtered down to something like a situation summary so other parts of the brain can make a decision, like whether to stay eating or run. I presume consciousness is basically that situation summary.

That's not as clear as it may seem. At least I can trivially form counterpoints to both of those - not necessarily true, but not obviously false either.

LLMs "live" in token "space," and an LLM is "aware" of all its surroundings in the form of input. (Quoted terms for my lack of better words.) It has no other surroundings to be directly (rather than intellectually) aware of, just like we aren't immediately aware of the physics around us.

As for the static nature - LLMs are trained and aren't exactly static; they just get updates at different cadences, and we call those updates different names or, more precisely, versions. Plus LLMs can exist in multiple versions simultaneously - we can't "fork" a human mind, but it's simple with an LLM. Claude Opus (not sure if e.g. Haiku is related or a parallel development with distinct origins) is like the proverbial Ship of Theseus in this sense. Either way, it's undeniable that it learns and evolves, just very differently from biological systems, and it all depends on what we decide to call things. Which isn't exactly surprising, given it's based on different principles and processes.


No, it doesn't learn or evolve on its own, but rather through humans and a huge data-gathering and training loop. It's aware of the input, but anything resulting from it will be discarded by the next query or restart. Even with all the supporting systems, aka external memory, it can only access and retain an infinitesimally small amount of the information to which it was exposed — a shadow of a shadow of the things themselves.

If you argue that the entire system, including the humans, is conscious, then that's a kind of tautology, because the humans are already conscious. If anything, an LLM is only a partial reflection and low-dimensional projection of that consciousness.


All true, but simultaneously, if you look at it not in the "how did we get here" light but in the "what's out there today" light, it's an option that can run on a lot of platforms. Not by some particular merit, but because history happened this way - and that's not a problem with the technology itself.

I concur.

The article defines "success" in the Windows context as being "available everywhere". It does not address how it got to that point.

And sure, you might not like Microsoft, and you may not like how it became successful (using the above definition) but the fact that it is available everywhere is not in dispute.

Of course most successful things have murky pasts. We don't necessarily agree with how it got there, but there it is - and that is, at least in the technical sense, irrelevant. You may prefer LPs or CDs, but streaming is now the successful way to get your music.

That doesn't mean it's the only way though, and of course you are free to not use Windows programs, or play games via Steam etc. That is your choice.


Yep, it pulls stuff from at least npm, it’s not a secret - check the source code.

Actually, it pulls the latest versions (checking the registry, then installing that exact version - not sure why they sidestep the normal resolution algorithms) no matter what .npmrc may say, so min-release-age breaks almost everywhere it integrates with the JS/TS ecosystem (most visibly, Copilot). I probably should've filed an issue.
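To make the breakage concrete, here's a minimal sketch (TypeScript, with purely illustrative names and types - not the tool's actual code) of the difference between blindly pinning the registry's latest dist-tag and honoring a minimum release age:

```typescript
// Illustrative sketch only; names and shapes are assumptions, not the tool's code.

interface RegistryMetadata {
  "dist-tags": { latest: string };
  // version -> ISO 8601 publish timestamp, as in the packument's "time" field
  time: Record<string, string>;
}

// What the tool appears to do: take dist-tags.latest, ignoring any .npmrc policy.
function naiveResolve(meta: RegistryMetadata): string {
  return meta["dist-tags"].latest;
}

// What a min-release-age-aware resolver would do instead: skip versions
// published after the cutoff and fall back to the newest old-enough release.
function ageAwareResolve(
  meta: RegistryMetadata,
  minAgeMs: number,
  now: number
): string | undefined {
  const cutoff = now - minAgeMs;
  const oldEnough = Object.keys(meta.time)
    .filter((v) => v !== "created" && v !== "modified")
    .filter((v) => Date.parse(meta.time[v]) <= cutoff);
  // Lexicographic sort is a stand-in here; real resolution uses semver ordering.
  return oldEnough.sort().pop();
}
```

The point is just that once the tool has pinned dist-tags.latest as an exact version, any age policy in .npmrc downstream has nothing left to filter.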

It also installs Go packages but I haven’t looked into that.


They have built an orchestrator, not Kubernetes. There is one key difference: they know this thing end-to-end, down to every single bolt and piece of duct tape (with a possible exception for Docker internals).

And that's a very important distinction when it comes to maintaining complex systems. This may have changed with LLMs (I'm still adjusting to what the new capabilities mean for various decision-making logic), but before machine intelligence, debugging an issue with Kubernetes could've been a whole world of pain.


And chances are, only they know it. If my role has enough cluster access, I can muddle through pretty much any Helm chart (with lots of cursing, yes), but it might take me days to set up whatever elaborate bespoke environment and script invocations are needed to replicate their current production setup.

> This is a modern “oops, I ran DROP TABLE on the production database” story.

It's not that story, though. It's an "oops, my tool ran DROP TABLE on the production database" story (blaming the tool). At least I haven't heard people blaming their terminals or database clients, as if the tool were somehow responsible.


It's an AI-enhanced "the script had a bug in it".
