These sorts of articles have no value. The author is a "media entrepreneur". Being forced to read his opinion intermixed with out-of-context pull-quotes is not a good use of anyone's time. If Dawkins gave his opinion at length it might be worth reading, but only if your goal is to understand something about Dawkins, not about AI.
I would be interested to see a scientific discussion on what consciousness is biologically and if AI can fit that definition. But it would require someone with more credentials than a _media entrepreneur_ to pull off.
There is no true scientific discussion possible about the nature of consciousness. This is squarely in the realm of philosophy.
I personally think it's moot to discuss whether LLMs are conscious. If they are, then we have diluted the definition to something that has no relevance to morality or to concepts like life and death. Let's just take them for what they are: if we feel they deserve to be treated with respect, then we should (I don't think anyone does yet).
I'm not sure it's that far beyond science. Some aspects can be a bit woo, but there are practical questions, like which anesthetics render you unconscious for surgery and how they interact with the brain regions involved.
The way I see it, it's purely a terminology/nomenclature problem. Consciousness is whatever the speaker decides to call consciousness. When the listener has a different notion, communication hits a barrier. Language works only because most people have broadly similar perceptions of semantics, and even that holds only for everyday stuff (and even then people can easily miscommunicate, e.g. about colors). Not many people think about the nature of consciousness in any fine detail, so for most folks it's just… a hand-wavy something humans have, related to thinking and awareness.
And overhauling the language to match scientific understanding requires getting everyone on board with that scientific understanding. Good luck with that, given that we have plenty of people who believe in the weirdest nonsense.
The brain is not the only thing that makes us conscious; the whole human is a super-weird collection of highly intertwined systems that together produce whatever we call "human." As I understand it, it's huge complexity all the way down to the gut bacteria that somehow affect our thinking too. And I don't think we have a vocabulary for all that: we mostly think of "self" as a single entity.
A way around the nomenclature problem is to look at experiments instead. Turing, for example, seemed frustrated by similar discussions of whether machines will be able to think, which turned on how people defined 'think'; instead he proposed his experiment, where you see whether people can tell a computer and a human apart by asking questions over text.
I'm not quite sure how you'd apply that to consciousness, as opposed to thinking, say.
Thinking about it, consciousness no doubt got there through evolution, as did other aspects of animal life. Functionally, if you consider something like a deer, awareness increases its survivability: knowing what's going on, where it is, whether there's a predator near, whether it should run away, and so on. The inputs are billions of sensory and memory neurons, and that information has to be filtered down to something like a situation summary so other parts of the brain can make a decision, like whether to keep eating or run. I presume consciousness is basically that situation summary.
That is an unfair characterization of Zig. The OP correctly points out:
> Function signatures don’t change based on how they’re scheduled, and async/await become library functions rather than language keywords.
The functions have the same calling conventions regardless of IO implementation. Functions return data, not promises, callbacks, or futures. Dependency injection is not function coloring.
These things _are_ function colouring, but they show function colouring isn't scary or hard.
The original function colouring essay was much more about JavaScript's implementation than a general statement.
If JavaScript had exposed a way for a synchronous function to call back into the runtime to wait for an async function to complete, it would still be just as coloured, but no one would be complaining about colour (deadlocks yes, but that's another kettle of fish).
I think this is right. More specifically, the problem is JavaScript function colors mean Sync/Async, whereas Zig's mean Non-IO/IO. Using function colors for async is fundamentally unnecessary, whereas for I/O it is fundamentally necessary. You should be able to define a synchronous function that calls an asynchronous function. But it makes no sense to define a non-IO function that calls a function that does IO.
EDIT: with the exception of doing IO on a freshly allocated, in-memory buffer that doesn't escape the function call.
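As a loose Rust analogue of that exception (the function name and the `Write` bound are my illustration, not Zig's actual `Io` interface): a function whose signature advertises IO can still be handed a freshly allocated in-memory buffer, so the call never touches the outside world.

```rust
use std::io::Write;

// An "IO function": its signature advertises a writer, but says
// nothing about how or where the bytes ultimately land.
fn greet(out: &mut impl Write) -> std::io::Result<()> {
    writeln!(out, "hello")
}

fn main() -> std::io::Result<()> {
    // The in-memory exception: a Vec<u8> satisfies the IO interface,
    // and the buffer never escapes this function.
    let mut buf: Vec<u8> = Vec::new();
    greet(&mut buf)?;
    assert_eq!(&buf[..], &b"hello\n"[..]);
    Ok(())
}
```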
> The functions have the same calling conventions regardless of IO implementation.
Okay, then by that definition, Rust doesn't have colored functions either, because `async fn foo() -> Bar {` in Rust is just syntax sugar for `fn foo() -> impl Future<Output=Bar> {` (that's a normal function that returns a future), and `Future` is just a normal trait that provides a `poll` method; there's no different calling convention anywhere. So which is it? Either Rust and Zig both have colored functions, or neither does.
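For what it's worth, that desugaring can be checked directly; here is a minimal sketch (the hand-rolled no-op waker is just the smallest way to poll a future without pulling in an executor):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The sugared form: `async fn` returning a value.
async fn foo_sugar() -> u32 {
    42
}

// The desugared form: an ordinary function returning something that
// implements the ordinary `Future` trait. Same calling convention.
fn foo_desugared() -> impl Future<Output = u32> {
    std::future::ready(42)
}

// Minimal no-op waker so we can poll without an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Both futures complete on the first poll; neither call site differs.
    let mut f1 = pin!(foo_sugar());
    assert_eq!(f1.as_mut().poll(&mut cx), Poll::Ready(42));
    let mut f2 = pin!(foo_desugared());
    assert_eq!(f2.as_mut().poll(&mut cx), Poll::Ready(42));
}
```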
I really like these projects, but missing from them is genericity. If you're taking the time to build a WASM app in Rust, it would be nice if that app could compile to something other than WASM. For example, looking at the sycamore website's source I see p, h1, div, etc. What I'd rather see is "row", "column", "text". In their source I see Tailwind; what I'd rather see is "center", "align right", etc.
In other words, elm-ui but for these WASM Rust apps. Building a mobile app, a desktop app, and a web app should, in my mind, be accomplishable given the right primitives (without requiring a JavaScript runtime to be bundled). Rust's multi-crate workspaces make it a really great candidate for solving these cross-platform problems. IMO of course.
Are there native frameworks which use XHTML? Regardless, a document language being used to construct complex, interactive GUIs is incidental complexity. XHTML can be a compilation target but it does not need to be a development target.
If your only target is the web, then there is no benefit other than a reduction in complexity.
For example, a "row" is not just a "<div>" tag. It's a div which horizontally fills its container. Centering contents with a "center" style attribute abstracts flexbox, browser compatibility, version compatibility, and the cascading behavior of CSS.
You move the incidental complexity of the web platform into the compiler which will always do the right thing. And in exchange you get the option to compile to a native or mobile app for "free".
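As a toy sketch of that idea (the primitive names and the emitted styles are mine, loosely in the spirit of elm-ui, not any real framework's API), the "compiler" can be as small as a function from layout primitives to HTML:

```rust
// Hypothetical layout primitives: the app author writes rows, columns,
// and text, never HTML tags or CSS.
enum El {
    Text(String),
    Row(Vec<El>),
    Column(Vec<El>),
}

// The compiler owns the flexbox details; swap this function out and the
// same `El` tree could target a native toolkit instead.
fn to_html(el: &El) -> String {
    match el {
        El::Text(s) => s.clone(),
        El::Row(children) => wrap("display:flex;flex-direction:row", children),
        El::Column(children) => wrap("display:flex;flex-direction:column", children),
    }
}

fn wrap(style: &str, children: &[El]) -> String {
    let inner: String = children.iter().map(|c| to_html(c)).collect();
    format!("<div style=\"{style}\">{inner}</div>")
}

fn main() {
    let ui = El::Row(vec![El::Text("a".into()), El::Text("b".into())]);
    assert_eq!(
        to_html(&ui),
        "<div style=\"display:flex;flex-direction:row\">ab</div>"
    );
}
```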
I think I much prefer semantic elements like <section> over something like <row>. Calling something a row bakes in presentational information. Something that's a row on one screen size might be a column on another.
I agree. For my blog I don't apply CSS and prefer to let the browser's reader mode perform the styling for me.
But there are categories of application where that is not acceptable. The presentation is a tightly controlled aspect of the application's functionality. If you're designing an application with leptos or sycamore my suspicion is you would fall into the latter category rather than the former.
No browsers support streaming parsing of XHTML, so you are slowing down page loads, especially for longer pages.
Also the ecosystem is really not there for XHTML, it never really took off. In practice it is close enough to HTML that it probably mostly works, but you are going to have problems.
The advantage is also very small, your emitter is simpler (you don't have to special case void elements and whatnot) and if you need to consume your own pages the parser is simpler. But that isn't worth much for most people.
It does make me sad, because parsing and even emitting HTML is a nightmare. But it won, so at the end of the day I find it easier to just accept that.
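To make the emitter point concrete, a minimal sketch (my own example, not any real serializer): the HTML emitter needs a table of void elements to special-case, while the XHTML emitter applies one rule everywhere.

```rust
// HTML5 void elements (a subset); the HTML emitter must special-case
// these, since <br></br> is invalid and <br/> is mere sugar in HTML.
const VOID: &[&str] = &["br", "img", "hr", "input", "meta", "link"];

fn emit_html(tag: &str, inner: &str) -> String {
    if VOID.contains(&tag) {
        format!("<{tag}>")
    } else {
        format!("<{tag}>{inner}</{tag}>")
    }
}

// XHTML: one uniform rule for every element, no table of exceptions.
fn emit_xhtml(tag: &str, inner: &str) -> String {
    if inner.is_empty() {
        format!("<{tag}/>")
    } else {
        format!("<{tag}>{inner}</{tag}>")
    }
}

fn main() {
    assert_eq!(emit_html("br", ""), "<br>");
    assert_eq!(emit_html("p", "hi"), "<p>hi</p>");
    assert_eq!(emit_xhtml("br", ""), "<br/>");
    assert_eq!(emit_xhtml("p", "hi"), "<p>hi</p>");
}
```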
Both California and New York have shrinking populations according to Wikipedia. And even if the estimates are wrong and they do have growing populations you still need to consider the fact that the Southern states are growing much faster.
Federal power is shifting south. If you like the politics of New York and California that's a long-term problem that needs to be resolved.
California has a "Neutral Decrease" in population and New York has a "Neutral Increase" according to Wikipedia. I'm gonna go with the 5 year trendline, which is up.
> And even if the estimates are wrong
Nice, covering all your bases.
> Southern states are growing much faster
Yeah, smaller states grow faster than larger ones.
I'm talking specifically about this phenomenon[1]. I wasn't agreeing with the GP that these policies are "oppressive". I'm only informing you that these states are not experiencing "explosive growth", and of the downstream effects of that fact.
CA and NY are both expected to lose 4 seats next census, based on current trends. I saw this in an HN submission yesterday about gerrymandering and how it might be a losing strategy for Dems looking out into the 2030s.
> There may be an argument for leaning less on code review. When code is expensive to produce and is likely to stay in production for many years it's obviously important to review it very carefully. If code is cheap and can be inexpensively replaced maybe we can lower our review standards?
Agree with everything else you said except this. In my opinion, this assumes code becomes more like a consumable as code-production costs reduce. But I don't think that's the case. Incorrect, but not visibly incorrect, code will sit in place for years.
> Agree with everything else you said except this.
Yeah, I'm not sure I agree with what I said there myself!
> Incorrect, but not visibly incorrect, code will sit in place for years.
If you let incorrect code sit in place for years I think that suggests a gap in your wider process somewhere.
I'm still trying to figure out what closing those gaps looks like.
The StrongDM pattern is interesting - having an ongoing swarm of testing agents which hammer away at a staging cluster trying different things and noting stuff that breaks. Effectively an agent-driven QA team.
I'm not going to add that to the guide until I've heard it working for other teams and experienced it myself though!
This kinda gets into the idea of AIs as droids right?
So, you have a code writing droid that is aligned towards writing good clean code that humans can read. Then you have an implementation droid that goes into actually launching and running the code and is aligned with business needs and expenses. And you have a QA droid that stress tests the code and is aligned with the hacker mindset and is just slightly evil, so to speak.
Each droid works toward good code, but they are also independent and adversarial in the day-to-day.
Theoretically I'd want a totally different model cross checking the work at some point, since much like an individual may have blind spots, so will a model.
> The StrongDM pattern is interesting - having an ongoing swarm of testing agents which hammer away at a staging cluster trying different things and noting stuff that breaks. Effectively an agent-driven QA team.
It assumes that bugs are rare and easy to fix. A look at Claude Code's issue tracker (https://github.com/anthropics/claude-code/issues) tells you that this is not so. Your product could be perpetually broken, lurching from one vibe coded bug to another.
I can tell you precisely why foreign Toyotas (especially certain models) are more reliable than what's typically sold in the US: no electronics, and parts that operate based on physics (pressure, gravity, etc.). Both of these decisions lend themselves to a simple engine compartment and repairability.
In the US, you can buy a five-speed 4Runner, which has about the simplest engine available on the market. It has all the benefits enumerated above and is trivially repairable by DIYers. However, even the 4Runner has annoying garbage that can fail.
Compare the newest 70 Series Land Cruiser in Japan to the US Land Cruiser (Prado). The difference: a V8 with no electronics versus a 4-cylinder hybrid filled with electronics and a rat's nest of tubes running across the top of the engine. Try working on that... Of course, it gets 20+ mpg more than the Japanese version. I'm pretty sure the 70 Series is always four-wheel drive, whereas the Prado runs in two-wheel drive but has a four-wheel switch (more complexity, better gas mileage).
Anyway, intangibles such as parts availability and lower pricing make scavenging more economical and increase lifespan.
Also, the stability of the platform means a lot of expertise has developed over the past 30+ years. Same design, same repairs, same parts. Makes things easy.
The V8 in the 70 Series Land Cruiser uses computer-controlled electronic injection. It also has other electronic/electromechanical systems like ABS and airbags.
He doesn't have to, he _gets_ to! It's knowledge exchange. Take it as a gift, not self-promotion. There's no money in this game, so don't treat it like guerrilla marketing. Treat it like excited people pushing the limits of technology.
If citizenship is required to vote then how would accessing voter rolls suppress liberal voters? Honest question; I'm not concern trolling. I had to Google who's allowed to vote.
I found this article[1] by the Brennan Center. It alleges this is an attempted federal takeover of elections but it doesn't suggest or allude to voter suppression. I'm not convinced by the article that having access to voter rolls can be considered a federal takeover of election administration (but I'm not in the know and would need things explained more verbosely).
If you have more information about the attempted centralization of election administration and its impacts on voter suppression I would be interested to know more.
Honestly my real fear is ICE agents at polling places on Election Day harassing would-be voters with citizenship checks and aggressive behavior, slowing things down and maybe causing some people to leave.
Regarding voter data, though: if it becomes known that registering to vote as a minority will get you extra scrutiny from ICE, and perhaps a visit to your home, that would probably cause some citizens to avoid voting altogether, especially if they are associated with people who are not here legally.
Either way, the federal government really has no right to that data or legitimate use for it, so hopefully they don't manage to get their hands on it.
There are no non-citizens on voter rolls. They want the rolls to get data on voters.
When you ask yourself why the ultra-politicized DOJ (which isn't even the DHS...) from an administration that has explicitly called liberals the enemy is asking for voter rolls, it becomes pretty understandable why people might come to the conclusion that it is to suppress the people that have already explicitly been identified as targets.
As of yesterday I decided I want to try adding a GTK runtime for Elm. Write Elm, compile to JS, execute with QuickJS, bridge to GTK, render a native app using the JS state machine.
The goal is basically what's listed on the Elm website. No runtime errors, fearless refactoring, etc. But also improved accessibility for developers who want to create a native "Linux" app. IMO Linux should be so accessible and so amenable to rapid prototyping that it is the default choice when building a new GUI app.
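The state-machine half of that pipeline is just the Elm architecture, which is easy to sketch in any language. Here is the generic Model/Msg/update shape in Rust (this is not the actual QuickJS/GTK bridge, just the pattern it would drive):

```rust
// The Elm architecture in miniature: state changes only by feeding
// messages through a pure update function; the renderer (GTK, the DOM,
// whatever) just draws the resulting model.
#[derive(Debug, PartialEq)]
struct Model {
    count: i32,
}

enum Msg {
    Increment,
    Decrement,
}

fn update(model: Model, msg: Msg) -> Model {
    match msg {
        Msg::Increment => Model { count: model.count + 1 },
        Msg::Decrement => Model { count: model.count - 1 },
    }
}

fn main() {
    let model = Model { count: 0 };
    let model = update(model, Msg::Increment);
    let model = update(model, Msg::Increment);
    let model = update(model, Msg::Decrement);
    assert_eq!(model, Model { count: 1 });
}
```

Because `update` is pure, the renderer on the other side of the bridge never mutates state itself, which is what makes the "no runtime errors, fearless refactoring" pitch plausible.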