I switched to Deno because it is the only one of the three that allows a monorepo workflow without building `.d.ts` files. Bun and Node both do type stripping or compilation of TS, but only for the entry package of the running script, not for any linked dependencies from the same repo.
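For context, this is what the no-build setup looks like: a root `deno.json` lists the workspace members, and each member declares its own name and exports, so linked packages resolve straight from their TS sources. Paths and names here are illustrative.

```jsonc
// deno.json at the repo root (member paths are illustrative)
{
  "workspace": ["./packages/core", "./packages/cli"]
}

// packages/core/deno.json -- importable elsewhere in the repo as @example/core
{
  "name": "@example/core",
  "version": "0.1.0",
  "exports": "./mod.ts"
}
```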
There are still things I dislike about Deno, but it really does make package development a lot simpler. JSR is a great upgrade from NPM, and Deno makes it simple to publish to both NPM and JSR. The strict I/O permission system and WebGPU support are also nice to have.
> wrap a project into an `*.exe`
Deno makes this simple too, though that's where its bundling features stop. Honestly, I'm okay with that; I'd rather use Rolldown or Vite for web or library bundling.
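For anyone curious, the single-binary story is one command over an ordinary entry point. A trivial sketch (the file name, output name, and flags are illustrative; grant only the permissions the tool actually needs):

```typescript
// main.ts: any ordinary Deno entry point works. Build a standalone
// executable with:
//   deno compile --allow-net --output mytool main.ts
export function banner(): string {
  return "hello from a compiled Deno binary";
}

console.log(banner());
```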
Deno has been great for wrapping the dozens of REST APIs I need to use in MCP. The no-compilation thing means I can push and it's literally deployed in seconds. I run several dozen of these little servers for various use cases; it's a very cheap way to build an automatable life.
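The wrapping itself is mostly boilerplate. A hypothetical sketch of the shape (this is not the real MCP SDK API; the tool name, endpoint, and helpers are made up): each tool is a thin async wrapper around one REST call.

```typescript
// Hypothetical tool registry: each MCP-style "tool" just wraps one
// REST endpoint. Names and the endpoint URL are illustrative.
type Tool = (args: Record<string, string>) => Promise<string>;

const tools = new Map<string, Tool>();

function registerTool(name: string, fn: Tool): void {
  tools.set(name, fn);
}

// Wrap one (made-up) REST endpoint as one tool.
registerTool("get_weather", async ({ city }) => {
  const res = await fetch(
    `https://api.example.com/weather?city=${encodeURIComponent(city)}`,
  );
  return await res.text();
});

async function callTool(
  name: string,
  args: Record<string, string>,
): Promise<string> {
  const fn = tools.get(name);
  if (!fn) throw new Error(`unknown tool: ${name}`);
  return await fn(args);
}
```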
I do think simulating consciousness is within the realm of possibility. I also think it's absurdly silly to think LLMs (no matter their size) are conscious, if for no other reason than they can't actively learn.
I would maybe be comfortable classifying them as a snapshot of consciousness, but when you are interacting with an LLM it's far from interacting with a conscious entity.
How severe are we talking? I don't think there's any analog for how bad learning is for LLMs, which need multiple human lifetimes' worth of data to be trained.
In the hypothetical case that I truly lost all ability to learn, then yes, I would no longer consider myself conscious. I'd be an echo of a previously conscious entity.
I strongly believe the LLMs and API harnesses of today simply are not at the technological stage where such an API makes sense in web standards.
However, if this must be done, then it needs to be an opt-in, per-site permission at the very least, and there should be a way to verify the identity of the model being prompted (which extends even to minor tweaks made to system prompts).
As a user I need to be sure that I can't be fingerprinted by navigating to a random site and them using this API without my permission.
As a dev I need to know what model my users are using, so I have the option to craft specific prompts per model.
You are missing the point of the original argument.
It's not that the project maintainer can use a LLM to generate a PR, it's that they choose not to.
To relate it more closely to your argument: as someone involved in a project that does X, I would find little value in collaborating with the "author" of another AI-built project that does X. Whereas the authors of a project doing X who actually wrote the code, and therefore understand it and the problem space better, would be extremely valuable peers.
If you are a responsible maintainer, you need to verify the correctness of the contribution whether you used an LLM to generate it or someone else did.
Having someone else be the AI middleman just introduces additional complexity and confusion.
I'm often curious how well the arena approach works for highly dynamic graphs. You end up having to reimplement ref counting or a custom GC, which carries more risk.
Even the memory locality of an arena doesn't provide a benefit, because memory order does not mirror execution order.
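To make the concern concrete, here's a minimal hypothetical sketch of an index-based arena for graph nodes. The free list and `alive` flag are exactly the hand-rolled lifetime tracking a dynamic graph forces on you, and slot recycling is why memory order stops mirroring allocation order.

```typescript
// Minimal index-based arena for graph nodes (illustrative, not from any
// particular library). Releasing and reusing slots is the ad-hoc GC.
interface GraphNode {
  value: number;
  edges: number[]; // indices into the same arena
  alive: boolean;
}

class NodeArena {
  private nodes: GraphNode[] = [];
  private free: number[] = [];

  alloc(value: number): number {
    const recycled = this.free.pop();
    if (recycled !== undefined) {
      // A "new" node lands wherever an old one died, so memory order
      // no longer tracks allocation or traversal order.
      this.nodes[recycled] = { value, edges: [], alive: true };
      return recycled;
    }
    this.nodes.push({ value, edges: [], alive: true });
    return this.nodes.length - 1;
  }

  // Caller must guarantee no live edges still point at `i`: this is the
  // hand-rolled ref-counting burden mentioned above.
  release(i: number): void {
    this.nodes[i].alive = false;
    this.free.push(i);
  }

  get(i: number): GraphNode {
    const n = this.nodes[i];
    if (!n.alive) throw new Error(`use-after-free of node ${i}`);
    return n;
  }
}
```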
It is full of the short sentences that AI writing loves, presumably to feel "punchy." Normally you would copy-edit that stuff, join them up, give the writing some rhythm. I agree with GP: the article is hard to read because it seems to have a lot of https://tropes.fyi/
Twitter is full of strung together short punchy sentences, and it spread to articles long before AI.
Not discounting the possibility that it's AI, but it didn't have the same repetition, contradiction, and inaccuracies I notice in other AI content. Though even that isn't exclusive to AI.
Try for too much impact, and you end up browbeating the reader until they're little more than metaphorical pulp. A human writer might like using those types of sentences--or any of the obvious LLM writing tropes--in specific contexts, but they'll usually recognize the need to avoid overusing them.
LLMs don't, and so the tropes get repeated ad nauseam. It doesn't help that social media posts are a huge part of their training data; there's a large body of research on how Twitter and social media in general have shifted grammar and sentence construction toward patterns more common in oral traditions, as users sought ways to make their voices heard.
It's easy to imagine a more polished version of a line like "It's not X. It's Y!" being tossed out during a speech precisely because it can be dramatic and punchy. When it's done in every other paragraph, however, it can become rather disconcerting.
People are losing their ability to reason without prompting an LLM first.
It's affecting their ability to collaborate. They retain the confidence of years of experience, but their brains no longer go through the process of checking their assumptions.
I've seen something similar happen to engineers who move into management, but this is now happening at a much larger scale.
It's much more than a management problem: experienced software engineers are actively opting into apathy and the atrophy of their craft.
I see many peers getting worse at their craft. It's especially disheartening to watch people I admired for their problem solving devolve into people who delegate more and more of their reasoning to LLMs. It really hurts working with them. If you raise a concern or criticism of "their" approach to a problem, they either dismiss it offhand as invalid, or they go discuss it with their LLM of choice, making themselves a bottleneck to collaboration.
As the article suggests, I suspect we're in for a real dark age of software as companies struggle to know whom to keep, or whether they can even trust that those with vital knowledge and skill today will retain it going forward.
This is my takeaway too. I see some interesting toys here and there, but not much of substance. Meanwhile, all the GitHub issues I follow for open-source projects have ground to a halt, and the products I use have had no significant updates. Even AI products are slow to improve their interfaces.