I've also been wanting to make Gleam my primary language (I'm generally a TypeScript dev), and I have not had any issue using it with LLMs (caveat: I'm obviously still new to it, so I might just be ignorant).
In fact, I'd say most of the Gleam code that has been generated has been surprisingly reliable and easy to reason about. I suspect this has to do with the static typing, incredible language tooling, and small surface area of the language.
I literally just copy the docs from https://tour.gleam.run/everything/ into a local MD file and let it run. Packages are also well documented, and Claude has had no issue looping with tests/type checking.
In the past month I've built the following, all primarily with Claude writing the Gleam parts:
- A realtime holiday celebration app for my team where Gleam manages presence, cursor state, emojis, and guestbook writes (still rough): https://github.com/devdumpling/snowglobe
- A private autobattler game backend built for the web
While it's obviously not as well-trodden as building in TypeScript or Go or Rust, I've been really happy with the results as someone not super familiar with the BEAM/Erlang.
EDIT: Sorry I don't have demos up for these yet. Wasn't really ready to share them but felt relevant to this thread.
> I fear LLMs have frozen programming language advancement and adoption for anything past 2021.
Why would that be the case? Many models have knowledge cutoffs in this calendar year. Furthermore I’ve found that LLMs are generally pretty good at picking up new (or just obscure) languages as long as you have a few examples. As wide and varied as programming languages are, syntactically and ideologically they can only be so different.
There's a flywheel where programmers choose languages that LLMs already understand, but LLMs can only learn languages that programmers write a sufficient amount of code in.
Because LLMs make it that much faster to develop software, any potential advantage you may get from adopting a very niche language is overshadowed by the fact that you can't use it with an LLM. This makes it that much harder for your new language to gain traction. If your new language doesn't gain enough traction, it'll never end up in LLM datasets, so programmers are never going to pick it up.
> Because LLMs make it that much faster to develop software
I feel as though "facts" such as this are presented to me all the time on HN, but in my everyday job I encounter devs creating piles of slop that even the most die-hard AI enthusiasts in my office can't stand and have started to push against.
I know, I know, "they just don't know how to use LLMs the right way!!!", but all of the better engineers I know, the ones capable of quickly assessing the output of an LLM, tend to use LLMs much more sparingly in their code. Meanwhile, the ones that never really understood software that well in the first place are the ones building agent-based Rube Goldberg machines that ultimately slow everyone down.
If we can continue living in this AI hallucination for 5 more years, I think the only people capable of producing anything of use or value will be devs who continued to devote some of their free time to coding in languages like Gleam, and who continued to maintain and sharpen their ability to understand and reason about code.
* One developer tried to refactor a bunch of GraphQL with an LLM and ended up checking in a bunch of completely broken code. Thankfully there were API tests.
* One developer has an LLM making his PRs. He slurped up my unfinished branch, PRed it, and merged (!) it. One can only guess that the approver was also using an LLM. When I asked him why he did it, he was completely baffled and assured me he would never. Source control tells a different story.
* And I forgot to turn off LLM autocomplete after setting up my new machine. The LLM wouldn't stop hallucinating non-existent constructors for non-existent classes. Bog-standard IntelliSense did what I needed in seconds once I turned LLM autocomplete off.
LLMs sometimes save me some time. But overall they have wasted enough of my time that the savings have not yet offset it.
> One developer tried to refactor a bunch of GraphQL with an LLM and ended up checking in a bunch of completely broken code. Thankfully there were API tests.
So the LLM was not told how to run the tests? Without that it cannot know whether what it did works. They are a bit like humans: they try something, and then they need to check whether it does the right thing. Without a test cycle you definitely don't get a lot out of LLMs.
You guys always find a way to say "you can be an LLM maximalist too, you just skipped a step."
The bigger story here is not that they forgot to tell the LLM to run tests, it's that agentic use has been so normalized and overhyped that an entire PR was attempted without any QA. Even if you're personally against this, this is how most people talk about agents online.
You don't always have the privilege of working on a project with tests, and rarely are they so thorough that they catch everything. Blindly trusting LLM output without QA or Review shouldn't be normalized.
A LOT of people, if you're paying attention. Why do you think that happened at their company?
It's not hard to find comments from people vibe coding apps without understanding the code, even apps handling sensitive data. And it's not hard to find comments saying agents can run by themselves.
I mean people are arguing AGI is already here. What do you mean who is normalizing this?
I fully believe there are misguided leaders advocating for "increasing velocity" or "productivity" or whatever, but the technical leaders should be pushing back. You can't make a ship go faster by removing the hull.
And if you want to try... well you get what you get!
But again, no one who is serious about their business and serious about building useful products is doing this.
> But again, no one who is serious about their business and serious about building useful products is doing this.
While this is potentially true for software companies, there are many companies for which software or even technology in general is not a core competency. They are very serious about their very useful products. They also have some, er, interesting ideas about what LLMs allow them to accomplish.
I am not saying you should be a LLM maximalist at all.
I am just saying LLMs need to have a change-test cycle, like humans, in order to be effective. But it looks like your goal is not really to be effective at using LLMs, but to bitch about it on the internet.
> But it looks like your goal is not really to be effective at using LLMs, but to bitch about it on the internet
Listen, you can engage with the comment or ignore everything but the first sentence and throw out personal insults. If you don't want to sound like a shill, don't write like one.
When you're telling people the problem is the LLM did not have tests, you're saying "Yeah I know you caught it spitting out random unrelated crap, but if you just let it verify if it was crap or not, maybe it would get it right after a dozen tries." Does that not seem like a horribly ineffectual way to output code? Maybe that's how some people write code, but I evaluate myself with tests to see if I accidentally broke something elsewhere. Not because I have no idea what I'm even writing to begin with.
You wrote
> Without that they cannot know if what they did works, and they are a bit like humans
They are exactly not like humans this way. LLMs break code by not writing valid code to begin with. Humans break code by forgetting an obscure business rule they heard about 6 months ago. People work on very successful projects without tests all the time. It's not my preference, but tests are non-exhaustive and no replacement for a human that knows what they're doing. And the tests are meaningless without that human writing them.
So your response to that comment, pushing them further down the path of agentic code doing everything for them, smacks of maximalism, yes.
You are overlooking a blind spot that is increasingly becoming a weakness for devs: you assume that businesses care whether their software actually works. It sounds crazy from the dev side, but they really don't. As long as cash keeps hitting the accounts, the MBAs in charge do not care how it gets there, and the program for finding that out requires only one simple, unmistakable algorithm: money in minus money out.
Evidence:
Spreadsheets. These DSL-lite tools are almost universally known to be generally wrong and full of bugs. Yet the world literally runs on them.
Lowest-bidder outsourcing. It's well known that low-cost outsourcing produces non-functional or failed projects, or projects that limp along for years with nonstop bug stomping. Yet business is booming.
This only works in a very rich empire that is in the collapse/looting phase. Which we are in and will not change. See: History.
I wish I could just ship 99% AI generated code and never have to check anything.
Where is everyone working where they can just ship broken code all the time?
I use LLMs for hours, every single day, yes sometimes they output trash. That’s why the bottleneck is checking the solutions and iterating on them.
All the best engineers I know, the ones managing 3-4 client projects at once, are using LLMs nonstop and outputting 3-4x their normal output. That doesn’t mean LLMs are one-shotting their problems.
I once toured a dairy farm that had been a pioneer test site for Lasix. Like all good hippies, everyone I knew shunned additives. This farmer claimed that Lasix wasn't a cheat because it only worked on really healthy cows. Best practices, and then add Lasix.
I nearly dropped out of Harvard's mathematics PhD program. Sticking around and finishing a thesis was the hardest thing I've ever done. It didn't take smarts. It took being the kind of person who doesn't die on a mountain.
There's a legendary Philadelphia cook who does pop-up meals, and keeps talking about the restaurant he plans to open. Professional chefs roll their eyes; being a good cook is a small part of the enterprise of engineering a successful restaurant.
(These are three stool legs. Neurodivergents have an advantage using AI. A stool is more stable when its legs are further apart. AI is an association engine. Humans find my sense of analogy tedious, but spreading out analogies defines more accurate planes in AI's association space. One doesn't simply "tell AI what to do".)
Learning how to use AI effectively was the hardest thing I've done recently, many brutal months of experiment, test projects with a dozen languages. One maintains several levels of planning, as if a corporate CTO. One tears apart all code in many iterations of code review. Just as a genius manager makes best use of flawed human talent, one learns to make best use of flawed AI talent.
My guess is that programmers who write bad code with AI were already writing bad code before AI.
> but LLMs can only learn languages that programmers write a sufficient amount of code in
I wrote my own language, and LLMs have been able to work with it at a good level for over a year. I don't do anything special to enable that - just front-load some key examples of the syntax before giving the task. I don't need to explain concepts like iteration.
Also, LLMs can work with languages with unconventional paradigms - kdb comes up fairly often in my world (an array language that is also written right to left).
I don't think this is actually true. LLMs have an impressive ability to do knowledge transfer between domains, and it only makes sense that that would also apply to programming languages, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere.
If this does turn out to become a problem, it should not be hard to apply the same RLHF infrastructure that's used to get LLMs effective at writing syntactically correct code that accomplishes sets of goals in existing programming languages to new ones.
> LLMs have an impressive ability to do knowledge transfer between domains, and it only makes sense that that would also apply to programming languages, since the basic underlying concepts (functions, data structures, etc.) exist nearly everywhere.
That would make sense if LLMs understood the domains and the concepts. They don't. They need a lot of training data to "map" the "knowledge transfer".
Personal anecdote: Claude stopped writing Java-like Elixir only some time around summer this year (Elixir is 13 years old), and it is still incapable of writing "modern HEEx", which changed some of the templating syntax in Phoenix almost two years ago.
Pure anecdote. Over the last year I've taken the opportunity to compare app development in Swift (+ SwiftUI and SwiftData) for iOS with React Native via Expo. I used Cursor with both OpenAI and Anthropic models. The difference was stark. With Swift the pace of development was painfully slow with confused outputs and frequent hallucinations. With React and Expo the AI was able to generate from the first few short prompts what it took me a month to produce with Swift. AI in development is all about force multipliers, speed of delivery, and driving down cost per product iteration. IMO There is absolutely no reason to choose languages, frameworks, or ecosystems with weaker open corpuses.
I would argue it's more important than ever to make new languages with new ideas as we move towards new programming paradigms. I think the existence of modern LLMs encourages designing a language with all of the following attributes:
- Simple semantics (e.g. easy to understand for developers + LLMs, code is "obviously" correct)
- Very strongly typed, so you can model even very complex domains in a way the compiler can verify
- Really good error messages, to make agent loops more productive
- [Maybe] Easily integrates with existing languages, or at least makes it easy to port from existing languages
We may get to a point where humans don't need to look at the code at all, but we aren't there yet, so making the code easy to vet is important. Plus, there are also a few bajillion lines of legacy code that we need to deal with; wouldn't it be cool if you could port it (or at least extend it) into some standardized, performant, LLM-friendly language for future development?
I think that LLMs will be complemented best with a declarative language, as inserting new conditions/effects in them can be done without modifying much (if any!) of the existing code. Especially if the declarative language is a logic and/or constraint-based language.
We're still in early days with LLMs! I don't think we're anywhere near the global optimum yet.
> It’d be like inventing a new assembly language when everyone is writing code in higher level languages that compile to assembly.
Isn't that what WASM is? Or more or less what is going on when people devise a new intermediate representation for a new virtual machine? Creating new assembly languages is a useful thing that people continue to do!
We may end up using AI to create simplified bespoke subset languages that fit our preferences. Like a DSL of sorts but with better performance characteristics than a traditional DSL and a small enough surface area.
It goes further than non-determinism. LLM output is chaotic: two nearly identical prompts with a single minor difference can result in two radically different outputs.
> For those that don't know its also built upon OTP, the erlang vm
This isn't correct. It can compile to run on the BEAM: that is the Erlang VM. OTP isn't the Erlang VM; rather, "OTP is set of Erlang libraries and design principles providing middle-ware to develop [concurrent/distributed/fault tolerant] systems."
Importantly: "Gleam has its own version of OTP which is type safe, but has a smaller feature set. [vs. Elixir, another BEAM language with OTP support]"
The comment you are replying to is correct, and you are incorrect.
All OTP APIs are usable as normal within Gleam, the language is designed with it in mind, and there’s an additional set of Gleam specific additions to OTP (which you have linked there).
Gleam does not have access to only a subset of OTP, and it does not have its own distinct OTP-inspired replacement. It uses the OTP framework.
What's the state of Gleam's JSON parsing / serialization capabilities right now?
I find it to be a lovely little language, but having to essentially write every type three times (once for the type definition, once for the serializer, once for the deserializer) isn't something I'm looking forward to.
A functional language that can run both on the backend (BEAM) and the frontend (JS) lets one do a lot of cool stuff, like optimistic updates, server reconciliation, easy rollback on failure, etc., but that requires making actions (and likely also states) easily serializable and deserializable.
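To make the "three times" point concrete, here is roughly what a single record looks like in Gleam today. This is a sketch using the gleam_json package and the gleam/dynamic/decode module; the names are illustrative, and these APIs have shifted between versions, so check your stdlib:

```gleam
import gleam/dynamic/decode
import gleam/json

// 1. The type itself.
pub type User {
  User(name: String, age: Int)
}

// 2. The serializer, restating every field.
pub fn user_to_json(user: User) -> json.Json {
  json.object([
    #("name", json.string(user.name)),
    #("age", json.int(user.age)),
  ])
}

// 3. The deserializer, restating every field a third time.
pub fn user_decoder() -> decode.Decoder(User) {
  use name <- decode.field("name", decode.string)
  use age <- decode.field("age", decode.int)
  decode.success(User(name: name, age: age))
}
```

Add a field to User and both functions have to change in lockstep, which is the maintenance cost being discussed here.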
You can generate those conversions, most people do.
But also, you shouldn't think of it as writing the same type twice! If you couple your external API and your internal data model you are greatly restricting your domain modelling capability. Even in languages where JSON serialisation works with reflection I would recommend having a distinct definition for the internal and the external structure so you can have the optimal structure for each context, dodging the "lowest common denominator" problem.
I understand your point, and I agree with it in most contexts! However, for the specific use case where one assumes that the client and server are running the exact same code (and the client auto-refreshes if this isn't the case), and where serialization is only used for synchronizing between the two, decoupling the state from its representation on the wire doesn't really make sense.
This is also what really annoyed me when I tried out Gleam.
I'm waiting for something similar to serde in Rust, where you simply tag your type and it'll generate type-safe serialization and deserialization for you.
Gleam has some feature to generate the code for you via the LSP, but it's just not good enough IMHO.
Rust has macros that make serde very convenient, which Gleam doesn't have.
Could you point to a solution that provides serde level of convenience?
Edit: The difference between generating code (as with Gleam) and having macros generate the code from a few tags is quite big. Small tweaks are immediately obvious with serde in Rust, but they drown in the noise of the complete serialization code with the Gleam tools.
> Rust has macros that make serde very convenient, which Gleam doesn't have.
To be fair, Rust's proc macros are only locally optimal:
While they're great to use, they're only okay to program.
Your proc-macro needs to live in another crate, and writing proc macros is difficult.
Compare this to dependently typed languages and Zig's comptime: it should be easier to offer derive(Serialize, Deserialize) as a compile-time feature inside the host language.
Since Gleam doesn't have Rust-style derivation, it leaves room for a future where this is solved even better.
"Elixir has better support for the OTP actor framework. Gleam has its own version of OTP which is type safe, but has a smaller feature set."
At least on the surface, "but has a smaller feature set" suggests that there are features left on the table, which I think it would be fair to read as a subset of support.
If I look at this statement from the Gleam OTP Library `readme.md`:
"Not all Erlang/OTP functionality is included in this library. Some is not possible to represent in a type safe way, so it is not included. Other features are still in development, such as further process supervision strategies."
That quote leaves the impression that OTP is not fully supported and therefore only a subset is. It doesn't expound further to say unsupported OTP functionality is alternatively available by accessing the Erlang modules/functions directly or through other mechanisms.
In all of this I'll take your word for it over the website and readme files; these things are often not written directly by the principals and are often not kept as up-to-date as you'd probably like. Still, even taking that at face value, I think it leaves some questions open. What is meant by supporting all of OTP? Where the documentation and library readme equivocate about full OTP support, are there trade-offs? Is "usable as normal" usable as normal for Erlang, or as normal for Gleam? For example, are the parts left out of the library available via directly accessing the Erlang modules/functions, but only at the cost of abandoning the Gleam type safety guarantees for those of Erlang? How does this hold for Gleam's JavaScript compilation target?
As you know, Elixir also provides for much OTP functionality via direct access to the Erlang libraries. However, there I expect the distinction between Elixir support and the Erlang functionality to be substantially more seamless than with Gleam: Elixir integrates the Erlang concepts of typing (etc.) much more directly than does Gleam. If, however, we're really talking about full OTP support in Gleam while not losing the reasons you might choose Gleam over Elixir or Erlang, which I think is mostly going to be about the static typing... then yes, I'm very wrong. If not... I could see how strictly speaking I'm wrong, but perhaps not completely wrong in spirit.
Ah, that’s good feedback. I agree, that documentation is misleading. I’ll fix them ASAP.
> Elixir also provides for much OTP functionality via direct access to the Erlang libraries.
This is the norm in Gleam too! Gleam’s primary design constraint is interop with Erlang code, so using these libraries is straightforward and commonplace.
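For anyone who hasn't seen it, that interop is a one-line binding rather than a wrapper layer. A minimal sketch, assuming the gleam_erlang package for the Atom type (erlang:node/0 is just an arbitrary example of an Erlang built-in):

```gleam
import gleam/erlang/atom.{type Atom}

// Bind directly to Erlang's erlang:node/0. Gleam trusts the
// signature declared here, so Erlang/OTP functions are one
// annotation away from typed Gleam code.
@external(erlang, "erlang", "node")
pub fn current_node() -> Atom
```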
Thanks for the clarification. I've read about Gleam here and there, and played with it a bit, and thought there was no way to directly access OTP through the Erlang libraries.
This can be just my lack of familiarity with the ecosystem though.
Gleam looks lovely and IMO is the most readable language that runs on the BEAM VM. Good job!
I wonder why so many have got this wrong across this thread? Was it true once upon a time or something, or have people just misunderstood your docs or similar?
OTP is a very complex subject and quite unusual in its scope, and it's not even overly clear what it even is. Even in Erlang and Elixir it's commonly confused, so I think it's understandable that Gleam has the same problem, compounded further by its more distinct programming style.
hey, thanks for the clarification. I was under the impression that Gleam had a few shortcomings re: OTP, like missing APIs or the need to fall back to Erlang. Many people I know who work regularly with Elixir hold similar opinions - do you have any idea what happened there? Is there a lack of publicity for this support? Is it a documentation problem?
I presume they checked out Gleam years ago, or their investigation was more shallow.
That aside, it is normal in Elixir to use Erlang OTP directly. Neither Elixir nor Gleam provides an entirely alternative API for OTP. It is a strength that BEAM languages call each other, not a weakness.
who cares, just don't shove political opinions into a software project. we are devs, not jobless SJWs running around the road with some useless sign board
> who cares, just don't shove political opinions into a software project. we are devs, not jobless SJWs running around the road with some useless sign board
Here we are, having a technical discussion and here you are, shoving politics into it.
They are working towards "the real thing", whatever your definition of real is.
BTW in the 90s people tried to come up with a type system for Erlang, and failed:
--- start quote ---
Phil Wadler [1] and Simon Marlow [2] worked on a type system for over a year and the results were published in [3]. The results of the project were somewhat disappointing. To start with, only a subset of the language was type-checkable, the major omission being the lack of process types and of type checking inter-process messages. Although their type system was never put into production, it did result in a notation for types which is still in use today for informally annotating types.
Several other projects to type check Erlang also failed to produce results that could be put into production. It was not until the advent of the Dialyzer [4] that realistic type analysis of Erlang programs became possible.
--- end quote ---
I hate how people talk about type systems as if there were no trade-offs to be considered. A Hindley–Milner style type system would effectively kill half the features that make Elixir amazing, and worse, would break pretty much all existing code.
I don’t mean to minimize the huge effort by the Gleam team; however, Elixir cannot become Gleam without breaking OTP/BEAM in the same ways Gleam does. As it stands now, Elixir is the superior language between the two, if using the full Erlang VM is your goal.
I didn't know about the statem limitation; I have, however, worked around it with a gen_server-like wrapper, so that all state transitions were handled by Gleam's type system.
I have been meaning to ask about that on the Discord, but it's one of the ten thousand things on my backlog.
Maybe I could write a gen_event equivalent... I have some code which does very similar things.
I just implemented a project in Elixir with LLM support and would never have considered that before (I had never used Elixir before). So who knows, maybe it will help adoption?
TLDR but it shows how you could teach an LLM your GraphQL query language to let it selectively load context into what were very small context windows at the time.
After that the MCP specification came out, which from my vantage point is a poor and half-implemented version of what GraphQL already is.
Not because I love Anthropic (I do like them) but because it staves off my having to change my coding agent.
This world is changing fast, and both keeping up with State of the Art and/or the feeling of FOMO is exhausting.
I've been holding onto Claude Code for the last little while since I've built up a robust set of habits, slash commands, and subagents that help me squeeze as much out of the platform as possible.
But with the last few releases of Gemini and Codex I've been getting closer and closer to throwing it all out to start fresh in a new ecosystem.
Thankfully Anthropic has come out swinging today and my own SOPs can remain intact a little while longer.
I think we are at the point where you can reliably ignore the hype and not get left behind. Until the next breakthrough at least.
I've been using Claude Code with Sonnet since August, and there hasn't been any case where I thought about checking other models to see if they are any better. Things just worked. Yes, it requires effort to steer correctly, but all of them do, with their own quirks. Then 4.5 came, and things got better automatically. Now with Opus, another step forward.
I've just ignored all the people pushing Codex for the last few weeks.
Don't fall into that trap and you'll be much more productive.
The most effective AI coding assistant winds up being a complex interplay between the editor tooling, the language and frameworks being used, and the person driving. I think it’s worth experimenting. Just this afternoon Gemini 3 via the Gemini CLI fixed a whole slate of bugs that Claude Code simply could not, basically in one shot.
If you have the time & bandwidth for it, sure. But I do not, as I'm already at max budget with the $200 Anthropic subscription.
My point is, the cases where Claude gets stuck and I have to step in and figure things out have been few and far between enough that it doesn't really matter. If the programmer's workflow is working fine with Claude (or Codex, Gemini, etc.), they shouldn't feel like they are missing out by not using the other ones.
Using both extensively I feel codex is slightly “smarter” for debugging complex problems but on net I still find CC more productive. The difference is very marginal though.
I tried Codex due to the same reasoning you list. The grass is not greener on the other side. I usually only opt for Codex when my Claude Code rate limit hits.
I personally jumped ship from Claude to OpenAI due to the rate-limiting in Claude, and have no intention of coming back unless I get convinced that the new limits are at least double of what they were when I left.
Even if the code generated by Claude is slightly better, with GPT I can send as many requests as I want and have no fear of running into any limit, so I feel free to experiment and screw up if necessary.
You can switch to consumption-based usage and bypass this altogether, but it can be expensive. I run an enterprise account and my biggest users spend ~$2,000 a month on Claude Code (not SDK or API). I tried to switch them to subscription-based at $250 and they got rate-limited on the first/second day of usage like you described. I considered having them default to subscription and then switch to consumption when they get rate-limited, but I didn't want to burden them with that yet.
However, many of our CC users actually don't hit the $250 number most months, so surprisingly it's actually cheaper to use consumption in many cases.
Don't throw away what's working for you just because some other company (temporarily) leapfrogs Anthropic a few percent on a benchmark. There's a lot to be said for what you're good at.
I also really want Anthropic to succeed because they are without question the most ethical of the frontier AI labs.
Aren't they pursuing regulatory capture for monopoly-like conditions? I can't trust any edge in consumer friendliness when that is their longer-term goal and those are the tactics they employ today toward it. It reeks of performativity.
> I also really want Anthropic to succeed because they are without question the most ethical of the frontier AI labs.
I wouldn't personally call Dario spending all this time lobbying to ban open-weight models "ethical", but at least he's not doing Nazi signs on stage and doesn't have a shady crypto company trying to harvest the world's biometric data, so maybe the bar is just low.
I can't speak to his true motives, but there are ethical reasons to oppose open weights. Hinton is an example of a non-conflicted advocate for that. If you believe AI is a powerful dual-use technology like nuclear, open weights are a major risk.
You need much less of a robust set of habits, commands, and subagent-type complexity with Codex. Not only because it lacks some of these features; it also doesn't need them as much.
The benefit you get from juggling different tools is at best marginal. In terms of actually getting work done, Sonnet and GPT-5.1-Codex are both pretty effective. It looks like Opus will be another meaningful, but incremental, change, which I am excited about but which probably won't dramatically change how much these tools impact our work.
I threw a few hours at Codex the other day and was incredibly disappointed with the outcome…
I’m a heavy Claude code user and similar workloads just didn’t work out well for me on Codex.
One of the areas I think is going to make a big difference to any model soon is speed. We can build error-correcting systems into the tools, but the base models need more speed (and, with that, obviously lower costs).
Not GP but my experience with Haiku-4.5 has been poor. It certainly doesn't feel like Sonnet 4.0 level performance. It looked at some python test failures and went in a completely wrong direction in trying to address a surface level detail rather than understanding the real cause of the problem. Tested it with Sonnet 4.5 and it did it fine, as an experienced human would.
While it's been a long time since I've used Thunderbird, I just wanted to take the time to publicly say thank you.
Many HNers probably won't (or can't) remember the world of desktop mail clients, but basically, during the height of MSFT dominance there was only one real mail client: Outlook. Which Microsoft was starting to monetize heavily, ignoring UX, and keeping Windows-only (can't blame them for that).
Then Thunderbird arrived on the scene, an OSS mail client that beat the pants off of Outlook in features, spam detection, IMAP support and a bunch of other things.
And it was free.
And you could use it on any machine.
This was a huge moment for OSS.
We owe a lot of credit to Mozilla and Thunderbird for rescuing us from a closed source world.
Before Thunderbird, Eudora was fantastic. We ran it at a college I worked at for most of the staff and faculty, and it was a very sad day when Qualcomm shut it down.
The last 'mainline' (pre-OSE) versions of Eudora for Mac and Windows were open-sourced and preserved as an artefact by the Computer History Museum[2] in 2018; as part of the preservation, the CHM assumed ownership of the Eudora trademark.
The only actively maintained fork of the software, known as Eudoramail as of June 2024, originates from 'mainline' Eudora for Windows as preserved by the CHM. Hermes, its current maintainers, describe Eudoramail 8.0 as currently being in alpha; Wellington publisher Jack Yan, meanwhile, points out its stability, a number of well-characterised and reproducible display bugs notwithstanding.
On May 22, 2018, after five years of discussion with Qualcomm, the Computer History Museum acquired full ownership of the source code, the Eudora trademarks, copyrights, and domain names. The transfer agreement from Qualcomm also allowed the Computer History Museum to publish the source code under the BSD open source license. The Eudora source code distributed by the Computer History Museum is the same except for the addition of the new license, code sanitization of profanity within its comments, and the removal of third-party software whose distribution rights had long expired.
The time period under discussion ("before Thunderbird", and the heyday of Outlook lock-in, and I would also add before gmail) is well before 2018.
I used mutt at the time too, but I don't think it's in the same category as the graphical clients. For a while Gnome's evolution was also big in free OS circles.
Eventually, and I was glad to see it!, but way too late for it to matter much. I would've used Eudora when it was originally offered. Since I couldn't, I got comfortable with Thunderbird. And when my friends who used Eudora had to migrate off of it, I set them up with Thunderbird, too.
Eudora was practically a CULT. I worked for one of their users who straight refused to use anything else, and one of my ongoing jobs as an admin was trying to get Exchange to play nice with it. It was maddening.
I fired it up several times for testing purposes, I don't get the hype, but man, for some people it was just the best damn software ever made.
It did its thing—internet email—really well. It was aimed squarely at the user with like a POP account, and it had a clean UI and plenty of features. For the time and use case, it was a fantastic client.
Outhouse tried to be too many things at once: an email client with HTML/rich-text features that left Microsoft crufties (including mso: tags and the infamous J smiley) all over your emails, a contact manager, a calendar. It was heavyweight, slow, and not quite there in terms of UI. But if you're an MBA type and you're committed to MSFT, or you're looking for a turnkey solution and it's this or Lotus Notes, Outhouse and Exchange sound like a win.
The Bat! was absolutely the best email client, ever. Way ahead of Eudora.
It was a massive step back when I switched to my first MacBook in 2006 (the black one!) and started to use Thunderbird.
That said, Thunderbird is fantastic now, and it's great to see it get native Exchange support!
Eudora had its own very distinct take on mail client UI. Many loved it. I never really got on with it, although I could use it.
While the native codebase is probably too old to salvage now, there was a project to write a Eudora-style UI for Thunderbird as an add-on. That might be easier to revive for 21st century email.
I know people that used it because it was self contained for Windows if I remember correctly. I remember one person running the installation off of a Zip drive back in the 90's. I warned them that Zip disks like to randomly self destruct and he'd better be making backups.
Used to rely heavily on this, until an upgrade on one of my systems blew up the entire profile and I just stuck with webmail mostly. On my phone I have gmail, Outlook and Thunderbird (forked from K-9).
> but basically during the height of MSFT dominance there was only one real mail client: Outlook.
On Windows, you had:
* Netscape Suite (later Seamonkey)
* Eudora
* Pegasus
and (edit:) two of those still exist. Plus, Outlook cost money (unless you used Outlook Express), while Netscape was gratis, and on Linux and most Unix variants, Outlook has never even existed. On Linux specifically there's Evolution and there's KMail.
And I'm sure I'm forgetting a few others.
> Then Thunderbird arrived on the scene
It was a development of the MailNews component of Netscape, to use the same XUL-based platform as Firefox. So, an evolution, not a revolution.
I loved Pegasus. Specifically because to move it to another machine you just had to copy the PMAIL folder and make a shortcut. No registry awareness, no dependencies.
> Many HNers probably wont (or cant) remember the world of desktop mail clients...
If there are people who have never used a desktop mail client, I will say you owe it to yourself to try one. Web clients suck compared to desktop clients, it's not even close between the two. Sticking with just the Gmail interface (or whatever) is so limiting; definitely give alternatives a shot if you haven't.
> Sticking with just the Gmail interface (or whatever) is so limiting
Perhaps it's the fact that I grew up with Gmail throughout my education (and now my career), but most local clients lack one key feature - quick move!
My entire workflow around emails is based around opening & reading them, and then using the "Quick Move" button in Gmail to move it into a specific folder by typing the first few letters of the folder and hitting enter.
I know there are extensions for Thunderbird like Quick Folder Move [0], but I find these can be buggy, slow, etc. I presume these are just the realities of dealing with email providers who'd prefer you use their webmail clients rather than Thunderbird et al.
GNOME Evolution has Shift-Ctrl-V to move to a folder with typeahead search. I don't use the Gmail web client so I can't say how it compares.
I should note that I mostly use the emacs notmuch mail client, which requires having the mail mirrored locally (which I do with e.g. isync/mbsync), but gives really responsive and rich search and tagging capabilities
I tried a couple of them, and they both started downloading my entire backlog of email to my hard drive, which I didn't want.
I couldn't think of a reason why this would be necessary, but I haven't really kept up with how the technology has evolved in recent years. Is this behavior intrinsic to desktop clients?
Intrinsic, no. Common, yes. Many people who use desktop clients want a local copy of a substantial fraction of their email so that they can review or compose messages while off-line. Desktop clients also operate faster and can provide robust search services only if they have a cached copy of the messages on disk.
I can think of reasons why I might want a local copy, but they didn't apply in my case.
Do I have control over my data? I'm not sure I understand the question, but in this case the answer seems like a clear no, as my employer manages the email server.
Definitely make sure to adjust your defaults if you decide to dip your toes into NNTP... I hate some of the defaults there, namely the reply/respond button defaults. Usually you want to respond to the group, not send an email to the poster.
That said, NNTP is so dead at this point, outside some active BBSes that offer NNTP access. Usenet definitely feels like a wasteland when I've looked around the past couple years.
Opera had an amazing built-in NNTP and email client. I think it was my first experience with views instead of folders, so my emails could appear in multiple "folders" (I think now we call them "smart folders").
It is doubly weird because Unix has always supported this.
I think it was an accident of how the Unix filesystem was implemented, but basically: every file has at least one name but can have as many as you want, and if a file ever has zero names it gets deleted. Note that every open handle on a file counts as an additional reference to it.
By accident, in that I don't think it was designed this way. As they were putting together the filesystem: "hey, what happens if two directory entries point to the same data?" Anyone else: "We will make a complicated locking system to prevent that from happening." The Unix madlads: "Ship it and call it a feature. Hell, work it into how files are opened as well; then you can do tricksy stuff like open a file and then delete it, so it doesn't exist anywhere in the filesystem but is still on disk."
The funny, in an ironic sense, thing is that while this sort of naturally fell out of the first design of the Unix filesystem, it is not natural at all to modern copy-on-write filesystems; they have to do contortions to support it, but they do, because it is now what people expect.
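Both behaviors are easy to see from a POSIX shell:

```sh
$ echo hello > a
$ ln a b        # two names (directory entries), one inode
$ rm a          # one name left; the data lives on as "b"
$ exec 3< b     # hold an open descriptor on the file
$ rm b          # zero names, but the open fd still pins the inode
$ cat <&3       # the "deleted" file is still readable
hello
$ exec 3<&-     # close the descriptor; only now is the inode freed
```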
Personally I do not use Thunderbird, but one elderly relative requires Thunderbird, so I am all in favour of Thunderbird getting better. Not everyone is able to use email in a much simpler way. Back when I was still using Gmail, I actually had some 4,000+ unread messages. I simply can not keep up with regular mail.
What, you don't think people were flocking over to Mutt?
When I was first getting into Linux, I liked Evolution a lot, though admittedly I haven't used it in a while. Honestly I haven't really used Mutt in a while either; webmail is just easier.
Evolution is great; it's also had Outlook EWS (including OAuth2) support for several years now. I am still mystified as to why Thunderbird is so much more popular (though it's nice to see that Thunderbird is getting some much-needed TLC more recently).
Thunderbird is the only MOZ product that I still use daily, almost on par with Mail.app if not more, and I hope to keep using it, unless they eventually release the iOS Thunderbird after making it unrecognisable to me and ensuring that some of the differentiating Thunderbird features are missing, like the ability to send email from any address on a domain by just editing the "From" field (of course, it will work only if you own that domain). It's a feature I can't do without (and I utilise it a lot on desktop). Then there are forever-pending things like maildir support :)
I used, and even paid for, The Bat! at around this time, but as it was the emailer of choice for spammers, back when spamming was a newish thing, I kept getting perfectly legitimate mail bounced, and the developers had to constantly update the client to traverse the anti-Bat! internet. Which was a pain. I also used the Opera email client for a while. Which was dross.
It also had a password dialog that showed fun hieroglyphs as you typed, instead of dots or asterisks. But, oops, those would change deterministically depending on what you typed.
I still use Thunderbird and I love it. Even though I absolutely hate email, and it is a chaotic clusterfuck that we act like is bulletproof.
I'm incredibly impressed at how feature-deficient email is, but Thunderbird gives a lot of power back. It's just a lot of little things that add up. Like, why are tagging and sorting so hard? But Thunderbird makes them easy, giving you as many tags as you want and letting you label as you please. In Gmail, Outlook, or Apple Mail you can't implement filtering, but in Thunderbird you can. There are just so many junk emails being sent from accounts I can't outright block, and my inbox is a nightmare of chaos without these. Sure, I wish I could do regex and it was more feature-rich, but it is strong enough that I can already catch a lot of emails that Gmail's spam detection misses. Like, what the fuck is with this spam detection? It is missing things where my email is not even in the To or {B,}CC fields![0]
> And you could use it on any machine.
The only thing I'm missing is on iOS. Email on my phone is a literal joke. Apple Mail[0.1] is the only one (compared to Gmail, Outlook, and Thunderbird) that previews a PDF. It seems like they're just helping scammers. I routinely get PayPal crypto scams and they look reasonably legitimate on Apple Mail but nowhere else. I could see how someone could be fooled, but I don't even have a PayPal account lol.
But on this note, we really do need to do something about email. We treat it so poorly. I use a lot of relay and proxy addresses now[1]. I'm also sending out a lot of resumes lately and it is surprising how we treat email. Like Microsoft only gives you SSO and then forces your email through that, not allowing you to add another email address. Not everything is "[email protected]", I use "[email protected]"[2] and "[email protected]" (ditto [2]). In a world where we keep IDs for decades, where emails are constantly scraped and leaked, and where logins are tied to emails, these proxies are more important than ever. When I dump my gmail address I can also just redirect my two entry points (the mozmail and website domains) towards my new one. It is still not a great solution but at least it is easier to dump [email protected] and move to [email protected] than it is to go from [email protected] to [email protected].
If anyone has a better solution to this too, please let me know. I really fucking hate email and it seems like there's a ton of low hanging fruit
[0] The source of the email is a bit complicated and is clearly a LLM bypass by looking like generic emails like password resets or login alerts, but if my email was [email protected] it looks like it is sent to `[email protected] <[email protected]>` CC `[email protected]`. It feels like we've gone backwards in spam detection. These are trivial to detect!
[0.1] And dear god, the least Apple Intelligence could do is run a god damn Naive Bayes filter on my text messages. You can surely do that on device! No Angela, I don't want to learn more about how I can make $500/wk and at no point in time have I ever wanted to accept a text message from a +63 country code... nor do I ever accept a call from my original area code as I haven't lived in the area for decades and it is a great filter to know who's spam.
[1] I use both Firefox relay and my personal website as Cloudflare gives you free email forwarding. Firefox relay integrates into Bitwarden (most of the time...) and it makes it really convenient for giving websites unique emails and unique passwords. Also helpful when you are given a piece of paper as you can create an email on the spot, block them as needed, and track how they're traded.
[2] I don't actually have the "godelski.mozmail.com" domain, so don't send me mail there. Though I wish relay would allow you to buy a second domain (and Signal would allow you at least 2 usernames!) At least give me one "clear" and one "handle".
> I'm incredibly impressed at how feature-deficient email is . . . It's just a lot of little things that add up. Like, why are tagging and sorting so hard?
If you read the specifications for the various email protocols, you'll soon discover that email, at the protocol level, is at its most feature-rich akin to flat files stored in a hierarchy of folders.
Tags, sorting, etc. are all the responsibility of clients. (Which is as it should be, since sorting is part of viewing data, not storing or sending it.) Regarding tags, I suppose you could roll out a new email protocol, but SMTP is nothing more than a few text commands to send and receive bytes, and any tagging would be done by the client alone or the server alone as a value-add. The feature itself could not be implemented via, for example, the SMTP spec.
When you send an email via SMTP, you send the server "MAIL FROM" plus the sender's address, "RCPT TO" plus the destination, "DATA" and the contents of the email, and then a dot on a line by itself to mark the end of the email.
The email is then immutable. The receiver would be the one who wants to tag an email, and since the email is immutable, there's nothing you can do. And even if the sender wants to tag it, there's no command. I suppose in theory you could just add the tags to the email body, but every recipient not using your "improved" email format would just see that in the body of the email.
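For the curious, a complete send really is just that handful of text commands. A minimal session looks something like this (server replies such as 220, 250, and 354 omitted; the hostnames and addresses are placeholders):

```
HELO client.example.com
MAIL FROM:<alice@example.com>
RCPT TO:<bob@example.com>
DATA
From: alice@example.com
To: bob@example.com
Subject: hello

The message body goes here.
.
QUIT
```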
In this context, the relevant protocol is IMAP, not SMTP. And IMAP very much has tagging and filtering, which is what Thunderbird exposes here. Heck, IMAP even has notes you can attach to mails, so you could discuss mail drafts using plain IMAP, but no client I know of exposes this.
Fair, but I think you missed the forest for the trees. You're right that I could be more clear but you also seem to understand that in context I'm discussing clients.
Nothing I've discussed has to do with protocol and everything has to do with clients, which is also in the context of what Thunderbird is. So I'm not sure why you're bringing up protocols as no one was discussing it until you brought it up.
- The mature frameworks (e.g. ASP.NET with Razor Pages) are great. Microsoft still has the same issue of pushing new and different ways of doing web things, but you do see a lot of that on the web platform in general.
- CLI workflow for compilation/build/deployment is now there and works smoothly. VS Code extensions for a bit of intellisense without requiring a full IDE (if that's the way you work).
The thing I enjoy most about modern C# is the depth/levels of progressive enhancement you can do.
Let's say in the first instance, you write a proof of concept algorithm using basic concepts like List<T>, foreach, stream writing. Accessible to a beginner, safe code, but it'll churn memory (which is GC'd) and run using scalar CPU instructions.
Depending on your requirements, you can then progressively reduce the memory churn or increase the processing speed, eventually getting to a point where it's nearly the same as the best C code you could write, with no memory churn (or stop-the-world GC) and SIMD over all CPU cores for blisteringly fast performance, whilst keeping all or most of the safety.
This performance aspect is interesting, so time to try C# again. I'm learning Zig for some of those reasons, but also because the language has a small scope and the feature set will stay smaller.
If you are leaning towards Zig, I don't think C# will be what you are looking for. It is a good option alongside Java and Go, but not in Zig/C/Rust territory, if that is what you want.
At my daily job, our production servers are Linuxes, and we deploy our .NET code (a fairly complex web app) just fine, using the same techniques you would with other technologies: the CLI, GitLab CI/CD, Docker, Kubernetes... Forget about the GUI if you're not into it.
The web framework craze is settling down. Microsoft seems to have consolidated into two web frameworks: Blazor (improved/extended Razor Pages) and ASP.NET for MVC and APIs. I personally don't expect another U-turn in the coming years.
Blazor is really nice, and ready for production. The only downside I see is that Visual Studio (vanilla) struggles with it compared to other MSFT technologies. You don't need Visual Studio, though.
I am a full time .NET developer, experienced with both newer and older .NET versions.
They are confusingly named, but this is the gist:
- .NET Framework is the older version that is tied to Windows.
- .NET is the newer version that is cross platform, and was renamed from .NET Core.
Linux support is pretty good on .NET. I don't have as much experience with this personally since most of my company is still using .NET Framework, but I was able to get a simple .NET app running on Linux without any hassle.
The main web frameworks I am aware of are Blazor and MVC. Blazor behaves more like a single-page application (without needing JavaScript!) and abstracts away most of the headache of making dynamic web pages, but generally doesn't scale as well from what I have seen. MVC is a little more traditional but you need to write some JavaScript for interactivity.
I'm not fully sure what you mean by GUI heavy. Everything I am aware of can be accomplished with the CLI tooling.
> Subpar tooling outside of Windows (I'm looking at you C# Dev Kit)
JetBrains Rider is excellent and runs on Windows, Mac and Linux. It has a few Windows only features but nothing important for me, it's the best IDE for C#/.NET you can get on non-Windows platforms imo. And it's free for non commercial use.
There is only one relevant web framework: ASP.NET Core. Microsoft is as bad at naming things as it has always been, so that hasn't changed. They're pretty good about not deprecating too much in the web area now, the big disruptive change was from .NET Framework to .NET Core. The Windows UI stuff seems to be a bit of a shitshow in terms of deprecations, but I've no direct experience with it.
I use .NET at $job, and have been running arch for the last few years, without any problems.
I also have two colleagues using Apple silicon; no problems there either.
The official aspnet core web framework is (in my opinion) good enough that you don't need anything 3rd party.
The GUI story is not a good one though, and if I were to write a GUI program I'd reach for Avalonia (3rd party, free).
I use Rider as IDE, but there are multiple other options.
With the recent performance improvements (Span, Memory, Intrinsics, etc) it's possible to write quite performant C# these days, and with low GC pressure.
It has been very good since around 2007/8, when ASP.NET MVC was released. That was the first good MS web framework. You needed to avoid EntityFramework/WCF/Unity and anything coming from the Enterprise group. There were some amazing OSS frameworks and libraries.
Then .Net Core happened; it was compelling, and once it matured, a no-brainer. On the MSFT side the Alt.Net camp won: MS hired many of the good people, and the Alt.Net supporters inside Microsoft run the show nowadays. So now it's fine to run a largely MS stack on .Net.
The Linux story is much better now. Back in the day it was necessary to use Mono, which wasn't fully compatible, and the code had to have #ifdefs all over the place. That is thankfully all gone now.
Most things can be done with the dotnet CLI, but for editing code, realistically you will want to use an editor like VS Code, Rider, or Visual Studio itself. I found the LSP support in Vim quite bad for C#.
As others have said, it is a huge leap forward. What is holding me back from using Mac/Linux as a dev platform is that Visual Studio (proper, not Code) is Windows-only. I would hate having to use VS Code as my IDE for C#. I am aware that Rider exists but haven't personally used it for more than a few short moments. IMHO, if you want to develop using C# you are best off using Windows because of the better tooling, with Linux as the deployment target.
Honestly, .NET should be the default choice for any non-trivial backend or non-GUI application. I assume the legacy (outdated/inaccurate) perception of MSFT's stewardship is the only reason anyone reaches for something else.
AOT is a game changer for native binaries, but even with framework dependence, the cross-platform support is excellent (we deploy to Linux servers frequently).
>> .NET should be the default choice for any non-trivial backend or non-GUI application
I personally wouldn't go that far. If you know it well, it makes sense to use it, but if you already know Java or Go, the benefit delta isn't going to be that high.
Somewhat famously, even Anders Hejlsberg decided to use Go over .NET for the new TypeScript compiler. .NET is fine (and I have personally used it a lot), but that doesn't mean it is net better than other options out there.
For GUI there are Avalonia (cross-platform WPF), Uno Platform (cross-platform WinUI) and old-school bindings to native UI frameworks on Android, iOS & Mac (you can do UIKit/AppKit in C#), and of course Windows.
Not really. They're independent languages with different design goals. Your exact questions in this thread would have been just as relevant without bringing it up.
> - Enterprise Devs as the core user (Type safety, great stdlib)
> - High level OO based interfaces
> - Allows for low enough level programming that you can reasonably use in place of C/C++
This can reasonably be said about any programming language that is popular. These points could also reasonably match Go.
C# and Java have had completely opposite design goals. C#'s design goals are to have a more powerful/complex language with a not as advanced runtime. Java is the exact opposite. They favor having a slower moving language while pushing the edge on having a very advanced runtime. They're only similar on the surface level being C-Like languages with a GC. The design philosophies have created very different languages with different implementations, for example async/await in C# vs virtual threads in Java.
Green threads in Java are only there because the async model implementation of the JVM is trash. Don't even get me started on generics.
Also, Java didn't move for a very, very, very long time because nobody cared. It took Microsoft reinvesting in it to get things moving. Basically they took a bunch of C# specifications and translated them to JSR format.
Before that, they ported Rx from .NET Framework to Java and JavaScript.
> Basically they took a bunch of C# specifications and translate them to JSR format.
I would really love for you to point out which ones, because I can't think of a single one. Most features being developed for Java are taken from ML; the OpenJDK team has said this numerous times.
> because the async model implementation of the JVM is trash
What model? The JVM has never implemented async; async/await is a compiler construct.
I find it really disappointing that this is basically the only discourse about .NET on HN. People seem stuck in a time loop, unable to find current information about .NET.
People have mentioned they back "founders, not ideas", which is a great tagline, and also not 100% true, but true enough.
The other side is that they like to fund in areas that have a strong "why now". One great answer to that is "because it wasn't possible to build this X years ago"; in other words, they like to fund companies that are taking advantage of a new technical property, regulatory change, or cultural change.
AI hits 2/3 of those.
Now you can say "AI slop" all you want, just like you can say "crypto is a scam", but it's a statement that ignores that there have been profitable and viable new ventures built on top of these new compute properties.
TLDR: technological change is the basis of how VCs make money.
Many can point to a long history of killed products and soured opinions, but you can't deny they've been the great balancing force (often for good) in the industry.
- Gmail vs Outlook
- Docs vs Word
- Android vs iOS
- Worklife balance and high pay vs the low salary grind of before.
They've done heaps for the industry. I'm glad to see signs of life, particularly in their P/E, which was unjustly low for a while.
Balance is too weak a word. OpenAI was conceived specifically to prevent Google from getting AGI first; that was its original goal. At the time of its founding, Google was the undisputed leader of AI anywhere in the world. Musk was then very worried about AGI being developed behind closed doors, particularly at Google, which is why he was the driving force behind the founding of OpenAI.
The book Empire of AI describes him as being particularly fixated on Demis as some kind of evil genius. From the book, early OAI employees couldn’t take the entire thing too seriously and just focused on the work.
I thought it was a workaround to Google's complete disinterest in productizing the AI research it was doing and publishing, rather than a way to balance their dominance in a market which didn't meaningfully exist.
That’s how it turned out, but IIRC at the time of OpenAI’s founding, “AI” was search and RL, which Google and DeepMind were dominating, and self-driving, which Waymo was leading. And OpenAI was conceptualized as a research org to compete. A lot has changed, and OpenAI has been good at seeing around those corners.
That was actually Character.ai's founding story: two researchers at Google who were frustrated by a lack of resources and the inability to launch an LLM-based chatbot. The founders are now back at Google. OpenAI was founded on fears that Google would completely own AI in the future.
I think that Google didn't see the business case in that generation of models, and also saw significant safety concerns. If AI had been delayed by... 5 years... would the world really be a worse place?
Elon Musk specifically gave OAI $150M early on because of the risk of Google being the only Corp that has AGI or super-intelligence. These emails were part of the record in the lawsuit.
It’s a common pattern for upstarts to embrace openness as a way to differentiate and gain a foothold then become progressively less open once they get bigger. Android is a great example.
Last I checked, Android is still open source (as AOSP) and people can do whatever-the-f-they-want with the source code. Are we defining open differently?
I think we're defining "less" differently. You're interpreting "less open" to mean "not open at all," which is not what I said.
There's a long history of Google slowly making the experience worse if you want to take advantage of the things that make Android open.
For example, by moving features that were in the AOSP into their proprietary Play Services instead [1].
Or coming soon, preventing sideloading of unverified apps if you're using a Google build of Android [2].
In both cases, it's forcing you to accept tradeoffs between functionality and openness that you didn't have to accept before. You can still use AOSP, but it's a second class experience.
Core is open source but for a device to be "Android compatible" and access the Google Play Store and other Google services, it must meet specific requirements from Google's Android Compatibility Program. These additional proprietary components are what make the final product closed source.
Was "Android" the way you define it ever open? Isnt it similar to chromium vs chrome? chromium is the core, and chrome is the product built on top of it - which is what allows Comet, Atlas, Brave to be built on.
That's the same thing what GrapheneOS, /e/ OS and others are doing - building on top of AOSP.
> Yes. Initially all the core OS components were OSS.
Are you saying they "un-open sourced" things? Because that hasn't happened. Just because a piece of code is open source doesn't mean additional services need to be open source as well.
vscode's core is open source, but MS maintains closed-source stuff that builds on top of it. That doesn't mean vscode isn't open source anymore.
They've poisoned the internet with their monopoly on advertising, the air pollution of the online world, a transgression that far outweighs any good they might have done. Much of the negative social effects of being online come from the need to drive more screen time, more engagement, more clicks, and more ad impressions firehosed into the faces of users for sweet, sweet advertiser money. When Google finally defeats ad-blocking, yt-dlp, etc., remember this.
This is an understandable, but simplistic, way of looking at the world. Are you also going to blame Apple for mining rare earths, because they made a successful product that requires exotic materials which need to be mined from the earth? How about the hundreds of thousands of factory workers being subjected to inhumane conditions to assemble iPhones each year?
For all the "OMG, the internet is filled with ads" outrage, people conveniently forget the real-world impact of ALL COMPANIES (not just Apple, btw). You should be upset with the system, not selectively at Google.
I don't think your comment justifies calling out any form of simplistic view; it doesn't make sense. All the big players are bad. They're companies; their one and only purpose is to make money, and they will do whatever it takes to do it. Most of which does not serve humankind.
It seems okay to me to be upset with the system and also point out the specific wrongs of companies in the right context. I actually think that's probably most effective. The person above specifically singled out Google as a reply to a comment praising the company, which seems reasonable enough. I guess you could get into whether it's a proportional response; the praise wasn't that high and also exists within the context of the system as you point out. Still, their reply doesn't necessarily indicate that they're not upset with all companies or the system.
Yes, we're absolutely holding Apple accountable for outsourcing jobs, degrading the US markets, using slave and child labor, laundering cobalt from illegal "artisanal" mines in the DRC, and whitewashing what they do by using corporate layering and shady deals to put themselves at sufficient degrees of separation from problematic labor and sources to do good PR, but not actually decoupling at all.
I also hold Americans and western consumers responsible for simply allowing that to happen. As long as the human rights abuses and corruption are 3 or 4 degrees of separation from the retailer, people seem to be perfectly OK with chattel slavery and child labor and indentured servitude and all the human suffering that sits at the base of all our wonderful technology and cheap consumer goods.
If we want to have things like minimum wage and workers rights and environmental protections, then we should mandate adherence to those standards globally. If you want to sell products in the US, the entire supply chain has to conform to US labor and manufacturing and environmental standards. If those standards aren't practical, then they should be tossed out - the US shouldn't be doing performative virtue signalling as law, incentivizing companies to outsource and engage in race to the bottom exploitation of labor and resources in other countries. We should also have tariffs and import/export taxes that allow competitive free trade. It's insane that it's cheaper to ship raw materials for a car to a country in southeast asia, have it refined and manufactured into a car, and then shipped back into the US, than to simply have it mined, refined, and manufactured locally.
The ethics and economics of America are fucking dumb, but it's the mega-corps, donor class, and uniparty establishment politicians that keep it that way.
Apple and Google are inhuman, autonomous entities that have effectively escaped the control and direction of any given human decision tree. Any CEO or person in power who tried to significantly reform the ethics or economics internally would be ousted and memory-holed faster than you can light a cigar with a hundred dollar bill. We need term limits, an end to corporate personhood, money out of politics, and an overhaul, or we're going to be doing the same old kabuki show right up until the collapse or AI takeover.
And yeah, you can single out Google for their misdeeds. They, in particular, are responsible for the adtech surveillance ecosystem and lack of any viable alternatives by way of their constant campaign of enshittification of everything, quashing competition, and giving NGOs, intelligence agencies, and government departments access to the controls of censorship and suppression of political opposition.
I haven't and won't use Google AI for anything, ever, because of all the big labs they are the most likely and best positioned to engage in the worst and most damaging abuse possible, be it manipulation, invasion of privacy, or casual violation of civil rights at the behest of bureaucratic tyrants.
If it's not illegal, they'll do it. If it's illegal, they'll only do it if it doesn't cost more than they can profit. If they profit, even after getting caught and fined and taking a PR hit, they'll do it, because "number go up" is the only meaningful metric.
The only way out is principled regulation, a digital bill of rights, and campaign finance reform. There's probably no way out.
> laundering cobalt from illegal "artisanal" mines in the DRC
They don't, all cobalt in Apple products is recycled.
> and whitewashing what they do by using corporate layering and shady deals to put themselves at sufficient degrees of separation from problematic labor and sources to do good PR, but not actually decoupling at all.
They don't. Apple audits their entire supply chain, so moving something to another subcontractor wouldn't hide anything.
One can claim 100% recycled cobalt under the mass balance system even if recycled and non-recycled cobalt was physically mixed, as long as the total amount used in production is less than or equal to the recycled cobalt purchased on the books. For example: buy 10 tonnes of certified recycled cobalt on paper, use 8 tonnes of mixed cobalt in production, and the whole 8 tonnes can be labeled 100% recycled, regardless of which atoms actually went in.
At least here[0] they claim their recycled cobalt references are under the mass balance system.
People love getting their content for free and that's what Google does.
Even 25 years ago people wouldn't have believed YouTube could exist. Anyone can upload whatever they want, however often they want; YouTube will be responsible for promoting it, will serve it to however many billions of users want to view it, and will pay you 55% of the revenue it makes?
Yep, it's hard to believe it exists for free and with not a lot of ads, when you have a good ad blocker... though the content creators' ads are inescapable, which I think is OK since they're making a little money in exchange for what, a minute or so of inconvenience (if you're not skipping the ad, which you aren't, right?), after which you can watch some really good content.
The history channels on YT are amazing, maybe world-changing: they get people to learn history and actually enjoy it. Same with some math channels like 3Blue1Brown, which are just outstanding, and many more.
Yes, this is correct, and it happens everywhere. App Store, Play Store, YouTube, Meta, X, Amazon and even Uber - they all play in two-sided markets exploiting both its users and providers at the same time.
They're not a moral entity. Corporations aren't people.
I think a lot of the harms you mentioned are real, but they're a natural consequence of capitalistic profit chasing. Governments are supposed to regulate monopolies and anti-consumer behavior like that. Instead of regulating surveillance capitalism, governments are using it to bypass laws restricting their power.
If I were a Google investor, I would absolutely want them to defeat ad-blocking, ban yt-dlp, dominate the ad market, and all the rest of what you said. In capitalism, everyone looks out for their own interests, and governments ensure the public isn't harmed in the process. But any time a government tries to regulate things, the same crowd that decries this behavior opposes the regulation as government overreach.
Voters are people and they are moral entities, direct any moral outrage at us.
Why should the collective of voters be any more of a moral entity than the collective of people who make up a corporation (which you may include its shareholders in if you want)?
It’s perfectly valid to criticize corporations for their actions, regardless of the regulatory environment.
They're accountable as individuals not as a collective. And it so happens, they are responsible for their government in a democracy but corporations aren't responsible for running countries.
> It’s perfectly valid to criticize corporations for their actions, regardless of the regulatory environment.
In the free speech sense, sure. But your criticism isn't founded on solid ground. You should expect corporations to do whatever they have to do within the bounds of the law to turn a profit. Their responsibility is to their investors and employees, they have no responsibility to the general public beyond that which is laid out in the law.
The increasing demand for corporations to be part of the public/social moral consciousness is causing them to manipulate politics more and more, eroding what little voice individuals have.
You're trying to live in a feudal society when you treat corporations like this.
If you're unhappy with the quality of Google's services, don't do business with them. If they broke the law, they should pay for it. But expecting them to be a beacon of morality is accepting that they have a role in society and government beyond mere revenue generating machines. And if you expect them to have that role, then you're also giving them the right to enforce that expectation as a matter of corporate policy instead of law. Corporate policies then become as powerful as law, and corporations have to interfere with matters of government policy on the basis of morality instead of business, so you now have an organization with lots of money and resources competing with individual voters.
And then people have the nerve to complain about PACs, money in politics, billionaire's influencing the government, bribery,etc.. you can't have it both ways. Either we have a country run partly by corporations, and a society driven and controlled by them, or we don't.
When we criticize corporations, we really are criticizing the people who make the decisions in the corporations. I don’t see why we shouldn’t apply exactly the same moral standards to people’s decision in the context of a corporation as we do to people’s decisions made in any other context. You talk about lawfulness, but we wouldn’t talk about morals if we meant lawfulness. It’s also lawful to vote for the hyper-capitalist party, so by the same token moral outrage shouldn’t be directed towards the voters.
I get that, but those CEOs are not elected officials; they don't represent us and have no part in the discourse of law-making (despite the state of things). In their capacity as executives of a company, they have no rights, no say in what we find acceptable or not in society. We tell them what they can and cannot do, or else. That's the social contract we have with companies and their executives.
Being in charge of a corporation shouldn't elevate someone to a platform where they have a louder voice than the common man. They can vote just like anyone else at the voting booth, and they can participate in politics in their capacity as individuals. But neither money nor corporate influence has a place in the governance of a democratic society.
I talk about lawfulness because that is the only rule a corporation can and should be expected to follow. Morals are for individuals. Corporations have no morals; they are neither moral nor immoral. Their owners have morals, and you can criticize their greed, but that is a construct of capitalism. They're supposed to enrich themselves. You can criticize them for valuing money over morals, but that's like criticizing the ocean for being wet or the sun for being too hot. It's what they do. It's their role in society.
If a small business owner raises prices to increase revenue, that isn't immoral, right? Even though the poor people who frequent the business will be hurt by it? Amp that up to the scale of a megacorp, and the morality is still the same.
Corporations are entities that exist for the sole purpose of generating revenue for their owners. So when you criticize Google, you're criticizing a logical organization designed to do the very thing you're criticizing it for. The CEO of Google is acting in their official capacity, doing the job they were hired to do, when they resist ad-blocking. The investors of Google are risking their money in anticipation of ROI, so their expectation of Google is valid as well.
When you find something to be immoral, the only meaningful avenue for expressing that with corporations is the law. You're criticizing Google as if it were an elected official we could vote in or out of office, or as if it were an entity that can be convinced of its moral failings.
When we don't speak up and use our voice, we lose it.
Why are you directing the statement that "[Corporations are] not a moral entity" at me instead of the parent poster claiming that "[Google has] been the great balancing force (often for good) in the industry."? Saying that Google is a force "for good" is a claim by them that corporations can be moral entities; I agree with you that they aren't.
I could have, just the same, I suppose, but their comment was about Google being a balancing force in terms of competition and monopoly; it wasn't praise of their moral character. They did what was best for their business, and that turned out to be good for reducing monopolies. If it turned out to be monopolistic, I would be wondering what Congress and the DOJ were doing about it, instead of criticizing Google for trying to turn a profit.
I don't understand your logic, it seems like victim blaming. Using the internet and pointing out that targeted advertising has a negative effect on society is not "having it both ways".
Also, HN is by definition algorithmic content and social media, in your mind what do you think it is?
You are not a "victim" for using or purchasing something which is completely unnecessary. Or if that's the case, then you have no agency and have to be medicinally declared unfit to govern yourself and be appointed a legal guardian to control your affairs.
What kind of world do you live in? Google ads actually tend to have some of the highest ROI for the advertiser and are the most likely to be beneficial to the user, versus the pure junk ads that aren't personalized and banner ads that have zero relationship to me. Google Ads is the enabler of the free internet. I for one am thankful to them. Otherwise you end up paying for the NYT, Washington Post, The Information, etc., virtually any high-quality web site (including search).
Most of the time, you need to pick one. Modern advertising is not based on finding the item with the most utility for the user - which means they are aimed at manipulating the user's behaviour in one way or another.
Outlook is not better in ways that Gmail (or other email) users necessarily care about, and in my experience it gets in the way more than it helps with productivity or anything else it tries to be good at. I've used it in office settings because it's the default, but never in my life have I considered using it by choice. If it's better, it might not matter.
Google has always been there; it's just that many didn't realize DeepMind even existed. I said years ago that they needed to be put to commercial use. [0] And Google AI != DeepMind.
You are now seeing their valuation finally adjusting to that fact, all thanks to DeepMind finally being put to use.
For what it's worth, most of those examples are acquisitions. That's not a hit against Google in particular; that's the way all big tech companies grow. But it's not necessarily representative of "innovation."
Taking those products from where they were to the juggernauts they are today was not guaranteed to succeed, nor was it easy. And yes, plenty of innovation happened with these products post-acquisition.
If you consider surveillance capitalism and dark pattern nudges a good thing, then sure. Gemini has the potential to obliterate their current business model completely so I wouldn't consider that "waking up".
Why stop at YouTube? Blame Apple for creating an addictive gadget that has single-handedly wasted billions of hours of collective human intelligence. Life was so much better before iPhones.
But I hear you say - you can use iPhones for productive things and not just mindless brainrot. And that's the same with YouTube as well. Many waste time on YouTube, but many learn and do productive things.
Don't paint everything with a single, large, coarse brush stroke.
All those examples date back to the 2000s. Android has seen some significant improvements, but everything else has stagnated, if not enshittified. Remember when Google told us never to worry about deleting anything? Then they started backing up my photos without me asking, and now they constantly nag me to pay a monthly fee.
They have done a lot, but most of it was in the "don't be evil" days and they are a fading memory.
Seriously? Google is an incredibly evil company whose net contribution to society is probably only barely positive thanks to their original product (search). Since completely de-googling I've felt a lot better about myself.
The way the OP phrased it
> Is a skill essentially a reusable prompt that is inserted at the start of any query?
is actually a more apt description of a different Claude Code feature called Slash Commands, where I can create a preset "prompt" and call it with /name-of-my-prompt $ARGUMENTS. That's the feature that essentially prefixes a prompt.
The other description, lazy loading, is more accurate for Skills. There I can tell my Claude Code system "hey, if you need to run our dev server, see my-dev-server-skill", and the agent will determine when to pull that skill in if it needs it.
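Roughly, and hedged (file layout from my reading of the current Claude Code docs; all names below are made up):

    # .claude/commands/my-prompt.md -- a Slash Command: the whole file
    # is inserted as the prompt when you type /my-prompt <args>
    Review the following with our style guide in mind: $ARGUMENTS

    # .claude/skills/my-dev-server-skill/SKILL.md -- a Skill: only the
    # frontmatter description is loaded up front; the body is pulled
    # in lazily when the agent decides it is relevant
    ---
    name: my-dev-server-skill
    description: How to start, stop, and debug our dev server
    ---
    Start the dev server with `npm run dev` from the repo root...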