IANAL, but as an analogy perhaps: Some of the investments made from FTX stolen assets got amazing gains (Anthropic being a notable investment). But it doesn’t undo the crime.
In FTX's case there obviously was a crime - stealing customer money. With OpenAI I don't really see it, hence, I guess, the lack of a criminal case. The 'stealing a charity' bit is more metaphorical - nothing was actually stolen; control just shifted from friends of Musk to friends of Altman, which was inevitable once Musk quit.
I think Musk is alleging unjust enrichment of Altman and Brockman, but given they only hold a small percentage of the for-profit they set up, it's going to be hard to prove that's unjust in court, I guess.
You could argue OpenAI had drifted from its mission to be open.
The zf1s/zend-search-lucene fork still works and gets occasional updates, but it's essentially legacy maintenance on a PHP 5.3-era codebase. It crashed on PHP 7 for a while, and the original ZF team themselves recommended moving on.
> By agreeing to these Terms, you represent and warrant to us: (i) that you have not previously been suspended or removed from the Websites and Online Services
Cloudflare's ToS has you covered. A human must accept it, even with the new agentic flow.
I think this is just saying you can't sign up for a new account after a previously created account gets suspended - not that the act of suspension itself puts you in violation of the terms of service in perpetuity because, pedantically, any suspension that has happened, happened "previously".
One of the sources of that problem is that GitHub is pushing all new products on top of Actions, making it load-bearing. A few examples are Dependabot, Pages, and Copilot Reviews. These aren't products that need to run on a CI system. Dependabot worked fine before Actions was a thing. Same with Pages, which ran fine for more than a decade without Actions.
My outsider perspective is that GitHub teams are having to fight for compute, and since Actions can be billed and timed, it has become the default compute layer for everything. But it makes for a terrible experience as an end-user.
The program is randomly generated, and I'm guessing the seed for it is derived deterministically from the current block header (or something similar), making it hard to attack.
It might lead to scenarios where a miner may optimise block generation itself, I guess?
I was more curious about the possibility of generating optimised branchless variants and running them in parallel on multiple ASICs so that you cover every branch, then submitting all the results and hoping you're fast. Would that be less efficient than relying on branch prediction and CPUs?
I read a little, and it turns out Monero requires a chain of programs, each using a Blake hash construction to generate the next one. That makes it very hard to optimise, since it adds a layer of hard-to-avoid branching.
And this also makes it hard to generate favorable programs.
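For anyone curious what that chaining looks like, here's a minimal sketch of the idea in Python. This is not the real RandomX spec - the seed derivation, program format, and "execution" step are all made-up stand-ins - it only illustrates the dependency structure:

    import hashlib

    # Hypothetical sketch of RandomX-style program chaining; names and
    # sizes are invented, only the dependency structure is the point.

    def derive_seed(block_header: bytes) -> bytes:
        # assume the initial seed comes from something like the block header
        return hashlib.blake2b(block_header, digest_size=32).digest()

    def generate_program(seed: bytes, size: int = 256) -> bytes:
        # stand-in for "randomly generate a program from the seed":
        # expand the seed into a deterministic stream of 'instruction' bytes
        out, counter = b"", 0
        while len(out) < size:
            out += hashlib.blake2b(seed + counter.to_bytes(4, "little"),
                                   digest_size=32).digest()
            counter += 1
        return out[:size]

    def run_chain(block_header: bytes, rounds: int = 8) -> bytes:
        seed = derive_seed(block_header)
        for _ in range(rounds):
            program = generate_program(seed)
            # a real VM would execute the program here; we fake the
            # execution result with a hash of the program bytes
            result = hashlib.blake2b(program, digest_size=32).digest()
            # the next program's seed depends on this result, forcing
            # sequential evaluation of the whole chain
            seed = result
        return seed

    print(run_chain(b"example block header").hex())

Since program N+1's seed is a hash of program N's result, anyone hunting for a "favourable" chain has to grind every chain end-to-end, which is exactly the work the proof-of-work is supposed to cost.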
If you (like me) are hearing about this for the first time, Bret Taylor is the co-founder of Sierra:
> Bret is Co-Founder of Sierra. Most recently, he served as Co-CEO of Salesforce. Prior to Salesforce, Bret founded Quip and was CTO of Facebook. He started his career at Google, where he co-created Google Maps. Bret serves on the board of OpenAI.
I keep getting distracted by side-quests. The last one was building an Electron Zoo, and the current one is doing accurate SBOMs for each Electron version.
Canonical has a tradition of inventing something ahead of its time, only to find that nobody else goes the same way. Sometimes they realise it was a mistake and follow everyone else.
Juju had a different problem: it got a big-bang rewrite in Go, and the resulting feature freeze lasted too long for them to keep their mindshare. Rewriting was the right decision - Python had poor concurrency back then - but freezing features while doing it was a mistake.
Worse than that, these are all vibe-coded changes. If you look at any public Anthropic codebase, they are all vibe-coded messes with no coherent vision. I was looking at the Claude Code GitHub Action, and it's a mess of options that can't be used together, unclear documentation, and a terribly unclear usage story.
People say that a mostly-vibed project will collapse under its own weight. I personally doubt it, but I will be amused if the first big one to fall this way is Claude Code itself.
Unfortunately it will all probably sort of work, but best not to dwell too much on how the sausage is made; it is pretty unpleasant. There will be some interesting job titles in the future, however.
I just read Vernor Vinge's "A Deepness in the Sky", and the way he modeled their compute systems felt depressingly believable: they have thousands of years of libraries floating around, loosely tacked together, and specialist programmer-archaeologists are the ones who dig deep and try to understand the system.
> Unfortunately it will all probably sort of work, but best not to dwell too much on how the sausage is made; it is pretty unpleasant.
Interestingly, most long-running codebases are like that, no?
It's just that producing that amount of code (including reviewing, testing, and the rest, even AI-assisted) in a significantly shorter period of time makes the discrepancy much more visible to us.
I've seen ancient codebases that you need a priest's blessing to even touch, yet they keep chugging along and getting new features. I wouldn't hold my breath for a collapse - just a quagmire we continually have to wade through to get anything done.
Isn't it also true that the deeper and thicker the quagmire, the more tokens one will have to use to wade through it?
This seems like a path to eventual LLM lock-in once the codebase gets messy enough. These things could end up being like 0% interest credit cards for technical debt. I guess it all depends on how the token usage scales over time. My guess is it will be steeper than linear.