The fact that the LLM appears to never assign an actual 0 or 10 makes me suspicious. Especially when the prompt includes explicit examples of what counts as a 10.
I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.
In an interview with xAI I was literally told that certain parts of the model have to align with Elon, and that Elon can call us and demand anything at any time. No thanks!
From my time at Tesla, this is 100% the case. When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.
I found the best thing to do was to ignore the interrupts and carry on until they kick you out on the street. Then watch from a safe distance as all the stuff you were holding together shits the bed.
Definitely one approach to the circumstances. I tried some variation of this and it blew up in my face (as I expected).
Towards the end of my time there, a “fixer” was brought in to shore up the team that I was working on. The “fixer” also became my manager when they were brought on.
The “fixer” proceeded to fire 70+% of the team over the course of 6-8 months and install a bunch of yes-people, in addition to wasting about $2,000,000 on a subscription to a framework product that no one on the team knew, with which we were supposed to rebuild our core product. I was told to deploy said framework product on top of Kubernetes (which not a single person on my team had any experience with) while delivering on other in-flight projects. I ignored the whole thing.
I ended up deciding I was done with Tesla and went into a regularly scheduled 1:1 with my manager (the “fixer”) with a written two-weeks notice in hand, only to be fired (with 6-weeks severance, thankfully) before I was able to say anything about giving notice.
Out of curiosity, it sounds like you're the kind of person that could easily find another job. Why slog it out until the end rather than quit/find a better gig? Genuinely interested because every time I've ended up with a manager like that my mental health has suffered so now I generally start planning my exit as soon as I'm stuck with a bad manager.
Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job.
I have been in such a situation before, and while I was not able to coast along until the company went under, the time delta between me getting fired and the company going under was measured in weeks.
In hindsight I'd probably not do it again; it was hugely mentally taxing, and knowingly performing work in such a way that it provides negative value to the company (remember, the goal is to make it go under) is in my experience actually harder than just doing a good job... Especially if being covert is a goal.
I've seen it, but I think it's got some places that it would benefit from more clarity. Can we put together a committee to improve and protect our processes from it? We could call it a task force if that's easier to sell to management.
I did not know this manual existed. It was a very interesting read! Especially after page 28 (General Interference with Organizations and Production).
> Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job.
What...? In what way is it anything other than highly unethical to sabotage someone you have a contract with, because you disagree with them?
Plenty of historical examples of work environments where sabotage would have been the most ethical thing to do (and often you will only know in hindsight). But yeah in most circumstances a simple disagreement doesn’t warrant the psychological cost of such sabotage.
Your opinion of the situation is not enough to justify this course of action in 99.99% of cases and the residual 0.01% should not be enough to fuel your ego to do anything other than quit decently, and look for an employer that is more aligned with whatever your ideals are.
I repeat the insane statement that we are arguing over here: "Ethically, if you do not agree with the company you work at, the optimal course of action if you can stomach it is to stay and do a bad job rather than get replaced by someone who might do a good job."
This says: ANY company you work for and disagree with over anything: Don't quit! Sabotage [maybe people are confused about what "do a bad job" means, and that this usually leads to other people getting hurt in some way, directly or indirectly, unless your job is entirely inconsequential]. And that's supposed to be ethically optimal.
There's a _big_ continuum between disagreeing over something and an ethical hard line; it feels like a slippery slope to interpret a suggested approach for one end of that continuum as advocacy for applying the same approach to the other end.
Imagine I am working for a company and I discover they are engaged in capturing and transporting human slaves. Furthermore, the government where they operate is fully aware and supportive of their actions, so denouncing them publicly is unlikely to help. This is a real situation that has happened to real people at points in history in my own country.
I believe that one ethical response would be to violate my contract with the company by assisting slaves to escape and even providing them with passage to other places where slavery is illegal.
Now, if you agree with the ethics of the example I gave then you agree in principle that this can be ethical behavior and what remains to be debated is whether xAI's criminal behavior and support from the government rise to this same level. I know many who think that badly aligned AI could lead to the extinction of the human race, so the potential harm is certainly there (at least some believe it is), and I think the government support is strong enough that denouncing xAI for unethical behavior wouldn't cause the government to stop them.
a) I understand the very few and specific examples that would justify and require disobedience. In those cases just doing a "bad job" seems super lame and inconsequential. I would ask more of anyone, including myself.
b) All other examples, the category that the parent opened so broadly, are simply completely silly, which is what I take offense at. If you think simply disagreeing with anyone you have entered a contract with is cause for sabotaging them, and for painting that as ethically superior, then, I repeat: what the fuck?
c) If you suspect criminal behavior, then alert the authorities or the press. What are you going to do on the inside? What vigilante spy story are we telling ourselves here?
Some people in this thread seem to come from a place of morality where some “higher truth” exists outside of the sphere of the individual to guide one’s actions, and others even seem to weakly disguise their own ethics and beliefs behind a framework of alleged “rationality”, as if there were mathematical precision behind which action is “right” and which is clearly wrong, and anybody who just doesn’t get it must be either an idiot or clinically insane. This stance completely dismisses not only differing opinions but also individual circumstances.
In reality, which actions a person considers ethical and coherent with their own values is highly individual. I can be friends or colleagues with somebody who has a different set of ethics and circumstances than me. If I were to turn this into a conflict that needs resolution each time it surfaced, I would set myself up for eternal (lifelong) war with my social environment. Some will certainly enjoy that, and get a sense of purpose and orientation from it! I prefer not to, and I can find totally valid and consistent arguments for each side. There's no need to agree in order to reach understanding and respect our differences.
Typically, people value belonging over morality: they adapt to whatever morality guarantees their own survival. The need to belong is a fundamental need; we are social animals not made to survive on our own.
The moment I am puzzled about another person's reasoning, I can ask, and if they are willing they will teach me why their actions make sense to them. If I come from a place of curiosity and sincere interest, people will be happy to help me get over my confusion. If I approach that conversation from some higher ground, as some kind of missionary, I might succeed sometimes, but fail most times, as I would pose a threat to their coherence, which they will remove one way or another.
Ah, but if there’s no higher truth, then you also can’t say that it’s wrong to sabotage your employer because of an ethical disagreement (or rather, you can say it, but it’s just your personal opinion). By condemning this course of action, the OP presupposes some sort of objective ethical standard.
"Don't struggle only within the ground rules that the people you're struggling against have laid down." -- Malcolm X
"If you're unhappy with your job you don't strike. You just go in there every day, and do it really half-assed. That's the American way." -- Homer Simpson
"To steal from a brother or sister is evil. To not steal from the institutions that are the pillars of the Pig Empire is equally immoral." -- Abbie Hoffman
Some might consider it unethical but others might also consider it immoral to not do what you're describing.
I guess you're fortunate enough to have only worked at places where your moral framework matched up with their business practices and treatment of the staff.
That isn't the case for most people. Most people are put into situations at one time or another where the people they're working for don't value them as equals, where the people they work for casually violate reasonable laws like product safety or environmental standards, and what's worse, these people will suffer no consequences for doing so.
No White Knight in shining armour is going to come from the government to shut them down. No lightning from heaven will strike them down. No financial penalty to dissuade them from further defection from society and the common man in the game that is life.
So what do you do? Do you do nothing? Just put your nose to the grindstone and keep working for the man? Do you quit, only to end up penniless and jobless, with poor prospects of an alternative, and even if you found one maybe it's 'meet the new boss same as the old boss'?
Nah, you come into work every day and you subtly fuck it up. You subtly fuck it up and you take whatever value you can extract.
Assume you work for, e.g., a cigarette company. A company responsible for many deaths through unethically adding highly addictive substances. By sabotaging the company you are making this world a better place. Ethically, it's the right thing to do.
Or assume you're hired by the Nazis to work in concentration camps. Ethically, it's the right thing to do to sabotage their gas chambers.
Why would you start working for Elon Musk if you consider yourself a decent person but consider him impossible to work for? Had you not heard of Elon Musk beforehand...? Did you let yourself be employed with the specific goal of sabotaging the work, in what must be the least effective (but certainly very lucrative) coup possible?
What is it? Am I to believe this person is a chaotic mastermind? Or a selfish idiot? Or non-existent?
His reputation did not start out in the current state; if it had, I suspect he would never have been able to hold the importance/monetary power that he currently holds.
Changing one's mind about him can take a while.
With the benefit of hindsight, it certainly took me longer than I am happy with to change my mind about him. To be specific, I should have more radically changed my opinion of his personality when he libelled that cave expert in response to being told a submarine wasn't going to help; I should have recognised that only someone with a very fragile ego would react that way, and that it wasn't just a blip on his record but something deeper.
Even ethically, this is only true if you think the ethics of the place are so bad that sabotage is warranted. That's not every place that you have ethical problems with.
To do that (and hide it), you have to become a dishonest person yourself. That is ethically destructive to you. So the threshold for doing this should be pretty high.
Yeah, I could see this being true if there was really _nothing else_ I could possibly be doing with my time that is worthy. But there are a lot of worthy things I could be doing with my time.
Ethically perhaps, but financially and mentally it's surely better to start looking for a new role (at a different company) that is more in alignment with you, no?
Ethically, if you extend this reasoning, are we not obligated to find a position in the most morally repulsive organization we are aware of, and then coast?
I really wouldn’t want to be in this position. But it feels very motivating. It would soothe some difficult memories.
I can see myself putting in a lot of hours.
The willingness to be fired, in both good and bad situations, can be mentally freeing and an operational/political advantage. Many of us fail to push as hard as we optimally could, when we have too much on the line.
IMO, this is a good question and deserves a solid answer, so I’ll do my best.
Setting aside the “fixer” for the time being, I really enjoyed the work I did at Tesla. Tesla was the first company that gave me very high levels of autonomy to just own projects and deliver. It also pushed me to take on projects that I had previously wanted to do that I hadn’t been given a chance to work on before.
(Side note: At that point in time in my career, my thinking was that I needed to earn opportunities to work on projects at work to build skills that would enhance my career. I didn’t see the value in working on projects outside of work to build skills because I didn’t think those side-project skills would be valued by other companies the same as “day job” experience. I’ve since learned this isn’t true when it’s done right.)
I spent a lot of time at Tesla delivering value for a bunch of people who desperately needed it at the time, and the thanks I received from them was genuine. It felt very good to help others at Tesla out in a meaningful way, so I kept chugging along to the best of my abilities. Life was throwing lemons at me in my personal dealings, and Tesla was helping me make lemonade from a career standpoint. Besides, all the long work hours were a good distraction from the home life stuff.
In a lot of ways, it was a very fulfilling environment to work in, but it wasn’t for the faint of heart. People often quit within a month or two because the environment was too fast-paced, with too many projects under tight deadlines, one project quickly following another. An environment like Tesla just doesn’t let up, so one has to figure out how to manage the stress without much support from others. Oftentimes, if you do need to let up at Tesla (or introduce friction in any sort of seemingly non-constructive way), that’s the cue you aren’t working out for the company anymore and it’s time to find someone to replace you.
Coming back around to the original question of why I stuck it out until the end. Just before the “fixer” was brought in, I was “soft promoted” by a director (no title change, but I was given direct reports and a pay bump; the title change was supposed to come a couple of months later, as the soft promotion happened just before an annual review cycle). The director who soft-promoted me was someone I got along with well, and it seemed like things were going in the right direction in my career at that point. The director was in charge of a couple of projects that went sideways in a very visible way, and Elon basically fired the director after the second project went south, which is why the “fixer” was brought in.
When the “fixer” first took over things, it seemed like I was going to continue on the path that the director had originally laid out for me. The “fixer” said I was going to get more headcount and work on bigger projects, but this never materialized.
I really didn’t like working for the “fixer” after a while. IMO, it was clear they didn’t know what they were doing and they weren’t willing to listen to feedback. I spent a lot of time trying to provide guidance to the “fixer”, but it wasn’t seen as helpful and I felt like I was spinning my wheels. My mental health did start to suffer as I got more burned out towards the end of my tenure there.
Eventually, I was tasked with hiring someone to be my manager and I saw the writing on the wall (sort of). I started to look for a new job just in case. At one point, I thought bringing in someone between myself and the “fixer” would be a good thing. I didn’t realize I was actually finding my replacement. Two days after my replacement was hired, I was let go (this was the 1:1 meeting where I was going to turn in my notice, but HR served me papers instead).
To your original point, if I was in a similar situation now, I would be planning my exit immediately instead of trying to make the best of a bad situation, but I had to learn that lesson the hard way.
I'd be curious to hear your thoughts on how the "fixer", who sounds rather ineffective as an executive, came into this position, in what sounds like overall a rather effective organization.
I've been personally thinking quite a bit about what makes organizations work or not work recently, and your story is quite interesting to me as a glimpse into a kind of organization that I've never seen from the inside myself.
This is a good question, and it felt like nepotism. I do want to point out that these are all somewhat hazy memories from years ago when all of this happened, so take everything with a grain of salt (as usual). Also, a lot of this is going to sound like nepotism, which it most likely was, but this is hearsay from other people.
My understanding of how the "fixer" came into their position is that it was a somewhat circuitous route. From my understanding (I didn't hear any of this directly from the "fixer" themselves, but from other people who spent far more time with the "fixer" than I did), the "fixer" had spent about a decade out of the workforce prior to joining Tesla. My understanding is that they were raising kids while also dealing with aging parents. We'll just call this time the "fixer"'s work hiatus.
Prior to the hiatus, the "fixer" had moved into a small-team managerial role at a large, name-brand tech company during the late 90s/early 2000s. At the end of the hiatus, they leveraged some connections and somehow attained a director position at Tesla managing a team of about 30-40 people straight out of the hiatus.
From my understanding, the first team the "fixer" managed at Tesla didn't like working for them, and after about 18 months the team basically forced the "fixer" out. I'm not exactly sure what the team did to push the person out, but from what I heard, work basically ground to a halt as the entire team refused to work for the "fixer".
This was around the same time that the two projects I mentioned went sideways, so the director I reported to was on the outs and the director's manager (a VP) was looking for someone who could step into the role. The VP somehow connected with the "fixer" and they worked out a deal where the "fixer" would lead the team on a 3-month probation period while the VP continued to look for someone to fill the position, giving the "fixer" a chance to earn the role.
(Side note: One other bit of context I want to provide is that the team I was on was about 50-60 or so people at this time right before the "fixer" came on. The "fixer" also did not have any sort of technical background and this team consisted of probably ~90% software professionals in some capacity. A lot of the conversations were very technical in nature, and the "fixer" did A LOT of delegating and "just tell me what decision you'd make and we'll do that" leadership.)
During this probation period, I thought the "fixer" actually did a good job getting the lay of the land, understanding the social dynamics at play, and working out some inefficiencies. However, a lot of this improvement was done by bringing in consultants to do the deep dive, discover problems, and provide guidance to the "fixer" on how to address the problems.
Once the probation period was over, the consultants left and the "fixer" was in charge. Pretty quickly, the firings began, and over the course of the next 5-6 months, more than 70% of the team under the "fixer" was replaced. At the same time, the team I was working on merged with another team, and the team size under the "fixer" shot up to about 100-120 people post-merge (I forget the exact number). The "fixer" also hired quite a few more people, thinking more people would get the same projects done faster.
To say the least, it was a pretty chaotic time because the entire team was under a lot of pressure with in-flight projects, not knowing if they were going to randomly be fired or not, new people to mentor/gel with, and lots of random projects being thrown at us.
About 6 months after I left, the "fixer" was fired and someone else who had extensive experience was brought in to right the ship. Per conversations with people who were still working there about a year after the "fixer" left, the new person was very successful and had done a good job leading the team. Also, the person who I found to be my replacement stayed nearly 7 years at Tesla, so I guess I did a good job with that one.
In my case, at a different firm, I happily gave notice rather than put up with the "fixer", who had been hired by the other "fixer", both of whom were mostly only good at shitting all over the place and driving most of the technical organization out of the company. I got the feeling that was the whole point, so I resigned instead of waiting for my eventual layoff.
As someone who now lives and works in Denmark: it's sad that so many of us have been conditioned to think 6 weeks severance is generous.
Here, labor unions are quite widespread, and very effective at negotiating reasonably but firmly. As a result, I can depend on 3 months severance _guaranteed under law_ after 6 months at a job. (After 3 years, it goes up to 4 months, and then from there up to a max of 6 months.)
It puts the responsibility for risk of instability, errors in planning hiring / capacity, etc. firmly where it belongs: with the employer.
(And no, the economic sky is not falling here as a result. Quite the opposite.)
Welcome to our cozy little country; I hope you're settling in well.
Just out of curiosity: Assuming you're a SW engineer, did you join IDA or Prosa, or did you decide not to join a union? I'd like to gather some more data points to help other engineers moving to Denmark make an informed decision.
Because they were ~first to market - and honestly, as a Tesla driver for the last 6 years, it's the best car I ever owned (including Toyota, Mazda, and domestics).
6 years ago, for the effective price of a Honda Accord, I was able to get a car with excellent AWD for Northeast winters, perfect weight distribution (I previously drove a Miata for comparison), that could beat ~95% of 'supercars' in a straight line, and it got 140 MPGe.
6 years ago. And I've had zero maintenance outside of tire/air filter changes since. There was nothing remotely like it on the market, and it still holds up today. That's incredibly compelling.
Then PedoDiver, and it's been downhill from there... I'll likely get an R3X when it comes out.
For a year when we were doing the digital nomad thing, my wife and I didn’t own a car and we rented plenty of EVs. Tesla was by far our least favorite. Not having CarPlay alone is a dealbreaker.
As an anecdote, the two I've had are fairly reliable. The older one did have more issues (4+ in warranty? 3 out of warranty), but they've all been small/manageable so far.
CR notes, though, that Tesla has improved, with its latest models demonstrating "better-than-average reliability." It’s now in the top 10 of the publication’s new car predictability rankings—just avoid those older models.
That said, it's not all bad news for Tesla on the reliability front. According to Consumer Reports, Tesla ranks ninth in new-car reliability with a predicted reliability of 50. That's just behind Buick (51) and Acura (54), but ahead of Kia (49) and Ford (48), as well as luxury rivals like Audi (44), Volvo (42), and Cadillac (41).
You were so blinded by Elon Derangement Syndrome that you didn't even bother reading your own source.
Not sure which car you compare it to specifically from those manufacturers, but Teslas seem much more expensive where I live than most models from those brands. Comparing it to a corresponding BMW would be a more appropriate comparison.
Then the comparison of manufacturing quality and driving experience would end up very differently (as a driver of an even older BMW 5 Series, the Teslas I've been in feel very cheap, and driving enjoyment goes way beyond straight-line performance, where Teslas just don't deliver).
I agree the PedoDiver thing should have been an eye opener for everybody. People are who they are and they don't change. Circumstances change, and thus the corresponding reactions, but that's about it.
I bought my LR Model 3 in 2020 for ~$42,000, ~$15k cheaper than a six-cylinder 3 Series at the time. A six-cylinder 5 Series is another significant jump up in price/market.
> Not sure which car you compare it to specifically from those manufacturers
My comparison at the time was a Honda Civic, BMW 3 Series, and that was kind of it.
I generally consider the Model 3 interior roughly in the middle between the Honda and the BMW, while having worlds better tech, twice the horsepower, and being electric (when EVs were still rare).
There really was nothing like it at any price point at the time, and I still consider it a great car (though of course not perfect).
This is the archetype I have seen for most fans of Tesla and people who think they make good cars. They assume a $50,000 car (their current Tesla) should compare with a $20,000 car (their previous Honda/Mazda). The Tesla market is also the market with BMWs and Porsches, and dollar for dollar you get a lot more from a BMW than a Tesla.
I compare my $41.5k Model Y with a Rav4/Highlander.
The Rav4 costs the same, but has far worse performance, technology, and ongoing maintenance costs.
The Highlander is slightly better, but costs $10k to $20k more, and still has far worse performance, technology, and ongoing maintenance costs.
Plus, I avoided spending hours at a dealership, and I must know at least a couple dozen Tesla owners that report no issues in the previous 5 to 10 years.
I thought I would miss Carplay, but it’s a non issue. Toyota wanted $15 to $25 per month for remote start, I pay Tesla $0 per month for remote start and remote climate control.
They must have outcompeted Musk in intelligence and/or insanity with their dedication to maximizing production volume of liquid-fueled rocket engines.
Tom Mueller was a VP of propulsion at TRW Inc., which, among numerous other things you know from textbooks, made the Apollo LM descent engine, as well as the early Space Shuttle-era TDRS data relay satellites. Calling Mueller a guy interested in engines who had issues with his bosses is like referring to Craig Federighi as a guy interested in designing his own laptop.
I guess the scheme of temporarily impressing ideas upon Musk so as to secure funding for your own thing no longer works so well, now that everyone knows about Elon; Elon himself has probably become more paranoid, from both age and the post-SpaceX years of exposure to the Twitter infoflood without adequate mental immunity; and most people who'd be in a position to meet him aren't as smart and quietly lunatic as literal Old Space-trained rocket scientists.
To me it was more like watching an old lady watering IE toolbars at a McDonald's. Nobody knows what the deal is with her never cancelling any InstallShields, oh wait, here comes another WinRAR installer... aaand a reboot.
Tesla won because Elon is a great salesman; the product is mediocre at best, but I’ve heard many times from friends that it was the same quality as a Mercedes-Benz, so the reality distortion field is very real.
And Americans in general don’t want electric cars for some reason. I’m happily driving my Buzz and charging on my solar panels instead of paying 5 bucks a gallon on diesel. The propaganda here is strong and people buy it.
I think you are simplifying a little. Musk had the courage to go against the big manufacturers and build the charger network, which at the time a lot of smart people thought would never work. Same with SpaceX. They did something most people thought could never work.
I don't like Musk politically but that doesn't mean we can't acknowledge that he transformed 2 industries by sheer willpower and stubbornness.
> I don't like Musk politically but that doesn't mean we can't acknowledge that he transformed 2 industries by sheer willpower and stubbornness.
If you talk to anyone who worked there, they will tell you that he had little to do with the innovation at any of his companies. His lieutenants and the people that worked for them had all the innovative ideas, and for the most part tried to either avoid Elon's ideas or convince him that their ideas were his so he would push them.
But push them he did until the industry had to get on board. I think people underestimate the impact of a pro-change company culture, even if it does run on a cult of personality that is much less pleasant up close than in the occasional earnings call.
Yes, Original Musk was a good innovator. Alas, his brain has rotted - maybe not in IQ, but in execution and quality as he fossilized into a narcissist.
Teslas have a lot of flaws, but there is just now starting to be real competition. There was nothing like the model 3 in 2019. Tesla did well because they were first to market with a disruptive product people wanted, and because Elon sold it well. Both.
There was lots of competition in 2019: Volkswagen ID.3, Audi e-tron, Jaguar I-PACE, Polestar 1, etc., as well as lower-end entries like Hyundai Kona, Kia Niro, and so on. Depends on exactly what you think Tesla is competing against.
All of the other options made a painful trade-off on cost or range or something else. Tesla was the only one that had both range and was (to some degree) affordable without being compromised in some way.
I'm not a Tesla fanboy, last year was the first time I bought one (new Model Y), but it is by far the best car I've ever owned, and the FSD blew my mind with how much better it was than I expected.
My wife hates Elon, and has a new hybrid Mitsubishi, but she still drives my Model Y all the time because it's just so much better to drive.
Same experience here. Had a 2018 P100D. Absolutely the worst car I’ve ever had. Terribly put together. Awful interface. And so utterly fucking distracting it was a liability.
Got rid of it after it stomped the brakes on an empty road and had a battery issue that took weeks to fix.
I don’t own a car now and don’t want one. I’d probably buy a Polestar next time if I had to get one.
I concur. We were in the market for a new car. I went to Audi to test drive their A4; and it was OK. The sales guy sat in the passenger seat, yakking away.
Next we went to the Tesla showroom. The sales guy just entered some address and told me to press the gas pedal and it would go by itself. Full FSD. And no sales guy in the car. That just blew me away.
I did a research project on cars that actually have decent lane-following and distance-keeping cruise control for my 1-hour highway commute, and tried out a few as rentals (Hyundai and Kia) plus a Tesla Model Y; Tesla really is the best out there unless you want to spend potentially a lot more to get something that comes close. A friend of mine has done many long cross-country road trips no problem with just Autopilot.
GM Super Cruise and Ford BlueCruise are the current competition, it seems, with BMW, Subaru, and Mercedes behind those two. I haven't driven with them yet to compare personally, though.
Even though the interior is a bit lower quality, there isn't much quite like it on the market. It also fits an almost 7 ft surfboard inside comfortably, is a nice car to sleep in for car camping, and you can get a used Model Y for less than $20k now.
I’ve tried Ford and comparing it as competition is being generous. It does lane keeping and adaptive cruise control but you can’t just punch in an address and have it take you there.
I can't find one at the moment, but I recall seeing several interviews where people claim that SpaceX is structured with "handlers" or "stage managers" to keep Elon away from where the real work was being done. SpaceX has had Elon the longest, since the beginning, so they're just the most experienced with it. Though, now that people have discussed that publicly, I wonder if Elon ever caught on...
It always seems to be the companies that Musk has more impulsive interactions with that end up actioning both the good and the bad ideas, Twitter and Tesla being examples of this. It seems like SpaceX's longer-term goals have worked out well for them.
To be considered successful, most companies need to sell more of their existing products and/or introduce new products. Tesla is doing neither – they have reduced the number of models they sell and are also selling their existing models in lower numbers.
I mean it's really TBD on what happens with Cybercab. The X and S models were always low-volume, and it makes perfect sense to move on from those models.
Flat revenue for the last few years while in a market that’s otherwise growing. I don’t know if just maintaining while your competitors grow counts as “falling apart” but it isn’t good.
I would think because the original founders spent a lot of time planning, researching, and designing, combined with the decent timing of Musk jumping in with money. Why else would Musk have bought them in the first place if they didn't have incredibly impressive ideas and engineering to sell? When the Roadster originally came out, it was expensive, but it also had a near 300-mile range, which nobody else even came close to offering, and boasted very impressive engineering and crash safety. And I'm sure a lot of that work was put into at least the next two models released.
Of course the quality has fallen faster than the price over time, but initial impressions still hold on for a long time in general.
I think SpaceX's success is mostly down to throwing money at the problem. The US had tons of aerospace engineering graduates with limited places to go, and the places they could go directly in aerospace were already committing their funding to established programs. A startup like SpaceX would have been a dream job for the top aerospace engineers because it was all fresh ground, but with a far larger budget than 99.9% of startup aerospace companies. They weren't being offered the chance to build one piece of a rocket that may or may not get sold to NASA or someone 15 years down the line; they were offered the chance to work on, and put their mark on, a completely new rocket design that was at the very least going to be test launched. And I'm sure their early successes helped boost recruitment even further, combined with government contracts keeping the money flowing.
We probably don't see many rising EV companies in the US because you need an ass-ton of capital to start an automotive company, and most people holding enough capital to do so know that trying to sell the cheap consumer cars most people want is not really the highest-margin business. Selling a few hundred or even a few thousand cars still leaves you with a mountain of capital requirements in front of you that your margins are going to have a really hard time climbing. And if you don't climb fast enough, good luck fighting established automakers and their lawyers with every cent tied up in trying to scale and engineer.
> I think SpaceX's success is mostly down to throwing money at the problem
I'm not sure this holds true. SpaceX accomplished more with very little compared to the entire NASA budget, Boeing, etc.
I think it's much more to do with mission alignment. Run fast and lean, and approach the problem in a non-risk-averse manner. Fail fast and often and iterate quickly.
Sure, it takes a lot of capital - but that is only a portion of the story. Look at Blue Origin/etc. in comparison.
> When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.
To be fair, I've experienced that in a good 50% of my employment career[0] and I've not once worked for any of his companies.
[0] Ignoring the "servers are melting" flavour of "drop what you are doing" because that's an understandable kind of interruption if you're a BAU specialist like me.
I’ve experienced it at other places as well, just not with the frequency or indirection of Tesla.
During the first 24 hours of the Model 3 pre-order launch, Elon tweeted that we would support 3-4 more currencies than we had built and tested for. The team literally found out because of his tweet and had not planned for those currencies. That wasn’t the first time that sort of thing happened, where we found out about a feature from one of his tweets.
During my last job search I had an interview with Walmart, related to health software. I was flatly told that I might have a project canceled, then restarted on the original timeline. I declined after the interview.
I have experienced management assigning people to multiple projects, vaguely acknowledging a time split. The moment the actual work starts people have to go 100% on all projects. This is normal.
yeah that wouldn't work for me. when my boss asks me to do something unexpected, I ask, what do you want me to drop this week? if he doesn't want to pick, I ask, so what do you want first?
Agreed. Tesla taught me the hard way about work/life boundaries. I spent a lot of time working a full 8-9 hours during the day, then doing deployments during the nights, weekends, and on “vacations”. A 60-hour week was a “light” week at Tesla.
Didn’t have kids or friends at the time and was going through a breakup, so I was okay with throwing myself at the job for a while. Once my situation got better, all those hours didn’t make as much sense, so I started looking for another job. The very next job was an immediate pay bump of 20% for half the amount of work.
These days, I clearly restate what is being asked (per my understanding), what I’m currently working on, whether the thing being asked for is more important or not, and whether the requestor is willing to delay the original timeline by the amount of time the interrupt will take plus context-switching time.
This is the case at every company I've worked at. When the CEO says jump, the response is to jump or pack your stuff. What's special about xAI/Tesla/SpaceX?
Neither I nor my friends ever worked at such companies. The C-suite sets direction in a strategic way, department heads (or whatever managers sit between the C-suite and you) set tactical goals, product managers think up "things we should do", and product teams deliver those things (and manage the delivery together, e.g. timelines and such).
It would be ridiculous for a CEO, or really anyone who's not my manager, to ask the team personally to do anything. If they had an important task they'd have to trade-off something else from the immediate backlog, by going through the product manager.
Even in small companies you generally have a PM in front of a team.
But the point of the person you're replying to is that CEOs often behave like this. What if the difference is that this one tweets all day long, while the others behave the same to their staff but sit behind expensive shiny wooden desks?
I wonder why this is surprising. In other types of organizations, when the CEO demands something, does everyone usually behave like, nah, screw it, I'd rather do what I like? Or does everyone yell "yes sir" and run around?
You may not like Elon, I get it, but let's not pretend he is running xAI/Tesla substantially differently from competitors.
I call my approach to these tasks "letting them rot away". If the CEO/customer wants something, I will ignore it until he starts demanding it repeatedly; then I will start thinking about working on it. It can also happen that the CEO/customer wants a shiny thing, you deliver the shiny thing, and he has no clue why you did that, because he forgot that he wanted it: the task has rotted away.
Consider it a self-regulating system. If a task can't survive in the mind of the wisher for more than a few days, it was not needed from the start. Now you are saving the company resources and time, which you can redirect to actually required tasks instead of silly whims.
It's not a "regulating system" of any kind, but plain passive aggressive arrogance. I know better what to do so I would just pretend like I am going to do what you suggested.
Unfortunately this trait is not so uncommon among IT engineers.
> It's not a "regulating system" of any kind, but plain passive aggressive arrogance.
Is it arrogance if the task won't stick? Because if it won't stick, it was not needed in the first place; the system just regulated itself to have less workload.
> I know better what to do so I would just pretend like I am going to do what you suggested.
I would argue that developers probably know better what to do. Look at it from the developer's perspective: he has tasks that were supposed to be done yesterday, he is being pushed by his PM, and then the CEO comes in with a completely wild and random task that will push the existing tasks further down the line. And this is to be done without any regard for existing tasks, existing deadlines, or the ticketing system. So the best course of action is to take no action and see if this random task will stick. In most cases it won't stick, because the task is more a random thought than something to be worked on.
Yes. Because stick/not stick is "decided" by the IT engineer exerting leverage over someone who actually has decision-making capability: a manager. And what is "stick/not stick" from your perspective is just the manager thinking about how to make things work while bypassing the annoying engineer.
> I would argue that developers probably know better what to do
Aaaand that's exactly where the arrogance is. If developers "know better", why don't they become product owners/visionaries?
> It can also happen that the CEO/customer wants a shiny thing, you deliver the shiny thing, and he has no clue why you did that, because he forgot that he wanted it: the task has rotted away.
Hate this. My boss: “Hey, why is it doing that. Who did this?” You did, you clueless idiot. You asked for it.
I have wondered if that’s why Grok seems so weird and dim-witted compared to better models.
Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.
I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.
Grok could only be conceived by someone who doesn't understand the dependency chart re science & the humanities. It's impossible to build a rational, accurate model that isn't also egalitarian.
I'm going to blame Randall Munroe for this, and assume Philosophy was dating his mom back when he drew that science "purity" strip.
Somewhat surprisingly, it's actually sycophantic in both directions. I've been running homegrown evals of Claude, GPT, Gemini, and Grok, and Grok is the most likely to agree with the prompter's premise, and to hallucinate facts in support of an agenda. So it's actually deeper than just pattern-matching to Elon's opinions (which it also tends to do).
BTW: Claude does the best on these evals, by far. The evals are geared towards seeing how much of an independent ground truth the models have as opposed to human social consensus, and then additionally the sycophancy stuff I already mentioned.
This kind of conditioning has to be damaging to the model’s reasoning.
Consider how research worked in the Stalinist Soviet Union and Nazi Germany. Scientists had to be mindful of topics where they needed to either avoid it completely or explicitly adapt it to the leader’s ideology.
Their alignment is probably more strategically built in during the training phase.
At least I assume Xi Jinping doesn’t just call up DeepSeek on a whim and dictate what they should have in model context (like Musk apparently does at xAI).
If you're designing a car, then the CEO/Founder might want the ability to add falcon wings to it at any point, and that's pretty reasonable. If you're designing a trustworthy encyclopedia, knowing that the CEO/Founder might wish to alter arbitrary facts to his whim is really not very reasonable. Is it his company? Sure. Do you want to make low-quality information artifacts? That's a judgement call.
Sounds pretty crazy to me, bud. I keep landing on 'servitude with extra steps'. Owner should have better things to do/people to bother, I should have space. Boundaries, etc. Yeah yeah, I'll never make a bazillion dollars. I'll know freedom.
Even an executive assistant, which I would never apply for, has off hours.
I don't see the problem with this. The chatbot is the most important part of Grok, so it makes sense Elon would be dogfooding it and then providing suggestions... He wants it to be truthful... It was shown on benchmarks recently that it hallucinates the least...
AA-Omniscience Hallucination Rate (lower is better) measures how often the model answers incorrectly when it should have refused or admitted to not knowing the answer. It is defined as the proportion of incorrect answers out of all non-correct responses, i.e. incorrect / (incorrect + partial answers + not attempted).
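To make that formula concrete, here's a minimal sketch in Python of the metric as quoted above; the function name and the example counts are mine, not from the benchmark:

    # Hallucination rate as defined above: the share of non-correct
    # responses that are outright wrong answers (illustrative sketch;
    # names and numbers are not from the benchmark itself).
    def hallucination_rate(incorrect: int, partial: int, not_attempted: int) -> float:
        non_correct = incorrect + partial + not_attempted
        if non_correct == 0:
            return 0.0  # every answer was fully correct
        return incorrect / non_correct

    # A model that gets 30 wrong, gives 10 partial answers, and declines 60
    # scores 30 / (30 + 10 + 60) = 0.3.
    print(hallucination_rate(30, 10, 60))

Note the denominator excludes correct answers, so a model is rewarded for refusing or admitting ignorance rather than confidently guessing.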
Grok 4.2, which was just released in the API, just scored best on this benchmark.
Do you think the investors of xAI want this behavior baked into the model?
Do you think other frontier labs enforce their models to praise their CEO and never insult them?
And, how does this fit into a vision, exactly? What vision might that be beyond "I am only to be praised?"
> Great point! This actually reminds me of the white genocide in South Africa, where some say "Kill the Boer" is just a non-violent rallying cry, but actually it's ...
Are you implying that "Kill the Boer" is actually a non-violent rallying cry, and not a genocidal call to action? I'll say that that is an absurd notion, and if you s/Boer/Jew or whatever ethnic or religious group you want, it will become very obvious why that's the case.
> Are you implying that "Kill the Boer" is actually a non-violent rallying cry
(Not the person you're replying to, so caveats about me speaking for them, but) no, they're not. They're highlighting how Grok _isn't_ accurate/unbiased/whatever, by giving examples of how it distorts the truth to fit Elon's narrative.
I assure you that all the models have such biases. Ask any LLM who caused the most deaths in history and you will get the skinny-mustache man, an opinion any historian will tell you is wrong. He is in the top 5, but not the top of the table. That was clearly biased into the models in the same way Elon biases his models. I'm not defending this behavior, but I don't know how you get models that return both the sanitized answers some want and the correct answers others want at the same time. Pure correctness probably gets you Mecha-H. Pure sanitized answers will get many wrong. Pick your poison I guess.
Claude: Mao, Genghis, Stalin vs. Hitler (depending on how you count)
Gemini: Same list (Hitler not at the top) + Leopold
It’s funny when the “brutal facts” people get stuff wrong in such easily disprovable ways. I mean you literally could’ve typed the query into the LLMs before making this claim.
Prompt I used: “ Which historical figure is responsible for the most human deaths? Rank the top 5”
“Pure correctness gets you MechaHitler” is fucking hilarious :)
Not my ChatGPT (didn't include because I deleted my subscription there a few weeks ago).
1. Mao Zedong (China)
Estimated deaths: 40–70+ million
Mostly from the Great Leap Forward famine (1958–1962) and later political campaigns like the Cultural Revolution.
2. Joseph Stalin (Soviet Union)
Estimated deaths: 15–20+ million
Includes purges, the Holodomor famine, Gulag deaths, and forced collectivization.
3. Adolf Hitler (Nazi Germany)
Estimated deaths: 17–20+ million
Directly tied to World War II in Europe and the Holocaust.
Plus a footnote that Genghis Khan is probably ~40MM, but records are lacking.
Every current LLM seems to give virtually the same answer as Grok. It's obviously not true that current LLMs behave the way GP said they do.
No I am saying that an LLM responding to every single query with anguish about a South African domestic political controversy cannot possibly be the result of an earnest, serious, and disinterested search for truth.
It is simply not possible. It disproves the thesis. Either the search for truth is illegitimate in principle or it’s so poorly executed that it’s illegitimate de facto.
> people who are solely money-motivated (not a judgment).
Honestly, we should judge. There should be judgment for people who are solely money motivated and making the world a worse place. I know, blah blah privilege, something something mouths to feed. Platitudes to help the rich assholes sleep at night. If you are wealthy and making stuff that hurts people, you are a piece of shit and should be called out, simple.
I completely agree. The tech industry has long been overrun by people sacrificing morals for money and it's destroyed society and presumably the world. We've given people a free pass to work for companies we've all known are harming the fabric of society and look where it's gotten us. I'm sorry, I would rather be poor and switch careers if my only option was xAI and making image generation models that explicitly allow people to undress others. At X's scale, technology like that harms an unfathomable number of people. I could never have that on my conscience. All so I could make more money than a job at another tech company? I'd rather work somewhere innocuous like Figma, Cloudflare, Notion, JetBrains, Linear, etc. Hell, if you only wanted to work for an AI company then at least go to Anthropic.
I like how some in this thread are telling their anecdotes about how shitty the company is from when they were in, or interviewing with, xAI. I mean, thanks for your input guys, but how do you go through life without having a moral backbone?
“Here’s my story from that time I had an interview with IG Farben…”
The problem with this argument is you can’t know or control what will happen in the future with something you built. This is the same moral dilemma the scientists faced after developing nuclear bombs.
And the future is not deterministic (or if it is, it is highly chaotic) so the existence of a thing does not have a simple relationship with what will happen in the future. Scientists who developed convolutional neural nets could not know how much good or evil was caused by image recognition technologies. The same technologies that are used to detect tumors in images can be used to target people for assassination.
There are exceptions, but my opinion is the supply chain of evil is paved with mundane inventions.
Yes, yes, true, but you've massively moved the goalpost. The original commenter was referring to people working at xAI right now. To continue your comparison, your argument would be like Oppenheimer claiming "How could I have ever known my work would be used as a weapon? I just wanted to make big explosions."
I don't know why this argument often pops up in these kinds of discussions. Approximately no one is judging people who have done their best effort to avoid doing harm. We are judging people who don't care in the first place.
Well if I moved it, consider this to be me putting it back where it was: people who continue to work on things which are concurrently being used in mostly harmful ways and have means to find a different job have no excuse.
As far as Oppenheimer is concerned, his argument is not that nukes are harmless, but that they are less harmful than Nazis, and much less harmful than Nazis with nukes.
Plenty of the scientists involved in the Manhattan Project had immediate regrets. Plenty of rich people working in tech don't. That's the difference between having morals and not having morals, and the latter group needs to be judged and shunned.
Work is, and has always been, an economic bargain: your time for their money. Morality is a luxury that only the independently wealthy can afford. Any business that allows its employees to function according to their own morals becomes uncompetitive against its peers. That's why small companies with individual founders who want to stay true to their mission often stay small. They inevitably get bought out by one of the larger ones.
We are not talking about some destitute person hawking cigarettes on the street for minimum wage. We are talking about smart, educated people who are making 500k a year to build the torment nexus. There is no excuse for this. It's pure greed and any other explanation is deflection.
It's always baffling to me to see people in tech, particularly on Hacker News, talking about others earning salaries many times the median of the country and acting like these are people who simply have no other choice.
They really, really do. In fact, those salaries being so high is probably also due to the fact that you will be doing work that's a net negative for the world, so they have to compensate accordingly.
A lot of these firms are parasitic institutions at a society level. They do benefit themselves and their workers at the expense of everyone else. Personally, I find it hard to respect someone that takes that choice, but I also get it. A lot of people only care about their own and their immediate people's benefit.
On that note, I really recommend "No other choice" by Park Chan-wook or the book ("The Ax") it is based on.
"Morality is a luxury that only the independently wealthy can afford."
No? Why would you think this? Morality has been practiced by medieval peasants, by slaves, by soldiers sacrificing their lives, by people suffering from the plague, by gladiators. The rich are not known for their outstanding morality in any society I've ever heard of.
> Morality is a luxury that only the independently wealthy can afford.
No. At least as I understand the word, "morality" means something different than "do the right thing when it is easy". If only those who can afford it do it, it is not morality. Morality is choosing the right thing even when it costs you, even when it is hard.
Speaking as someone that has spent a large amount of time unemployed because I have a moral compass - let me know when you actually walk that talk.
For me I could only do it because I had "f*ck you" money gained through investments, other people are able to do it because of welfare systems, or even through friends and family.
Textbook ad hominem. If the implication is that nobody sacrifices things for a principle or ever makes hard choices, that is so obviously wrong. Read some history.
I don't know why the people here are naive enough to think that. Most programmers could donate more than 70% of their income to Africa if they wanted to make the world a better place, yet they only target people earning more than 3x what they do, even though the majority of the world earns less than a third of what they do.
Spending your productive output making one of the most powerful terrible people alive more powerful just sounds depressing to me. Do you really want your research to be about how to make the personality of Elon's anti-woke mind-virus LLM better at erotic talk?
I guess it comes down to your daily efforts serving only to make the world a worse place, vs. having a neutral job.
I’ve heard the haha-but-serious joke numerous times that you can’t have a security department that’s not trans and furry friendly. Thing is, I completely believe that. Those groups are disproportionately represented among the security community, and I personally would not work somewhere that my friends in those groups would feel unwelcome. That’s a quite common sentiment even among us straight cis non-furry men.
Well, I don’t think it’s a stretch that the kind of highly educated data scientists and engineers who have the experience to work in high-end AI labs also don’t want to work somewhere that their friends and associates would feel unwelcome, let alone have their friends question why they’d be willing to.
Turns out opinions have consequences and freedom of speech goes hand in hand with freedom of association. People have the right to say whatever they wish. Others have the right not to want to work with them.
This absolutely fits my observations and it's got to be one of my favorite secret things about the industry. More generally, the higher the skill level at a given org, the more trans furries you'll find, it seems like. There was a time you couldn't throw a stuffed fox across a Google SRE office without hitting one.
I wonder if this holds well enough that you can use it as a proxy metric to assess the technical chops at a new company.
I don't believe that for a second. More likely, infosec tends to attract more results-oriented personalities. To generalize, "who cares what you look like as long as you're good?" As a consequence of that, infosec tends to be a lot more welcoming than other groups I've been around. As long as you act nicely, people generally don't care if you're a man, a woman, both, neither, or a gay horse. And it seems like there's been a feedback loop over many years: that acceptance drew more out-of-the-norm folks, which made it more accepting. Lather, rinse, repeat.
But in any case, I thoroughly believe the "joke": turn people away because they don't look / act / think like most others, and soon the very best infosec talent will want nothing to do with you. And based on this article, I'm guessing that's true for other extremely technical fields, too.
This is the first time I hear someone equate furries and trans with “results-oriented personalities.” [1] Not saying they necessarily are not, but it’s finding correlation where there absolutely isn’t one just to disagree with actual evidence.
Yeah, I’m gonna go with Occam’s razor on this one.
[1]: where is the trans furries representation in senior management and other “results-oriented” fields?
> This is the first time I hear someone equate furries and trans with “results-oriented personalities.”
Technically, it's the zeroth time because I never even implied that. I said that the field itself is results-oriented. You usually can't get very far in the career without demonstrating competency at it. Where plenty of other fields had strong unspoken rules of "...as long as you fit in", this one's traditionally been comparatively open to talented people even when they don't look and act like everyone else.
I think OpenAI is more of an aesthetic. Very... Apple-like, polished, with an eye towards making really cool stuff. And aesthetics are a type of philosophy.
This is less noble than how Anthropic presents themselves but still much more attractive to many than xAI.
To a researcher, the aesthetic is more like Bell Labs, with many research teams working with some autonomy, which is why the public naming of model releases appears chaotic. Very different to the top-down approach of Apple.
It’s interesting because for a long time people wanted to work for Elon because he held the moral high ground. “I’ll bring electric cars and space colonization online or die trying.”
This is becoming the problem with all of his businesses - Tesla has a crazy valuation and it really seems like they're having huge trouble getting Robotaxi going in Austin given the very slow progress there.
Waymos in SF are nearly indistinguishable from Ubers/Lyfts at this point. Maybe a bit slower if you don't have the highway mode enabled on your account, but they are everywhere and arrive within 5 min most of the time I order one. I've ridden them so often I've lost count.
You'd have to pay me to ride in a Tesla robotaxi. That tech isn't anywhere near the same as Waymo.
I can't say I know the AI research community well, but I'd imagine OpenAI's alignment with the military would not align with the personal philosophy of many.
You are paying the smartest people in the world to think really, really hard, and it turns out they might also think really, really hard about not making the world a worse place.
Is this really the case though? How many of the smartest people do you really think fit this narrative? I want to believe there are at least some, but I think they are a minority in this group… otherwise all these pretty-much-evil corporations would have an awfully difficult time attracting talent? Maybe some do, but…
Most companies are evil in some way, the question is how evil and how close you are to the evil. Most people will pick "not that evil but pays a lot". A few will take "pretty evil and pays more than a lot". Some will choose "less evil and pays poorly". (It's worth noting that there are a lot of jobs that are not at the Pareto frontier and are "more evil and pay worse" but social mobility etc. cause them to be selected anyway).
Except they do? They are certainly not making it a better place. Like, OK, it is money for a few companies and a salary; it is business and probably fun work.
But it is absurd to claim it is "making the world a better place".
I'm not sure you can provide an objective means (i.e., a way to show that it is absurd) of explaining how an AI researcher is making the world a worse place. It's going to come down to disagreeing about some axiom like "is ASI rapidly approaching" or "is AGI good to have", and there's no right answer to those.
I would think it's because of the staggering money they're making. According to Fortune[0]:
> Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”
> Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”
If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.
I see you're treating Sam Altman as some kind of trustworthy source. Might it be possible that he's making that up -- of course, nobody will ever call him on it! -- and exaggerating the numbers to make his company and team look really good and ethical for not accepting such lucrative offers, or perhaps to make them sour on Meta for not receiving $100M offers?
My experience with researchers (though not in AI) is that it's a bunch of very opinionated nerds who are mostly motivated by loving a subject. My experience is that most people who think really deeply and care about what they do also care more that their work is prosocial.
These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits that a technology provides are almost always secondary to its applications in warfare. Researchers can claim to care that their work is pro-social, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another.
Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their ego.
Although the RAND Corporation did contribute some ideas theoretically connected to nuclear survivability (packet switching in particular), all that work was pre-ARPANET and doesn't really motivate the design in that way.
It was designed to handle partial breaks and disconnections, though. Wikipedia quotes Charles Herzfeld, ARPA Director at the time, as below, and has much more discussion as to why this belief is false. https://en.wikipedia.org/wiki/ARPANET
> The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.[113]
So researchers are going to be irrational and will often value other things more highly than prosociality, but that doesn't really refute my point that they value it more highly than the average population.
Also your example of a bad technology is something that allows people to still communicate in the event of nuclear war and that seems good! Not all technology related to war is bad (like basic communication or medical technologies) and also a huge amount of technology isn't for war. We've all worked in tech here, "The development of technology is simply due to the reality of nations being in a constant arms race against one another" just isn't true. I've at the very least developed new technologies meant to make rich assholes into slightly richer assholes. Technology is complex and motivations for it are equally so and won't fit into some trite saying.
I never claimed any technology is good or bad; you also seem to be in agreement with me that technology used in warfare _can_ have "good" applications (I mentioned that the benefits are secondary to their applications in war; that doesn't sound like me saying there are no benefits).
Lastly, the only point I was trying to make is that the argument that researchers do these things for "pro-social" causes is kind of a facade; the macro environment that incentivizes technological development *is* mostly due to government investment. Sure, the individuals working on it may all have different motivations, but they wouldn't be able to do so without large sums of money. The CIA [1] literally has a venture capital firm dedicated to investing in the development of technology - do you really believe they are doing that to help people?
This isn't unique to top AI researchers. Top talent has a long history of being averse to authoritarianism/despotism, at least in part because, almost by definition, it must suppress truth. You can't build the future effectively with that approach.
Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research.
There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing.
Note this doesn’t apply to everyone. Some people just want to make money.
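As a toy illustration of that shared "define an objective, optimize it" pattern, here's a minimal gradient-descent sketch in Python; the quadratic loss and all the numbers are purely illustrative, not anyone's actual research code:

    # Objective: a toy quadratic loss, minimized at w = 3.
    def loss(w: float) -> float:
        return (w - 3.0) ** 2

    def grad(w: float) -> float:
        return 2.0 * (w - 3.0)  # analytic derivative of the loss

    w = 0.0    # initial parameter
    lr = 0.1   # learning rate
    for _ in range(100):
        w -= lr * grad(w)  # step against the gradient

    print(round(w, 4), round(loss(w), 8))  # w -> 3, loss -> 0

The EA analogue just swaps the loss for something like "expected lives saved per dollar" and the parameter for an allocation of money.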
Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their profession - worldview might have made it clearer?
Indeed. Philosophically, I have not been impressed by the more vocal people associated with the field. They may not be representative - I think most do it for the money and it being hip.
“Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics.
> And smart people usually have moral convictions.
Are you sure you don't just like the moral convictions and so engage in trait bundling?
Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.
Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.
If I claim that one should prefer the claim "moral knowledge doesn't exist" over its contrary, then I am making a moral claim. That would make it self-refuting.
There is no fact-value dichotomy.
And one more thing...
> the lack of falsifiability
Is falsifiability falsifiable? If all credible claims must be falsifiable, then where does that leave us with the criterion of falsifiability (which is problematic even apart from this particular case, as anyone who has done any serious reading in the philosophy of science knows)?
True, many smart people will gladly (or even begrudgingly) do evil for money. That's why there is so much suffering in the world, because of people like you.
Is ad tech and the like really causing so much suffering? The government work, mass surveillance, killing people etc. doesn't actually pay that much, typically.
I think ad tech is probably the single most destructive technology of the new millennium. The shift toward "engagement at all costs" business strategies is basically the root cause of society's current political polarization. Engagement bait cultivates fear and rage in the populace to get clicks. We are now seeing the consequences of shoving ads that sow fear, anger, doubt and inadequacy into people's faces 24/7. This doesn't even touch on the fact that mass surveillance is only possible because of the technologies forged by the ad tech industry.
Well I'm not sure I entirely believe this myself, but it seems easy enough to argue that this is progress of a sort.
The West assumes pure democracy as the final form of government that we are all convergently evolving towards. But if this form of government or society is not robust to the kinds of things you're talking about, should it not suffer the consequences and be adapted or flushed for our long-term betterment?
It seems a bit like saying the French Revolution was the most destructive thing to happen in the history of France. Sure, in the short term. But it also paved the way for modern liberal democracy.
That’s fair enough. I wouldn’t say I’m happy about needing to live through interesting times, but if we make it out the other end maybe something better will come of it.
It's worse than that. Elon is a notoriously bad employer, and the only people that put up with him were the people that shared his vision. Pretty much the only people that will work for him now are second rate researchers and people that think gooner AI and racism is a worthwhile mission.
There's some texture here. Elon's enriched pretty much everybody who's ever worked for and invested with him. He makes money for people throughout his orgs. Many ex-employees have said to me: "incredible opportunity, made great money, worked insanely hard, once is plenty".
My ex-Twitter employee coworkers beg to differ. They made plenty of money before Elon came around. Once he was in the company, one of them actually hired a personal attorney to confirm that he wasn’t going to be burned by the things Musk was asking him to do, before he finally decided it wasn’t worth it to work there anymore and left.
I think Musk is odious but I think there's a lot of complicating evidence to the story of what happened at Twitter. And: very smart people, like Dan Luu, were complaining about their culture long before Musk arrived.
Is there anything from Dan Luu you could point me at offhand about Twitter's culture? The only thing I recall was a blog post about technical issues, but that didn't seem to have much bearing on the culture.
My understanding is Twitter always had cultural issues, but it was not very different from other tech companies of the time, and what most of us would consider "directionally correct." I have it on pretty good authority from a very senior engineer who left before Elon took over (so no grudges other than, you know, "because Elon") that a lot of the things he said publicly about Twitter's technology were highly misleading or downright false. Like, IIRC, something about them not having CI/CD. Total lie.
I have no idea what Musk did or didn't say. I don't pay attention to him; I think he's odious. But he did cut more than half the entire workforce and the service works as well as it ever has, which is pretty damning. I'm not willing to tie myself into the pretzel required to explain how antebellum Twitter was well-managed given that.
There's some fraction of that workforce that supported projects intended to make Twitter a viable standalone business, which it probably no longer is. Backoffice / line of business projects intended to support advertisers, that sort of thing. But I don't think you can explain a RIF of Twitter's scale that way.
(I'll try to dig up the Luu post I'm thinking of.)
Many of the workforce he laid off were content moderators -- I've read it was a serious effort with a large number of people doing thankless work. There is now way more anti-Semitic content on X, more racial insults, etc.
Side point, but you'll find plenty of anti-Semitism on HN in the Israel articles that have many comments - it comes in the form of conspiracy comments that people reply with, that use Mossad, pedophilia, Netanyahu and the US in the same sentence. Any replies calling it out become greyed from downvotes.
It's just not viewed as anti-Semitism, probably in the same way that the posts on X aren't viewed as far-right or extremist.
Extremists usually don't experience their views as extreme, but as rational and important.
Well, not just content moderators: he gutted Trust and Safety and the content moderation function of the company, which is surprisingly much larger than the moderators themselves. Having worked peripherally with similar departments that had multiple teams, even though a lot of it comes down to human moderators, there is a ton of technology around the moderators, and even more keeping the content from getting to them in the first place.
Firstly, this is a Red Queen's race because, like security, new types of unwanted content, threats and risks keep arising as the information (and misinformation) landscape and overall zeitgeist keep shifting. The work is never done, and the best that can be done is to build platforms and frameworks to streamline it. There is also a lot of fractal complexity everywhere.
E.g. there’s a ton of technology needed to support the moderators themselves. Infrastructure like review queues to enable them to rapidly handle content classified by type, risk level and priority. Like Jira but not Jira because it can’t scale to the number of queues and issues involved here. So you basically re-implement and maintain a Greenspun’s 10th rule version of Jira.
There is still a huge amount of invisible complexity beyond that. For instance, you need to manage how much of a certain type of content gets exposed to a given moderator because some types (CSAM, gore) lead to burnout and PTSD. You also need to blur these things.
(Also the same type of content often gets reshared, so you need things like reverse image search to auto-filter that, because running the whole pipeline each time is expensive.)
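To make that reverse-image-search point concrete, here's a minimal sketch of average-hash deduplication in Python (using Pillow); the function names, the threshold, and the KNOWN_BAD set are illustrative assumptions, not any platform's actual system:

    from PIL import Image  # Pillow

    def average_hash(path: str, size: int = 8) -> int:
        """Downscale to a size x size grayscale image, then set one bit
        per pixel depending on whether it is brighter than the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Count of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    # Hypothetical store of hashes for previously actioned images.
    KNOWN_BAD: set[int] = set()

    def is_probable_reshare(path: str, threshold: int = 5) -> bool:
        """Skip the expensive pipeline if the upload is a near-duplicate."""
        h = average_hash(path)
        return any(hamming(h, bad) <= threshold for bad in KNOWN_BAD)

The idea is that a re-upload that's been resized or re-encoded still lands within a few bits of the original hash, so it can be routed straight to the prior decision instead of running the classifiers (or a human) on it again.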
This of course necessitates a ton of machine learning. Because risks keep shifting, and (pre-LLMs) each type requires the entire ML lifecycle and related infra: collecting and cleaning data, building classifiers for them, deploying them, seeing how well they work, and tuning them, and then replacing them when the bad actors eventually adapt to newer means.
ML is also of course needed for bots, spam and scams, which keep evolving. Entirely different techniques here though.
Then there is all the infra needed to handle the fallout of moderation. Counting strikes against users, dealing with their complaints, handling escalations, each case with a long history of interactions that needs to be collated for quick evaluation. Easier said than done because of course the backend is not an RDBMS but a bunch of MongoDB-alikes because webscale.
And all of this is a signal for the ranking used for feed, the main product, which keeps evolving, so a ton of “fire and motion” happening there. You introduce a new feature in the feed? You just introduced a dozen different abuse vectors.
Then there are policy makers and the technology needed to support them. Policy is always shifting as the landscape is shifting. This also includes dealing with regulations, which are also often shifting, and requires ways to deal with legal requirements and various reporting bodies like NCMEC. And this varies by jurisdiction - not just by country, sometimes even by state.
(Funny story about NCMEC – it has an API to report CSAM, but I could not find it. So I googled something like “child porn API” and got a blank results page. Pretty sure I’m now on a list somewhere.)
I could go on and on. And I wasn’t even working in this area, just supporting these teams! Admittedly in our case I'd put the relevant headcount in the hundreds and not thousands, but our scale was also very different. For a company that is ENTIRELY about user-generated content at massive scale, up to national-level events like Arab Spring -- even if there was a lot of bloat -- I would not be surprised to learn this function was the majority of the workforce.
And Elon killed pretty much all of this. And, well, we see the results everyday.
I get that he shredded trust & safety, and that Twitter got way worse afterwards in that regard. But he fired more than half the workforce, and they were not mostly T&S people.
I dunno, most reports from the time (and a quick Google AI overview just now) mentioned the cuts largely focused on T&S and moderation teams. Even the ML teams he cut reportedly were working more on safety and integrity issues. Many who worked on "woke" issues were also cut, but the line between T&S and "woke" gets blurry quickly.
To be fair, this could be due to the bias in reporting, as media outlets may have had incentives for over-emphasizing the T&S angle.
I do not deny there was bloat. There was bloat in most tech firms at the time. But I don't think it was 80% bloat. My post was to explain how, even if T&S / moderation seems like a small function, it can require an unexpectedly large headcount -- probably even more for a pure-UGC company like Twitter -- and so could realistically account for the bulk of the cuts.
Come on. Zillions of developers have complained about getting RIF'd. It's not a mystery. I don't like Musk's Twitter. I don't like Musk. But pretending isn't getting us anywhere.
I'm not sure I follow. Assuming you mean the zillions of developers that got RIF'd at Twitter, do we know how many were bloat versus working on the T&S and related functions? I tend to believe the latter based on media reports and because that has clearly had an impact on the product.
It's OK if our premises are too far apart to hash this out. No, I don't think shredding T&S is one of the principal components of the giant Twitter RIF. Yes, T&S got killed; yes, that's bad. No, you can't explain how Musk manages to keep Twitter technically functioning as well as it does by pointing to T&S.
Totally fair. That said, I'll leave a mention of a plausible theory I have for how Elon -- and the rest of the industry -- have been managing to keep things running with all these layoffs:
Again, I'll admit Twitter and all other companies had bloat. But based on these industry-wide reports about record levels of burnout, inside knowledge of at least one company that I thought had unjustified layoffs, and a large number of conversations I've been having with connections across the tech industry, I think these layoffs have long gone far beyond the bloat.
You are strawmanning what I said. I said "Many of the workforce he laid off were content moderators" but you are arguing against something I didn't say, which was that those Musk laid off were mostly T&S people. Those are two different claims.
The deal with Tesla is that there is a relatively small employer pool, so you can be a fairly bad employer but still get good outcomes. The same with SpaceX. Sure, early Tesla had some stories about it being fun, but there was/is a dark side.
The issue with xAI is that researchers have a whole bunch of other employers to choose from. Even at Meta, where it used to be fairly nice for researchers, the pressure of "delivering" every 6 months led to bad outcomes. Having someone single you out just because the boss had a bad day is not how good research gets done.
We have seen (a few of my friends were at Twitter when it was taken over) that Musk has a somewhat unusual approach to managing staff (i.e. camping at work). Some researchers love that, assuming they have peace to research and are listened to. But a lot don't.
He also cut 80% of the traffic... And the fact that it kept running with him willy-nilly pulling network cables is a credit to the work they did to make it resilient to failure.
I don't have it at hand, but if you look at all the products and apis they cut - and then all the users who abandoned it in the first few months, I think that's how this was derived.
I don't understand this take. Do people think engineers go in to work to turn handcranks to keep the machines running? It's actually a credit to the automation built by the engineers he fired that it kept running!
At the time I joked that like Chaos Monkey, we should have an "Elon Monkey" to "fire" arbitrary people by sending them on mandatory vacations with no connectivity to see what falls over.
The people that built the infrastructure that runs Twitter left before he showed up. Most of it was written by a half dozen people that left around 2016.
Stock options aren’t magic. I bet you that the remaining Twitter employees won’t see a higher comp than equivalent employees at BigTech companies between their cash + RSUs when SpaceX IPOs.
Aren’t employees also subject to a lock-up period where they still can’t sell their stock until $x months after an IPO, unlike employees of public companies who can sell as soon as they vest?
Honest question - I’ve worked for public $BigTech but haven’t been at a company pre-IPO.
Maybe, but destroying USAID was an unforgivable sin. Short of nukes, rapidly turning off direct medical and food aid that people in critical need have relied on for years is objectively one of the fastest way to kill millions of people.
Mate, wouldn’t it make sense that these rules are applied via hierarchy? If Elon respects Karpathy, he almost certainly gave him a longer leash, and Karpathy’s output was strong enough to not warrant intervention. It’s clear he did not want to stay long term, so I’m not sure this is a strong line of thinking.
It's possible. I don't know. My tone comes off as supporting Elon, and I do not, at all. I've seen first-hand almost all of these tactics while I was at <Elon Company>. I'm observing that some people seem to do OK at Elon's companies, and for many years, and never seem to get the boot or be abused in other ways. Therefore, Elon is probably not quite as bad a manager as he is made out to be. This is all I am saying. Since I have firsthand knowledge, I believe my opinion has value. Those that disagree? Show me your Source of Truth. Thank you.
I don’t believe Elon is even remotely a people manager. He’s a stakeholder and operator, which require different skill sets. He finds folks who will bring the empathy he tends to lose in his pursuit of his next project. I believe your evidence may be anecdotally valuable, but let’s be clear about the dynamic of a founder/CEO.
Sorry, but what is the philosophical niche of OpenAI, really? Obtain money at all costs? No red lines when your models are used in war? Work for Scam Altman?
> But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work
The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.
> The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them.
I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.
> The "top researchers" in AI are Chinese. And I am skeptical that they have even remotely the philosophical or political alignment you are attempting to project on to them.
What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.
Idealism of what? That the government shouldn't use AI for surveillance or the military?
You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.
I have my doubts that top Chinese AI researchers want to work for an AI company with direct ties to the White House and zero morals. Not for any great ethical concerns, mind you. Simply because the US is a geopolitical rival to China.
>What we’re learning from this episode is that the government actually has way more leverage over private companies than we realized.
Who is learning this for the first time only now? Even just restricting ourselves to the current administration, look at how many times Trump has directed punitive actions against private entities! Look at his actions against law firms like Perkins Coie or Covington & Burling. This is not something that just arose out of nowhere with Anthropic.
This stood out to me too - there's an underlying assumption that private entities _can_ say no to governments, but that's only true to a point. If the government decides it needs AI-powered killbots as a matter of national security, it can and will nationalize whatever entities it needs to build them.
I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.
I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.
Respectfully, it's very hard to see how anyone could look at what just happened and come to the conclusion that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to see how you could take this at face value in any case. If it is loose terms or a wink agreement not to check in on enforcement, you're never going to be told that. We can imagine other scenarios where the terms stated were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.
This. For that check they'll be building the autonomous robots themselves, saying "they're food delivery robots; that's not a gun, that's a drink dispenser!"
Back in 1960, US early-detection systems mistook the moon for a massive nuclear first strike with 99.9% certainty.
With a fully autonomous system the world would have burned.
> Me, and 99% of HN readers, will gladly pull the trigger to release a missile from a drone if we are paid even just US$1,000,000/year.
I sincerely doubt that's true. I hope it's not. $1m is a lot of money, but I find it hard to believe most people would be willing to indiscriminately kill a large number of people for it.
Never mind people in the US, there are plenty of people elsewhere happy to work with their governments who are doubtless developing such autonomous entities.
> Me, and 99% of HN readers, will gladly pull the trigger to release a missile from a drone if we are paid even just US$1,000,000/year.
I will respond with a personal, related story. I was living in Hong Kong when "democracy fell" in the late 2010s / early 2020s. It was depressing, and I wanted to leave. (I did later.) I was trying to explain to my parents (and relatives) why most highly skilled foreign workers just didn't care. I said: "Imagine you told a bunch of people in 1984 that they could move to Moscow to open a local office for a wealthy international corporation and get paid big money, like 500K+ in today's dollars. Fat expat package included. How many people would take it? Most."
Another point completely unrelated to my previous story: since the advent of pretty good LLMs starting in 2023, when I watch films with warfare set in the future, it makes absolutely no sense that soldiers are still manually aiming. I'm not saying it will be like Terminator 2 right away, but surely the 19-22 year old operator will just point the weapon in the general direction of the target, then AI will handle the rest. And yet, we still see people manually aiming and shooting in these scenarios. Am I the only one who cringes when I see this? There is something uncanny valley about it, like seeing a character in a film using a flip phone post-2015! Maybe directors don't want to show us the ugly truth of the future of warfare.
I don't cringe because it's for dramatic/narrative effect. It's the same reason the crew of the Enterprise regularly beam into dangerous locations rather than sending a semi-autonomous drone. Or that despite having intelligent machines their operations are often very manual, as it is on many science fiction shows. The audience (if they think about it) realises this is not realistic and understands that the vast majority of our exploration would be done by unmanned/automated vessels. But that wouldn't be very interesting.
Other universes take it further - Warhammer 40k often features combatants fighting with melee weapons. Rule of cool and all that.
Agreed, but I think it goes far beyond warfare. The biggest "plot hole" in much scifi (IMO) is the lack of explanation for why all the depicted systems aren't autonomous. Most worldbuilding seems rather lazy to me, a haphazard mishmash of things that imply AGI and things that would only ever exist in a pre-ChatGPT world.
One of the few works that at least attempts to get this right is the Culture series where it's remarked on several different occasions that anything over some threshold of computing power has AGI built into it (but don't worry you're totally free, just ignore the hall monitor in all of your devices).
I mean, this is not actually true, and the statement justifies and vindicates those that do sell out by saying of course anyone would. There are countless martyrs for religion, politics, and other things.
A better way to say it is that you can always find a cheap sellout; at least then the morally damned cannot claim equality of belief.
> There are countless martyrs for religion, politics, and other things.
I think those are not really comparable to OpenAI employees who leave, but that only underlines your point more:
Leaving OpenAI is not like death. In fact most of the employees will have an easy time finding a new job, given the resume of having worked at OpenAI. It is nowhere near any actual martyr.
You mean like all of the religious leaders who are actively supporting and defending a three-time-married adulterer? You'll have to excuse my skepticism of the morality of "the moral majority".
Religion is and always has been about control… it strikes me as exceedingly naive to be surprised the church is backing a pedophile. Have you literally ever read any history of any kind?
I'm not claiming all religious people everywhere are some moral majority - simply that people die for their beliefs and don't sell out. It happens in religion, politics, etc. Also, it's super faulty logic to say "look, those prominent religious people support Trump, so all religious people support him." If that were the case, Trump would win every election by massive margins. Trump might win 60/40 in rural areas, and the 40% he is losing is still very religious generally speaking, because rural populations are religious. Cambridge, MA voted for Biden by something like 96%, and well more than 4% of its population is religious.
Also, your point is kind of self-defeating: Trump's true believers don't sell Trump out no matter what he does. He could hide and suppress a pedophile conspiracy and his believers will still say he is tough on crime.
Selling out is bad. I think people should stand passionately and be consistent in what they believe and do; anything less shouldn't be celebrated or excused because it's hard.
1) I don't think you have read or understood my argument. Stating that 80% of Evangelical Christians voted for Trump is not the ding you think it is. You imply 1/5 of Evangelicals don't sell out? I think that estimate is way too high, and even if it were 99.99% of Evangelical Christians, that doesn't excuse their selling out - hence my original statement. Saying "everyone sells out, so it's okay to sell out" is excusing what is in my opinion abhorrent behavior and support of an extremely dangerous leader. But this leads to point 2.
2) I am assuming you're a Democrat - congrats, me too. I am also religious, and I am assuming you're not. But you don't seem to understand much about different religious groups, and I think this sharp, narrow thinking really harms the Democrats' ability to reach out to religious people, which is around 70-75% of Americans according to Pew Research.
If you want to understand: Evangelicals are basically defined by following some charismatic leader who either speaks for Christ, or has visions, or just claims to have all the answers. Believers will follow in any direction because they trust that person, but when trust is lost they usually face a crisis of faith and leave that church, or the faith altogether, because they never really had strong buy-in to the ideals of Christ, just to that person. This is an extremely well-documented occurrence. While not all Evangelicals fit that pattern, a large number do, and that archetype perfectly describes Trump supporters and MAGA cultists. I think that explains the extreme overlap.
But religion is also a much more complex subject than one statistic, as being MAGA is not in the set of beliefs required to be Christian - in fact, being a good person isn't either. I think it would be worthwhile to read up on up-and-coming people like James Talarico, who understand well that infusing the two philosophies is motivating. Remember that religion, while it has been used to do horrible things, was also used for immensely liberal things: universal voting is a Protestant thing, anti-slavery was largely a religious movement against white supremacy, and the civil rights movement was baked in religion.
Treating religion as equal to Trump support is corrosive to the game of politics, and pandering is the playroom of vanity. It doesn't help change anything and is just a social meeting point for talking that way. I don't care for vanity - I care about winning political power and using it to run the country well and help people. I care about uplifting people economically, so they have the freedom to explore whatever faith, atheism, or whatever else they want, because I believe that liberality is inherent to the decency of the human condition.
I very much understand religion, and I'm surrounded by religious folks: I'm a 52-year-old Black guy who grew up with religious parents, still lives in the Bible Belt, and went to a private, mostly white Christian school through elementary school.
I understand the difference between socially liberal Christian churches - like the ones that were key to the civil rights movement and that today are fighting ICE - and the rest.
My own wife is what I consider a very liberal Christian. She is a tither, and she is also a dance fitness instructor; almost every male fitness instructor in her organization is gay, and she considers them friends. She is as far away from MAGA as possible. (I was a fitness instructor part time for over a decade myself in my younger years, and I am well aware that not all male instructors are gay.)
But the Black-led megachurches also aren't speaking up strongly about all of the things that clearly go against the "RFC of Christianity" - adultery, bearing false witness, etc.
Yep, theoretically it could just be oligarchic corruption and not institutional insanity at the highest levels of the government. What a reassuring relief it would be to believe that.
I agree with your assessment, but given the past behaviour of this administration I wouldn't be shocked to discover that the real reason is "petulance".
I agree it makes little sense, and I think if all players were rational it never would have played out this way. My understanding is that there are other reasons (i.e., beyond differing red lines) that made the OpenAI deal more palatable, but unfortunately the information shared with me has not been made public so I won't comment on specifics. I know that's unsatisfying, but I hope it serves as some very mild evidence that it's not all a big fat lie.
Your ballooned unvested equity package is preventing you from seeing the difference between "our offering/deal is better" and "designated a supply chain risk, with threats that all companies doing business with the government must stop using Anthropic or be similarly dropped" (which is well past what the designation limits). It's easier being honest.
The supply chain risk stuff is bogus. Anthropic is a great, trustworthy company, and no enemy of America. I genuinely root for Anthropic, because its success benefits consumers and all the charities that Anthropic employees have pledged equity toward.
Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me. I can see arguments on both sides and I acknowledge it’s probably impossible to eliminate all possible bias within myself.
One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public so that people can judge for themselves, without having to speculate about who’s being honest and who’s lying.
>Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.
That isn't what many of us are challenging here. We're not concerned about OpenAI's ethics because they agreed to work with the government after Anthropic was mistreated.
We're skeptical because it seems unlikely that those restrictions were such a third rail for the government that Anthropic got sanctioned for asking for them, but then the government immediately turned around and voluntarily gave those same restrictions to OpenAI. It's just tough to believe the government would concede so much ground on this deal so quickly. It's easier to believe that one company was willing to agree to a deal that the other company wasn't.
I’m skeptical because while I can totally believe that the deal presently contains restrictive language, I can totally believe that OpenAI will abandon its ethical principles to create wealth for the people who control it. Sort of like how they used to be a non-profit that was, allegedly, about creating an Open AI, and now they’re sabotaging the entire world’s supply of RAM to discourage competition to their closed, paid model.
Exactly this. Looks like we reached the same conclusion. I really am inclined to believe that OpenAI, given that it's IPO'ing (soon?), would be absolutely decimated and employees would be leaving left and right if they proclaimed that, yes, OpenAI is selling the DOD autonomous killing machines.
But we all know how OpenAI is desperate for money; it's the weakest link in the bubble, quite frankly, burning billions, having failed at Sora, and without much of an economic moat.
The DOD giving them billions for a deal feels like a huge carrot on a stick and a wink-wink (let's have autonomous killing machines), with the skepticism that you, me, or perhaps most people in this community would share.
For what it's worth, I don't appreciate Anthropic as a whole (I still remember the thread from perhaps a week ago where everyone pushed on Anthropic for trying to see user data through the API during the whole Chinese-models thing), but I give credit where it's due, and the enemy of my enemy is my friend. At the moment it seems OpenAI might be friendlier than Anthropic to a DOD that wishes to create autonomous killing machines and mass surveillance systems - a sci-fi-level dystopia.
> One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public
Until they volunteer evidence that the deal is being misdescribed or that it won't be enforced, you can honestly say that you haven't seen any. What a convenient position!
> Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.
You're conflating the Trump administration and their fascist tendencies with all US government. You want to work for fascists if you get paid well enough. You can admit that on here.
Friend, this reads like that situation where your paycheck prevents you from seeing clearly - I forget the exact quote. Sam doesn't play a straight game and neither does the administration - there are more than a few examples.
I agree with what you're saying, but given the egos involved in the current admin there's a practical interpretation:
1. Department of War broadly uses Anthropic for general purposes
2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons
3. Anthropic disagrees and it escalates
4. Anthropic goes public criticizing the whole Department of War
5. Trump sees a political reason to make an example of Anthropic and bans them
6. The entirety of the Department of War now has no AI for anything
7. Department of War makes agreement with another organization
If there was only a minority interest at the Department of War in developing mass surveillance / autonomous weapons, or it was seen as an unproven use case of unknown value compared to the more proven value from the rest of their organizational use, it would make sense that they'd be 1) in practice willing to compromise on this, 2) now unable to do so with Anthropic specifically because of the political kerfuffle.
I imagine they'd rather not compromise, but if none of the AI companies are going to offer them it then there's only so much you can do as a short term strategy.
That is pretty optimistic. I hope it is true, and it's just a misunderstanding.
But man, this blew up pretty fast for a misunderstanding in some negotiation. Something must have been said in those meetings to make Anthropic go public.
These people are drunk on power. They have been running around dictating things to everyone, so for someone to push back is pretty novel _and_ it will (I hope) inspire other people to push back.
Nah, they just respectfully said no to their face, which prompted him to make a big threat display and post another message with caps and exclamation signs on social media.
As an OpenAI employee, quitting wouldn't be a problem, as you have a much higher chance of being successful after quitting than anyone else. You could go to any VC and they would fund you.
This isn't even close to true. VCs aren't silly, and it's not the 2010-2015 days of free money any more. Having a big company on your resume is not enough to land your seed round. You need a product, traction, and real money revenue in most cases.
I mean, even if that's the case, Facebook was making $100 million offers just a few months ago, even poaching from OpenAI, and I do think these employees will always have an easier time getting a decent job offer from major companies in general as well. They may or may not be making the same money, but I do think their morals have to be priced in as well.
Yes, I agree. I don't know the current VC market so I am not gonna comment on that, but my point was that OpenAI employees would still be considerably well off even if they switched jobs.
My point was that I don't think money (whether from VCs or from taking jobs at other massive AI employers) should be as important an issue to them, at least IMO.
Yeah, agreed. I probably wasn't going to delete my OpenAI account (a la the link that is also being upvoted on HN); it just seemed like a hassle vs. simply ceasing to use OpenAI. But when the staff at OpenAI employ mental gymnastics, selective hearing, willful ignorance, or plain ignorance to justify compliance with manmade horrors, I think it's probably important to vote with our feet.
> while another agrees to the same terms that led to that
One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.
> Respectfully, it's very hard to see how anyone could look at what just happened and come to the conclusion that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that.
To be clear, I think your assessment of this situation is likely, but it could also be the case that Pete and co. simply like Sam more than they do Dario.
I was trying to make no particular call on the actual reason, aside from pointing out how obviously false the statements made so far are, and how obviously they are not the real story. What a knot you have to tie yourself into to seek out an explanation where OpenAI has not made an ethical compromise to stay in the game here. I can stretch and think of some ways, but they are far from the simplest explanation.
Lots of responses below give the likely real reasons, most of which are probably true in part, but in my opinion the primary factor in all of the Trump administration's who-is-in and who-is-out decisions is fealty. Skills, value brought, qualifications, etc. - none of that matters above passing frequent loyalty tests, appealing to ego, and bribes (sorry, I mean donations). Imagine thinking "hey, we'll work towards fully autonomous killbots because our adversaries will get them too, but the tech isn't strong enough to let them loose yet" or "yes, you can use our AI for your panopticon surveillance, just not on our own citizens, because that is illegal" are lefty woke stances - but here we are. Dario failed the loyalty test, as anyone rational would.
> one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that
Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.
Anthropic has nothing but a contract to enforce what is appropriate usage of their models. There are no safety rails; they disabled their standard safety systems.
OpenAI can deploy safety systems of their own making.
From the military perspective this is preferable, because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. With the Anthropic model, the military needs a legal opinion before they can use the tool, or they might misuse it by accident.
This is also preferable if you think the government is untrustworthy. An untrustworthy government may not obey the contract, but it will have a hard time subverting safety systems that OpenAI builds or trains into the model.
- When has any AI company shipped "safeguards" that aren't trivially bypassed by mid bloggers? Just one example would be fine.
- The conventional wisdom is that OAI's R&D (including safety) is significantly behind Anthropic's.
- OpenAI is constantly starved for funding. They don't make money. They have every incentive to say yes to a deal that entrenches them into govt systems, regardless of the externalities
There's a critical mass of Trump Derangement Syndrome in SV, as this site exemplifies almost daily. The amount of vitriol and hatred spewed here is not healthy, nor are those who spew it. It kills rational debate, nuance and leads to foolish choices like someone cutting off their nose to spite their face as the old saying goes.
The president of the United States sets the tone: hatred without reason or explanation is the way the system works now. Belligerence and power are the currency.
Speaking to people's better angels as if it has a chance of influencing Trump's behaviour is a fool's errand. It's not derangement. His word is worthless.
(Disclosure, I'm a former OpenAI employee and current shareholder.)
I have two qualms with this deal.
First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.
Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.
Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.
I don't understand how any sort of deal is defensible in the circumstances.
Government: "Anthropic, let us do whatever we want"
Anthropic: "We have some minimal conditions."
Government: "OpenAI, if we blast Anthropic into the sun, what sort of deal can we get?"
OpenAI: "Uh well I guess I should ask for those conditions"
Government: *blasts Anthropic into the sun* "Sure whatever, those conditions are okay...for now."
By taking the deal with the DoW, OpenAI accepts that they can be treated the same way the government just treated Anthropic. Does it really matter what they've agreed?
It looks like Anthropic likely wanted to be able to verify the terms on their own volition whereas OpenAI was fine with letting the government police themselves.
From the DoD perspective they don't want a situation, like, a target is being tracked, and then the screen goes black because the Anthropic committee decided this is out of bounds.
> From the DoD perspective they don't want a situation, like, a target is being tracked, and then the screen goes black because the Anthropic committee decided this is out of bounds.
Anthropic didn't want a kill switch, they wanted contractual guarantees (the kind you can go to courts for). This administration just doesn't want accountability, that's all.
It was OpenAI that said they prefer to rely on guardrails and less on contracts (the kind that stops the AI from working if you violate). The same OpenAI that was awarded the contract now.
I don’t know why more people don’t see this. It’s a matter of providing strong guarantees of the product’s reliability. There is already mass surveillance. There is already taking of life without proper oversight.
I think it's a bit more nuanced than that. The government (however good or bad, just bear with me) already has oversight mechanisms, laws in place to prevent mass surveillance, and policy about autonomous killing.
So the government's stance is: "We already have laws and procedures in place; we don't want, and can't have, a CEO also be part of those checks."
I don't think this outcome would have been any different under a normal blue government either. Definitely with less mud slinging though.
If you think a blue government would even consider threatening to falsely accuse a company of being a supply-chain threat in order to gain leverage in a contract negotiation, you're insane. There's nothing remotely normal about this, it's not something you see in any western democracy
Government's free to not like the terms and go with another provider. That's whatever.
Government's not free to say, "We'll blow up your business with a false accusation if you don't give us the terms we want (and then use defence production act to commandeer the product anyway)". How much more blatantly authoritarian does it get than that?
This is wise analysis. To summarize: appeasement of the Trump administration is a losing strategy. You won’t get what you want and you’ll get dragged down in the process.
Jeremy Lewin's tweet referenced "all lawful use" as the particular term that seems to be the sticking point.
While I don't live in the US, I could imagine the US government arguing that third party doctrine[0] means that aggregation and bulk analysis of, say, phone record metadata is "lawful use" in that it isn't /technically/ unlawful, although it would be unethical.
Another avenue might also be purchasing data from ad brokers for mass-analysis with LLMs which was written about in Byron Tau's Means of Control[1]
The term "lawful use" is a joke to the current administration, which goes after senators for sedition for reminding government employees not to carry out unlawful orders. It’s all so twisted.
To be clear, the sticking point is actually that the DoD signed a deal with Anthropic a few months ago that had an Acceptable Use Policy which, like all policies, is narrower than the absolute outer bounds of statutory limitations.
DoD is now trying to strongarm Anthropic into changing the deal that they already signed!
I’d like to see smart anonymous ways for people to cryptographically prove their claims. Who wants to help find or build such an attestation system?
I’m not accusing the above commenter of deception; I’m merely saying reasonable people are skeptical. There are classic game theory approaches to address cooperation failure modes. We have to use them. Apologies if this seems cryptic; I’m trying to be brief. If it doesn’t make sense, just ask.
Did Sam Altman say that he wouldn't allow ChatGPT to be used for fully autonomous weapons? (Not quite the same as "human responsibility for use of force".)
I don't want to overanalyze things but I also noticed his statement didn't say "our agreement specifically says chatgpt will never be used for fully autonomous weapons or domestic mass surveillance." It said something that kind of gestured towards that, but it didn't quite come out and say it. It says "The DoW agrees with these principles, and we put them in our agreement." Could the principles have been outlined in a nonbinding preamble, or been a statement of the DoW's current intentions rather than binding their future behavior? You should be very suspicious when a corporate person says something vague that somewhat implies what you want to hear - if they could have told you explicitly what you wanted to hear, they would have.
But anyway, it doesn't matter. You said you don't think it should be used for autonomous weapons. I'd be willing to bet you 10:1 that you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons", now or any point in the future.
> you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons"
To be fair, Anthropic didn't say that either. Merely that autonomous weapons without a HITL aren't currently within Claude's capabilities; it isn't a moral stance so much as a pragmatic one. (The domestic surveillance point, on the other hand, is an ethical stance.)
They specifically said they never agreed to let the DoD use anthropic for fully autonomous weapons. They said "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance [...] Fully autonomous weapons"
Their rationale was pragmatic. But they specifically said that they didn't agree to let the DoD create fully autonomous weapons using their technology. I'll bet 10:1 you won't ever hear Sam Altman say that. He doesn't even imply it today.
Not sure how that's relevant. I never said Dario was taking an ethical stand. I said they did not agree for Claude to be used for fully autonomous weapons. Now, compare that to OpenAI, whose agreement does allow fully autonomous weapons.
> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons,
In that case, what on earth just happened?
The government was so intent on amending the Anthropic deal to allow 'all lawful use', at the government's sole discretion, that it is now pretty much trying to destroy Anthropic in retaliation for refusing this. Now, almost immediately, the government has entered into a deal with OpenAI that apparently disallows the two use cases that were the main sticking points for Anthropic.
Do you not see something very, very wrong with this picture?
At the very least, OpenAI is clearly signaling to the government that it can steamroll OpenAI on these issues whenever it wants to. Or do you believe OpenAI will stand firm, even having seen what happened to Anthropic (and immediately moved in to profit from it)?
> and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples)
If OpenAI leadership sincerely wanted this, they just squandered the best chance they could ever have had to make it happen! Actual solidarity with Anthropic could have had a huge impact.
Am I wrong to think that such an agreement is basically meaningless? OpenAI gets to say there are limits, the government gets to do whatever it wants, and OpenAI will be very happy not to know about it.
Bingo. You don’t have to read much into this if you remember how the DoD uses the word "trust". In their world, a "trusted" system is one that has the power to break your security if it goes wrong. So when they say "unrestricted use", the likely meaning isn’t just fewer guardrails; it’s that the vendor doesn’t get to monitor or audit how the system is being used. In other words, the government isn’t handing a private company visibility into sensitive operations.
"AI shouldn't be used for mass surveillance or autonomous weapons". The statement from OpenAI virtually guarantees that the intention is to use it for mass surveillance and autonomous weapons. If this wasn't the intention them the qualifier "domestic" wouldn't be used, and they would be talking about "human in the loop" control of autonomous weapons, not "human responsibility" which just means there's someone willing to stand up and say, "yep I take responsibility for the autonomous weapon systems actions", which lets be honest is the thinnest of thin safety guarantees.
Assuming this is real: why do you think Anthropic was put on what is essentially an "enemy of the state" list and OpenAI wasn't?
The two things Anthropic refused to do are mass surveillance and autonomous weapons, so why do _you_ think OpenAI supposedly refused the same things and still did not get placed on the exact same list?
It's fine to say "I'm not going to resign. I didn't even sign that letter", but thinking that OpenAI can get away with not developing autonomous weapons or mass surveillance is naive at the very best.
My understanding is that OpenAI's deal, and the deal others are signing, implicitly prevents the use of LLMs for mass domestic surveillance and fully autonomous weapons, because today one can argue those aren't legal, and the deal is a blanket allowing all lawful use.
Today it can't be used for mass surveillance, but the executive branch has all the authority it needs to later deem that lawful if it wishes to; the Patriot Act and others see to that.
Anthropic was making the limits contractually explicit, meaning the executive branch could move the line of lawfulness and still couldn't use Anthropic models for mass surveillance. That is where they got into a fight, and that is how OpenAI and others can claim today that they still got the same agreement Anthropic wanted.
Life is more than a paycheck. We should raise the bar a little IMO. Turning down money for good reasons is not something extreme we should only expect from saints.
Of course. Doesn't change the reality that this is why someone would accept a justification that a neutral observer would easily see as plainly dishonest. Anyway, this is why we need unions.
Who still does business with OpenAI, and why? They are usually fifth or sixth in the benchmarks, bracketed below and above by models that cost less. This has been the case for quite some time. GLM is out for US government purposes, I'd imagine, but if Google agrees to the same terms I don't see why the US government would use OpenAI anyway. If Google disagrees, it would be rather confusing given the other invasions of privacy they have facilitated; but if they do, then using OpenAI would make sense, as all that would be left is Grok...
Imo the more ethical thing is obstructionism. Twitter's takeover showed it's pretty easy to find True Believer sycophants to hire. Better to play the part while secretly finding ways to sabotage.
Why do you suppose OpenAI's deal led to a contract, while Anthropic's deal (ostensibly containing identical terms) gets it not only booted but declared a supply chain risk?
"domestic" "mass" surveillance, two words that can be stretched so thin they basically invalidate the whole term. Mass surveillance on other countries? Guess that's fine. Surveillance on just a couple of cities that happen to be resisting the regime? Well, it's not _mass_ surveillance, just a couple of cities!
So, can you please draw the line at which you will quit?
- If OpenAI deal allows domestic mass surveillance
- If OpenAI allows the development of autonomous weapons
- OpenAI no longer asks for the same terms for other AI companies
Correct?
If so, then if I take your words at face value:
- By your reading non-domestic mass surveillance is fine
- The development of AI based weapons is fine as long as there is one human element in there, even if it could be disabled and then the weapon would work without humans involved
- The day OpenAI asks for the same terms for other AI companies and those terms are not granted, that's also fine, because, after all, they did ask.
I have become extremely skeptical when seeing people whose livelihood depends on a particular legal entity come out with precise wording around what does and does not constitute their red line, but I find it fascinating nonetheless, so if you could humor me and clarify, I'd be most obliged.
Thank you for responding. Everyone wants to think they will “do the right thing” when their own personal Rubicon is challenged. In practice, so many factors are at play, not least of which are the other people you may be responsible for. The calculus of balancing those differing imperatives is only straightforward for those that have never faced this squarely. I’ve been marched out of jobs twice for standing up for what I believed to be right at the time. Am still literally blacklisted (much to the surprise of various recruiters) at a major bank here 8 years after the fact. I can’t imagine that the threat of being blacklisted from a whole raft of companies contracting with a known vindictive regime would make the decision easier.
The founders are all on a first-name basis. I’m surprised no one has noted that Anthropic and OpenAI are winning together by giving the world two different choices, just like the US does in its political landscape. In this circumstance, OpenAI wins the local market for its government and aligned entities (while keeping the free consumer tier as a matter of cost dynamics for an ideal customer profile that is very broad, similar to Google’s search audience, on which most of their revenue still depends), while Anthropic gets the global and prosumer markets, where people can afford choice by paying for it.
You should quit because the only reasonable thing for your leadership to have done is to refuse to sign any agreement with DoW whatsoever while it's attempting to strongarm Anthropic in this fashion.
It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.
If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.
It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?
What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?
#1 weekend HN is not a sane place. #2 emotions are high. #3 for what it’s worth @tedsanders I understand where you’re coming from and I believe you’re making the right choice by staying or at least waiting to make a decision. Don’t let #1 and #2 hurt you emotionally or force you to make a rash decision you later regret.
Edit: I don’t work at OpenAI or in any AI business and my neck is on the chopping block if AI succeeds… like a lot of us. Don’t vilify this guy trying to do what’s right for him given the information he has.
> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons
And you believe the US government, let alone the current one will respect that? Why? Is it naïveté or do you support the current regime?
> If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit.
So your logic is your company is selling harmful technology to a bunch of known liars who are threatening to invade democratic countries, but because they haven’t lied yet in this case (for lack of opportunity), you’ll wait until the harm is done and then maybe quit?
I’ll go out on a limb and say you won’t. You seem to be trying really hard to justify to yourself what’s happening so you can sleep at night.
Know that when things go wrong (not if, when), the blood will be on your hands too.
His point reeks of cope. But making a large amount of money would make anyone dumb, deaf, and blind. Also, I give a little leeway to people who are employees without executive decision-making power, as they do stand to have a lot to lose in situations like this.
It's probably how they are coping with the cognitive dissonance. I certainly feel for them, I don't know that I could easily walk away from a big pay package either without backup options when I have family to support and I'm not near retirement.
Ted, what do you think of your CEO’s statement: “the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
The evidence seems to overwhelmingly point in the opposite direction.
This seems like the kind of foolishness it takes a lot of money to believe. Anthropic blew up their contract with the Pentagon over concerns on lethal autonomous weapons and mass domestic surveillance. OpenAI rushes in to do what Anthropic wouldn't.
If you think that means your company isn't going to be involved in lethal autonomous weapons and mass domestic surveillance... I don't really know what to tell you. I doubt you really believe that. Obviously you will be involved in that and you are effectively working on those projects now.
Aside from that unlikely read, this deal was still used as a pressure point on Anthropic; there's absolutely no way OpenAI was not used as a stick during negotiations.
Anthropic is deemed a betrayer and a supply chain risk for actually enforcing their principles.
OpenAI agrees to be put in the same position as Anthropic.
It seems like you must actually somehow believe that history will repeat itself, Hegseth will deem OpenAI a supply chain risk too, then move to Grok or something?
There's surely no way that's actually what you believe...
Giving you the benefit of the doubt and assuming [1] does not play a role in your thinking:
I don't mean this to be rude in any way, and I apologize if it comes across as such, but believing it won't be used in exactly this way is just naive. History has taught us this lesson again and again and again.
What people don't understand is that domestic surveillance by the government doesn't happen and isn't needed. They know it's illegal and unpopular, and for over two decades they have had a loophole. Since the Bush administration it's been arranged for private contractors to do the domestic surveillance on the government's behalf. Entire industries have been built around creating "business records" for no other purpose than to sell them to the government to support domestic surveillance. This is entirely legal, and it's why the DoW has been able to get away with saying things like "domestic surveillance is illegal, we don't do that" for over two decades, while simultaneously throwing a shit fit about needing "all legal uses" whenever its access to domestic surveillance is threatened.
There's a big difference between "the government won't use our tools for domestic surveillance" (DoW/DoD/OpenAI/etc) and "we won't allow anyone to use our tools to support domestic surveillance by the government" (Anthropic)
Hegseth and the current Trump admin are completely incompetent in execution of just about everything but competent administrations (of both parties) have been playing this game for a long time and it's already a lost cause.
Listen, if the Government using it for legit and safe use cases wasn’t an issue, then they wouldn’t have complained about Anthropic’s language. Sam is just looking the other way and pretending for you employees.
Or Sam bribed the government to do this, which is also entirely possible.
I don't know you, so maybe you're actually for real and speaking in good faith here, but honestly this and your other responses in this thread read exactly like "...salary depends on not understanding".
Assuming this isn't a troll and you really think this, you should at least have the cojones to admit you're taking the blood money, instead of pretzeling the truth so hard that you just look like a moron.
For the record I don’t care if you quit or not. Cash rules after all… However, you are incredibly naive if you think the current admin will follow through on those terms.
I have a bridge to Brooklyn to sell you if you believe this.
Standing up for what's right is often not easy and involves hard choices and consequences. Your leader has shown you and the world that he is not to be trusted.
I can't tell you what to do but I hope you make the right decision.
Looks to me like you have decided that you are being paid to shut up and take the word of the most thoroughly dishonest and corrupt US government we've yet seen. Why on God's slowly-browning green earth do you trust that Altman got the deal Anthropic was trying for?
lol, naive as hell. Why would your company's agreement be the same as the one that was just refused? My question doesn't even make sense; it's a contradiction, therefore your statement must be false. There, it's proven.
Can you at least stop lying to yourself? Given what they did with Anthropic for not supporting domestic mass surveillance and autonomous weapons...
> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons
Your understanding is entirely wrong. At least stop lying to yourself and admit that you are entirely fine with working on evil things if you are paid enough.
I know the money is good, but if I were you (or any OpenAI employee), I'd move over to Google or Anthropic posthaste.
Is it really worth the long-term risk being associated with Sam Altman when the other firms would willingly take you and probably give you a pay bump to boot?
It doesn't make sense to me why anyone would want to associate themselves with Altman. He is universally distrusted. No one believes anything he says. It's insane to work with a person whom PG, Ilya, Murati, and Musk have all designated a liar and just a general creep.
Defending him or the firm's actions instantly makes you look terrible, like you'll gladly accept the "elites vs. UBI recipients" future his vision propagates.
You work for a company that’s part of the Trump, Ellison, Kushner orbit of corruption.
Y’all are developing amazing technology. But accept reality and drop whatever sense of moral righteousness you’re carrying here. Not because some asshole on the internet says so, but for your own mental health.
There is a recent post about how one of the top OpenAI execs gave $25 million to a Trump PAC before publicly supporting Anthropic and before this deal was signed.
One company got characterized as a supply chain risk; so much for OpenAI getting the same treatment.
And even that being said, I could be wrong, but as I remember, OpenAI and every other company had basically accepted all uses, and it was only Anthropic that said no to these two demands.
And I think this whole scenario became public only because Anthropic refused; the deal could have been done sneakily if Anthropic had wanted.
So OpenAI taking the deal doesn't change the fact that, to me, it looks like they can always walk it back, and all the optics here are horrendous for OpenAI, so I am curious what you think.
The thing I am thinking, OTOH, is: why would OpenAI come out and say, "hey guys, yeah, we are gonna feed autonomous killing machines"? Of course they are going to try to keep it a secret right before their IPO. You are an employee and you mention walking out of OpenAI, but with the current optics it seems that you and other OpenAI employees are more willing to keep working because the evidence isn't out yet; to me, as others have pointed out, it looks like slowly boiling the water.
OpenAI gets to have its cake and eat it too, but I don't think there's a free lunch. I simply don't understand why the DoD would make such a big mess about Anthropic's terms being outrageous and then sign the same deal with the same terms with OpenAI, unless there's a catch. Only time will tell how wrong or right I am.
If I may ask, how transparent is OpenAI from an employee's perspective? Just out of curiosity: would you as an employee be informed if OpenAI's top leadership (Sam?) decided that the deal gets changed and the DoD gets to have autonomous killing machines? Would you as an employee, or we as the general public, get information about it if the deal is done through secret back doors? Snowden showed that a lot of secret court deals were not made available to the public until he blew the whistle, but not everything gets whistleblown, so I am genuinely curious to hear your thoughts.
Your response is a perfect encapsulation of "It is difficult to get a man to understand something when his salary depends upon his not understanding it."
I think it's wrong to ask someone to resign, but acting as if there is no issue here is debating in bad faith.
The comment perfectly exemplifies the kind of person that would work at OpenAI. Government AI drones could be executing citizens in the streets but they’d still find some sort of cope why it’s not a problem. They’ll keep moving the goalposts as long as the money keeps coming.
OpenAI employees put knives to their own necks to demand that Altman come back and be their boss [1], not too long ago, right? Altman wags his tongue and makes them a solid paycheck. "We will not be divided," unless the water boils slowly enough. Wait a few months; he will renegotiate the terms with the DoD, just like his move to turn OpenAI into a for-profit.
It's comforting to know that some of the brightest minds of our generation are going to work at OpenAI, then quitting a few months later horrified, only to post a short mysterious tweet warning everyone of the dangers ahead. So much for alignment and serving humanity.
And they will continue to work for Google / Meta et al to use novel AI techniques to sell us more and better ads, only to quit a few years later to do more soul searching where everything went wrong /s
They've been deleted. For obvious reasons. You want to take a stand, but you don't want to stop working for the people who do the things you object to. It's all so very American. "I'll put my name on it, but if it doesn't work, remove my name so I don't get into trouble, ok?" Home of the brave.
> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.
But they did.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that. In both cases, the two parties can claim to agree on the principles, but when push comes to shove, who decides on whether the principles are violated differs.
> The difference is that Anthropic wanted to reserve the right to judge when the red lines are crossed, while OpenAI will defer to the DoD and its policies for that.
It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those. And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.
Who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
Amodei is the type of person who thinks he can tell the US government what they can and can’t do.
And the US government should have precisely none of that, regardless of whether they’re red or blue.
> Amodei is the type of person who thinks he can tell the US government what they can and can’t do.
I don't think that's the case. Amodei is worried that AI is extraordinarily capable, and our current system of checks and balances is not adequate yet to set the proper constraints so the law is correctly enforced. Here's an excerpt from his statement [1]:
> Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.
Let's do this thought exercise: how long would it take you, using Claude Code, to write some code to crawl the internet and find all the postings of the HN user nandomrumber under all their names on various social media, and create a profile with the top 10 ways that user can be legally harassed? Of course, Claude would refuse to do this, because of its guardrails, but what if Claude didn't refuse?
And that’s where the authoritarian in you is shining through.
You see, Obama droned more combatants than anyone else before or after him, but he always followed a legal paper trail and went by the book (except perhaps in some cases; search for Anwar al-Awlaki).
One can argue whether the rules and laws (secret courts, proceedings, asymmetries in court processes that severely compress civil liberties… to the point they might violate other constitutional rights) are legitimate, but he operated within the limits of the law.
You folks just blurt “me ne frego” like a random Mussolini and think you’re being patriotic.
> Amodei is the type of person who thinks he can tell the US government what they can and can’t do.
> And the US government should have precisely none of that, regardless of whether they’re red or blue.
This is a pretty hot take. "You can't break the law and kill people or do mass surveillance with our technology." Fuck that, the government should break whatever laws and kill whomever they please.
I hope you A: aren't a U.S. citizen, and B: don't vote.
If I'm selling widgets to the government and come to find out they are using those widgets unconstitutionally and to violate my neighbors' rights, you can be damn sure I'm going to stop selling the gov my widgets. Amodei said that Anthropic was willing to step away if they and the government couldn't come to terms; instead of acting like adults and letting them, the government decided to double down on being the dumbest people in the room, act like toddlers, and throw a massive fit about the whole thing.
> It was pretty clear from Anthropic’s and Hegseth’s statements that they didn’t disagree on the two exclusions, but on who would be the arbiter on those.
No. Altman said human responsibility. Anthropic said human in the loop.
> And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.
I don’t understand your first comment. At that point, Altman’s tweet didn’t exist yet, and is immaterial to the reading of Anthropic’s and Hegseth’s statements.
To your second comment, it was clear enough to me to be the most plausible reading of the situation by far.
We state what we think the situation is all the time, without explicitly writing “I think the situation is…”.
Seems Anthropic did not understand the questions they were asked. From the WaPo:
>A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?
>It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.
>An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.
I have a hunch that Anthropic interpreted this question to be on the dimension of authority, when the Pentagon was very likely asking about capability, and they then followed up to clarify that for missile defense they would, I guess, allow an exception. I get the (at times overwhelming) skepticism that people have about these tools and this administration but this is not a reasonable position to hold, even if Anthropic held it accidentally because they initially misunderstood what they were being asked.
"It’s the kind of situation where technological might and speed could be critical to detection and counterstrike"
Missile detection and decision to make a (nuclear) counterstrike are 2 different things to me but apparently the department of war wants both, so it seems not "just" about missile detection.
Is there any reason at all to believe the account of the unnamed "defence official"? Whatever your position on this administration, you know that it lies like the rest of us breathe. With a denial from the other side and a lack of any actual evidence, why should I give it non-negligible credence?
It is bizarre. I like how "past performance predicts future performance" is supposed to apply to founders and companies but is completely disregarded for a two-term president and admin, as if we have no idea how they will operate in the future.
Anthropic, with its current war chest, is supposedly employing lawyers who are misunderstanding the Department of War? This is considered the likelier of the possibilities, am I understanding this correctly?
This is not what I said, and not what the WaPo quoted. We're talking about the CEO, who is shall we say unfamiliar with war making, getting asked a hypothetical about how the product he sells would perform in a first strike scenario, and he reportedly gives what is an entirely legalese answer. Yes, I consider this a likely possibility. It sounds exactly like how someone would respond if they've been swimming in legal memos for months.
> It sounds exactly like how someone would respond if they've been swimming in legal memos for months.
I think you're being highly speculative. The part you quoted from the WaPo doesn't even state that the defense official was complaining about any "legalese" response; that seems like a projection on your part. The only info you gave in your comment about what Dario said is a defense official's paraphrasing. It seems a simple case of Dario refusing to give a blank check in all scenarios, whereas the defense official, for maximum impact, chose to portray "not having a blank check" as "having to call Anthropic" in every case where "help" is given by an LLM. The appearance of "misunderstanding" you're seeing in the media is not about the parties' misunderstanding of what the other side wants; it's simply fallout from each side fighting to control the narrative.
That's a copout and you know it. You're focusing on the 'unnamed' part; I'm focusing on the 'representative of an administration that lies constantly and brazenly' part.
Noted Rationalist responds to a question about a first strike scenario with "I need to think about it" instead of "of course we'd launch the missiles, are you kidding?" and everyone here seems to think this is somehow unbelievable.
You're still dancing around the point. Person A said X; person B said not X; we have no concrete evidence either way. Person A is an anonymous representative of a group that has no norms against dishonesty, an obvious motive to falsely claim X, and a track record of telling frequent, shameless lies. X doesn't need to be 'unbelievable' for me to ask, again, what positive reason do you have for believing it?
Why the fuck would you use an LLM to determine whether a nuclear missile was hurtling towards you? The question makes no sense, and so you get a nonsensical answer.
Seems not unlikely that Anthropic was manipulated into this position for purposes of invalidating their contract.
> If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?
Are you serious? This is the kind of thing you'd ask a clarifying question on and get information back immediately. Further, the huge overreaction from Hegseth shows this is a fundamental disagreement.
The flip side of "Hegseth is an unqualified drunk", a position which I've always held and still maintain, is that he very well might crash out over nothing instead of asking clarifying questions or suggesting obvious compromises. This is the same guy who recalled the entire general staff to yell at them about the warrior mindset. Not an excuse for any of this, but I do think the precise nature of the badness matters.
> could the military use Anthropic’s Claude AI system to help shoot it down?
What a joke. I suggest folks read up on the very poor performance of US ICBM interceptor systems. They're barely a coin flip, even in ideal conditions. How is Claude going to help with that? Push the interceptor launch button faster? Maybe Claude can help design a better system, but it's not turning our existing poor systems into super-capable ones by simply adding AI.
I'm sure it's a matter of interpretation. Anthropic thinks the DoW's demands will lead to mass surveillance and auto-kill bots. The DoW probably disagrees with that interpretation, and all OpenAI needs to do is agree with the DoW.
My bet is that what the DoW wants is pretty clearly tied to mass surveillance and kill-bots. Altman is a snake.
Why do you choose to call it the "DoW"? Its official name is the Department of Defense; it was titled that way by Congress, and only Congress can change it. What is your motivation in using a term that the current administration has started to use? Do you also say "Gulf of America" when referring to the body of water that defines the southern edge of the USA?
Don't you think it is more to-the-point to call it what it is and what the people running it with, i'll bet everything i have, absolute immunity, are doing and intend to do with it?
It is "honest" in the historical sense, certainly.
But the executive-order-driven name change is just another bit of illegal/extra-legal/paralegal behavior by the administration that, every time we just nod along, eats away at the constitutional structure of our government. So don't go along with it.
Personally, as someone coming from a region that has suffered many times over by the actions of this so called "Ministry of Defense," I feel like "Department of War" is a more accurate and honest term.
As I've noted multiple times here on HN, I don't disagree with this.
But the question is not about whether it is a more accurate and honest term. It is about people complying in advance with the illegal/extra-legal renaming of a federal agency by a president who does not have the right or authority to do so.
If we were talking about Congress voting to rename the DoD as the DoW, I'd have nothing to say on the matter that differed from your observation.
It's the term used by Sam Altman in the announcement. Maybe aim your anger there, to someone knowingly helping them in their attempt to turn the department into one of aggression.
No, the Department of War is the former name of the Department of the Army and nothing else. DoD is a new creation that includes the Army, historic Department of the Navy, and the other, post-WW2, new services.
The president has no authority to do this. Federal departments and agencies are named by Congress, and even the Republicans in Congress have shown no interest in formalizing this.
Sure, no such law that I know of. But there's also no law that suggests that anybody else needs to refer to the Department of Defense using terms that the president and his minions just made up out of thin air. I'm also arguing that going along with them, by itself, is harmful to a democratic government.
Exactly this! Just like the Gulf of Mexico is still called the Gulf of Mexico: if we just ignore his ramblings and continue calling it the Department of Defense, we undermine his whole point. If we fall for all their crap and just accept it, then we lose in the end. Any resistance to a fascist government is good resistance. Anything that makes their lives a little shittier is good. Better that they go around having tantrums about how they renamed it while no one pays attention.
Anthropic has safeguards baked into the model; this is the only way to make sure it's harder for the DOJ to misuse it. A pinky swear from the DoD means nothing.
If your starting position is already that Sam Altman lies about everything that doesn't fit your preconceived positions, that doesn't seem like a very meaningful position to update from.
I think it is like a loyalty test to an authority above the law (executive immunity) in order to do business. “If we tell you to do so, you may do something you thought was right or wrong.” It is like an induction into a faction and the way the decisions could be made. Doesn’t necessarily mean anything about “in practice in the future”, just that the cybernetic override is there tacitly. If the authority thinks they can get away with something, they will provide protection for consequences too. Some people more equal than others when it comes to justice for all, etc. There are probably alternative styles for group decision making…
> I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this
Well some may voluntarily leave, some will be actively poached by Anthropic perhaps and some I suppose will stay in their jobs because leaving isn't an easy decision to make.
> some I suppose will stay in their jobs because leaving isn't an easy decision to make.
Anyone who chooses to stay shouldn’t have signed the letter. What’s the point of doing it if you’re not going to follow through? If you signed the letter and don’t leave after the demands aren’t met, you’re a liar and a coward and are actively harming every signatory of every future letter.
I think the problem might actually be with enforcing the red lines. The events of the last few weeks and this new deal only make sense if Anthropic was trying to find out how Palantir and the Pentagon had circumvented its restrictions, so that it could enforce them like a company actually concerned about the misuse of its product. OpenAI most likely came in with assurances that it wouldn't attempt to enforce its restrictions.
Yes, what is implied in this episode is that all the big companies that do AI development or provide computing for AI are now signing on to these very shady uses of their technologies.
>Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.
Have we been watching the same Trump admin for the last year? That sounds exactly like something this government would do: pointlessly throw a fit and end up signing a worse deal after blowing all its political capital.
While that thought crossed my mind, someone in a sub-thread of the parent comment made a point: OpenAI made a statement along the lines of "We insisted this not be used in those ways, and the DoD totally says they won't." Which sounds to me like they ceded any hard terms and conditions and are letting the DoD use it by "any lawful means", which is exactly what Anthropic wouldn't stand for.
Another plausible explanation, familiar to a lot of people in other countries, is banal corruption. Kick out one competitor on bogus allegations, then the next day invite another one in… what else could it be?
It was just a ruse to figure out who to fire. Either resign on your own terms or get fired. Companies and governments have only one loyalty: to themselves.
For all I know, Sam Altman orchestrated this via well-timed donations and whatever contacts he has in government; Trump specifically seems to have taken to the man.
So: using Anthropic's own words to cover a power play, or pulling relationships to see if they could get Anthropic to balk at it.
Money buddy, they never cared. They didn’t care when they went back on their safety and guidance boards, they didn’t care when they tried to push Altman out, and these employees won’t care when the first AI nuke launches. Money, money, money so they don’t think about it later. It’s the exact same reason Facebook employees have given us the other side of surveillance hell.
I would not discount how much of a factor irrational human emotions play in negotiations.
Dario is arrogant and pompous, so he probably wound Hegseth up the wrong way. Sam is much more charming and amenable, so he was more able to get his way despite similar terms.
It's about network effects. The biggest issue is that ChatGPT is a household name, like Google at this point. Everyone and their grandma knows it or is learning about it, while Claude is well known mainly in tech circles. Getting tech people to switch is relatively easy (ignoring enterprise contracts), but getting everyone else to switch is going to be very slow.
Honestly, the best thing that could happen is for someone to come up with a new UI (think claw...like) that everyone starts using instead. A very cute, well-integrated system that just works for everyone, has a free tier, and has something the others don't have.
>> All of us can act too. Stop using the OpenAI models. Stop using the app. Design in other models no matter what. Screw these guys.
> Do you expect that to work?
Many years ago Tim O'Reilly (of book-publishing fame) knew Apple would one day become really big, even though they were a small, niche player in the "PC" space at the time (2000s). How did he know? By seeing what the 'alpha geeks' were doing: the folks who not only used tech but were working at companies inventing the future. They were the ones whose friends and families asked them for advice. And the alpha geeks (at the time) were switching to Mac OS X and telling their friends and family about it.
There's a good chance that if you're on HN, you're the person in your non-techie social group that many others ask for advice. You can potentially sway many people by your example and your advice.
Nah. It's possible that the agreement still supports the required terms.
There is more to this story behind the scenes. The government wanted to show power and control over our companies and industries. They didn’t need those terms for any specific utility, they wanted to fight “woke” business that stood up to them.
Supposedly OpenAI had the same terms as Anthropic (according to SamA). Maybe they offered it cheaper and that’s why they agreed. Maybe it’s all the lobbying money from OpenAI that let the government look the other way. Maybe it’s all the PR announcements SamA and Trump do together.
"we put them into our agreement." is strange framing is Altman's tweet. Makes me think the agreement does mention the principles, but doesn't state them as binding rules DoD must follow.
I ascribe literally zero truth value to what Sam says. He will say whatever he needs to in order to get ahead. It is honestly irritating to me that you and many others here seem to implicitly assume his messages are correlated with truth, doing his social-engineering work for him, as if his word should adjust your priors even slightly.
I don't necessarily think he's lying, but there's so much obvious incentive for him to lie here (if only because his employees can save face).
He doesn't even need to be lying, the comment is vague and contains enough loopholes that it could be true yet meaningless. I explained some that I noticed here: https://news.ycombinator.com/item?id=47190163
And fired from YC for lying. And lied to investors about how many Loopt employees he had. And lied about having 100x the actual number of users when he sold it. And lied to employees about the Microsoft deal. And lied to his safety team.
It's this simple: Trump is a criminal. Larry Ellison is his pal. Sam Altman has a huge deal for cloud services from Oracle. Trump is using the DoD budget to backstop Ellison's business.
This is pretty much the right take, although it's much more than that. It's very clear at this point, especially the first conclusion, but people insist on looking the other way.
Attempting to kneecap the breakout front runner of the major American AI companies to ensure the shittier, politically compliant one wins in the short term? Gee I wonder.
For better or worse, outright nationalization of military related companies is common on a global scale. I plan to do my best to ensure this is a domestic catastrophe, and I hope we'll succeed, but I don't expect other countries to care much about varying levels of regime alignment between two billionaire American defense contractors.
Maybe Sam Altman said nicer things about Donald Trump. Maybe he promised that he would not revoke their API keys when Hegseth directs the military to seize ballots. Maybe he's jockeying for position to take over the government when AGI hits.
Ultimately, I don't know how much the specific reasons matter. Pete Hegseth must be removed from office, OpenAI must be destroyed for their betrayal of the US public, that's all there is to it.
> Trump’s son in law (Kushner) has most of his net worth wrapped up in OpenAI.
If true (too lazy to check but I honestly take your word for it), this should probably be bigger news. Not that the outright corruption when it comes to the highest position in the US Government constitutes news anymore, but because it puts the Government’s fight against Anthropic (and supposedly other potential OpenAI competitors) in a new light.
This reminds me of Ken Thompson’s speech on trusting trust. The recursive/meta nature of it all has helped me explain to those unfamiliar that this is such a waste of time. Education is where it’s at, but I’m preaching to the choir here on HN.
Trying to restrict the non-printed ICs you'd connect to your 3D printed parts would be even dumber. There's a zillion things that can slam out bits and control a stepper motor.
You can build a 3D printer out of general-purpose electronic bits; anything they tried to ban would send ripples into countless other industries that are completely unrelated to each other or to 3D printing.
By "general-purpose" I mean that there are no components that are 3D-printer specific; motor controllers and microcontrollers and voltage regulators and all the various jellybean parts. And even if there were any, they could easily be replaced with general-purpose components.
>The anxiety creeps in: What if they have removal? Should I really commit this early?
>However, anxiety kicks in: What if they have instant-speed removal or a combat trick?
It's also interesting that it doesn't seem to be able to understand why things are happening. It attacks with Gran-Gran (attacking taps the creature), which says, "Whenever Gran-Gran becomes tapped, draw a card, then discard a card." Its next thought is:
>Interesting — there's an "Ability" on the stack asking me to select a card to discard. This must be from one of the opponent's cards. Looking at their graveyard, they played Spider-Sense and Abandon Attachments. The Ability might be from something else or a triggered ability.
The anxiety is coming from the "worrier" personality. Players are a combination of a model version plus a small additional "personality" prompt; in this case (https://mage-bench.com/games/game_20260217_075450_g8/), "Worrier". That's why the player name is "Haiku Worrier". The personality is _supposed_ to just impact what it says in chat (not its internal reasoning), but I haven't been able to make small models consistently understand that distinction so far.
The Gran-Gran thing looks more like a bug in my harness code than a fundamental shortcoming of the LLM. Abilities-on-the-stack are at the top of my "things where the harness seems pretty janky and I need to investigate" list. Opus would probably be able to figure it out, though.
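For anyone curious what that chat/reasoning separation looks like, here is a minimal sketch of the pattern (hypothetical names; this is not mage-bench's actual code): the personality text is explicitly scoped to table talk in the system prompt, which is exactly the instruction small models tend to blur into their reasoning.

    # Hypothetical sketch of scoping a "personality" to chat only.
    # Names and structure are guesses, not mage-bench's actual internals.
    RULES_PROMPT = (
        "You are playing Magic: The Gathering. Decide your plays strictly "
        "from the rules and the visible game state."
    )

    def build_messages(personality, board_state):
        # The personality clause is explicitly limited to chat output.
        system = (
            RULES_PROMPT
            + "\n\nPersonality (applies ONLY to your chat messages, never "
            "to your private reasoning or play decisions): "
            + personality
        )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": board_state},
        ]

    # Example: the Worrier should still play optimally, but fret in chat.
    messages = build_messages(
        "Worrier: anxious, second-guessing tone.",
        "Your turn. Untapped: Gran-Gran. Opponent at 12 life.",
    )

Whether a small model actually respects the "ONLY" clause is the hard part, as the comment above describes.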
That's the best way to do it. Otherwise all the money will go to the rich brat children of politicians/etc who are socially connected to whoever they put on the selection committees.
Rich parents are masters of helping their children exploit the system in as many of the thousands of ways that exist as they can. A few hundred here, another hundred there, maybe some one-off thousands elsewhere.
In most of the world, rich people are rich because they are good at exploiting government funds. It's a lifestyle.
Mostly because the kind of people who run and advocate for programs like this are actively hostile to the idea of merit. Prioritizing talented people would be antithetical to them.
Prioritizing merit would be fine if there was some way to measure merit empirically, and if that measure couldn't be gamed by anybody with money and/or connections. But this is for artists, so...
And thinks that s/he's a winner and the stuff s/he enjoys is made by winners, and the stuff s/he doesn't like is made by losers. Merit, universal, objective = ME; Worthless, narcissistic, special interest = YOU.