Corporate bonds were simply not being bought, at any price. Same with commercial paper. Nobody knew what firms were going to still exist in a week so nobody was willing to lend any money at all.
> Nobody knew what firms were going to still exist in a week so nobody was willing to lend any money at all.
Perhaps I'm misunderstanding, but isn't this another way of saying it was too risky for people to invest? That seems to be the same concept as the quote you cited from the parent comment: "either the return wasn't commensurate to the risk".
I guess you could say that but the underlying problem was that the risk was entirely opaque so it couldn't actually be quantified and hedged against. The TARP loan ("shakedown" might be a better term honestly) gave financial firms time to sort out what their actual positions and exposure were; there wasn't time to let the market sort that out over months and at the cost of every major company (even non-financials) failing because of lack of access to credit.
Yeah, it seems there's a bit of asymmetry between a normal lender and the federal government here: as a normal lender, you might not be able to lend enough to guarantee the debtor survives. Also, what the government decides to do may significantly influence the lender's behavior. If the lender thinks there's a chance the government will step in with a bailout, they would probably prefer that and not make the loan.
Whereas the federal government can write a check for $633.6 billion and be much more certain the debtors will survive and pay it back.
So the government has negotiated from a position where the average taxpayer could be buying $10 worth of assets for $1 and have a go at managing it properly and creating some wealth, to a position where the taxpayer pays $1, the government buys the $10 in assets and gives it to some wealthy idiot, and there is a nominal return which at that time I imagine went into killing people in Iraq because Muslims, amirite? All those bombs cost a bomb.
And then we see 20 good years of economic prosperity where the US predictably got even wealthier than it previously was and there is great political stability and well-loved presidents like Mr Trump who represent the satisfaction US citizens feel for the economic highs they have reached!
What a fantastic deal for the average taxpayer. Let the confetti fall. Well done government, saved the day there.
Where it went was bailing out the automakers. It was a big story at the time and I'm starting to worry people just don't form long-term memories anymore.
> the US predictably got even wealthier than it previously was
If you just look at the economic indicators, then it did. Certainly way better than the "no intervention" counterfactual would have gone. People do not like it when all the ATMs stop working.
There is a lot of discourse to be had as to why people aren't feeling that personally.
> killing people in Iraq because Muslims, amirite? All those bombs cost a bomb.
Sadly there is/was massive bipartisan support for this bullshit. Including from the public. I note from a chronology perspective that most of the money in Iraq was spent/lost/wasted before 2008.
The problem is circular. The risk is that your counterparty goes bust. Therefore nobody wants to make any moves until they can be sure that (mostly) every other player is stable. But because no moves are happening, that in itself is destabilizing.
That is, the big risk is "what if the state doesn't intervene?"
Correspondingly, the state has a special move that only it can play, because "what if the state doesn't intervene" is not a risk to the state itself. The act of intervening makes the risk go away. That's part of the privilege of being the lender of last resort with the option to print currency.
(which is why this was a much more serious problem for Greece and Ireland, which as Eurozone members were constrained in their ability to even contemplate printing their way out of the problem!)
> It cost the taxpayers nothing (in fact it made us money)
I was surprised to learn that the "bailout" was in fact a loan that was repaid with interest for a "net profit of $121 billion" [1] rather than just giving the banks money. After learning this, I polled many people around me and few had understood the terms of the transaction. So I think there may be significant public misunderstanding there.
Even if people do understand it was a loan, there's an argument to be made that the money could have been spent in better ways (e.g. early education improvement, preventative healthcare etc. that also give long term returns in preventing crime and reducing healthcare costs). If you believe not giving the loans would have caused the total collapse of the economy and worsened all of those things (crime, healthcare, education etc.), then it seems a worthwhile investment. But not everyone may share that perspective.
> What part of that are people mad about, and why?
Another element of the controversy was the payment of $218 million of bonuses to the executives of AIG which was being bailed out and effectively run by the federal government [2]. Apparently the government allowed the bonuses because Geithner said there was no legal basis for voiding the bonus contracts.[3]
Some people think controversy over government mortgage relief spawned the Tea Party movement based on this speech by Rick Santelli [4] about his dissatisfaction with the government's bailing out the "losers" who couldn't afford their mortgages.
Some people also feel there could have been more regulation of the financial sector or breakup of big banks [5] or more stipulations attached to the loans.
Just some suggestions based on my understanding of the history.
It seems only the very simplified narrative actually sticks, especially when it is convenient for anti-establishment types to push it (and realistically, approximately no-one really _likes_ Wall Street). But I think it's important to consider that while the government probably didn't go as far as it could have, it did for the most part help keep the crisis from getting worse for those who were not responsible, while not doing much for the people working in finance, especially those it could nail for outright fraud.
A former NASA engineer with a PhD in space electronics who later worked at Google for 10 years wrote an article about why datacenters in space are very technically challenging:
I don't have any specialized knowledge of the physics but I saw an article suggesting the real reason for the push to build them in space is to hedge against political pushback preventing construction on Earth.
I can't find the original article but here is one about datacenter pushback:
But even if political pushback on Earth is the real reason, it still seems datacenters in space are extremely technically challenging/impossible to build.
The real reason is, Elon has SpaceX and xAI. He can create an illusion of synergy and orders-of-magnitude advancements to boost the market cap and pocket all the money. He realized a long time ago that you don't need to deliver to play the market cap game; in fact it's better if you are selling a story far in the future rather than something you can deliver now.
Ok, he delivered your Tesla and your Starlink, but so far he hasn't delivered your Robotaxi, your Optimus, your lunar lander, your space datacenter, etc. And the list keeps getting longer instead of shorter...
>Robotaxi, your Optimus, your lunar lander, your space datacenter etc. And the list keeps getting longer instead of shorter...
Let's go through these one by one:
[1] Robotaxi
Someone just drove coast to coast across the USA fully on Autopilot. I drive my Tesla every day, and I literally NEVER disengage Autopilot. It gets me to work and back home without fail, to the grocery store, to literally anywhere I need. What's not full self-driving about that? I got in two crashes before I got my Tesla because I was a dumb teen, but I'm sure my Tesla is a much better driver than my younger sister. Politically it's not FSD, but in reality, it has been for a while.
[2] Optimus
Optimus has gone through three revisions and has hand technology that is 5+ years ahead of the competition. Even if they launched it as a consumer product now, I'm sure a million people would buy it just as a cool toy/gadget. AKA a successful product.
[3] Lunar Lander
Starship, a fully reusable, 2-stage rocket that has gone through 25 revisions, is 95% flight proven, and has even deployed dummy Starlinks. 10+ years ahead of everyone except maybe Stoke.
[4] Space Datacenter
Have you ever used Starlink? They have all the pieces they need... Elon built a giant datacenter in 6 months when it usually takes 3-4 years. He has more compute than anybody, and Grok is the most intelligent AI by all the metrics outside Google's. Combine that with Starship, which can launch 10X the capacity for 10% of the cost, and what reason do you have to doubt him here?
Granted... it always takes him longer than he says, but he always eventually comes through.
Eventually comes through? Have you forgotten Hyperloop, the new Roadster, instant battery swaps, tunnels to replace all traffic, your car appreciating in value, your car being used as a robotaxi during downtime to make you money, Semi convoys, etc. etc.?
He does deliver (or at least a good proportion of it), if you want to use that as precedent for delivering on these promises, though. Especially for the larger, more extreme claims, and not just buying himself into an existing business.
His investors are quite happy with his success rate. He is constantly building new stuff. And as a consumer who has had great experience with every product I've bought, so am I
No one buys into Elon's firms because he's expecting dividends.
His investors are not investing because of his success rate in delivering on his promises. His investors are investing exclusively because they believe that stock they buy now will be worth more tomorrow. They all know that's most likely not because Elon delivers anything concrete (because he only does that in what, 20% of cases?), but because Elon rides the hype train harder tomorrow. But they don't care if it's hype or substance, as long as numbers go up.
Elon's investors are happy with his success rate only in terms of continuously generating hype. Which, I have to admit, he's been able to keep up longer now than I ever thought possible.
Theranos was also hyping a lot and trying to build some stuff. There is some threshold (to be decided where) past which something is more fraud than hype.
Also, these days the stock market doesn't have much relation to the real state of the economy; in many ways it's a casino.
Not sure who determines the threshold. He certainly goes to court more than your average person, but these are not startups; they are large companies under a lot of scrutiny. I don't think the comparison is valid.
>he certainly goes to court more than your average person
Yes, because he sues a lot of entities for silly things, such as some advertisers declining to buy ads that display next to pro-Hitler posts, or news outlets for posting unaltered screenshots of a social media site he acquired.
> The hype to substance ratio isn't quite as important as some choose to beleive
Musk's ratio is such that his utterances are completely free from actionable information. If he says something, it may or may not happen and even if it does happen the time frame (and cost) is unlikely to be correct.
I don't get why anyone would invest their money on this basis.
Some combination of the two, for sure. Doesn't mean that Musk can't keep doing it. However you describe it or define it, it's a proven strategy at this point. I'm not sure Larry knew how Musk would make him good on Twitter, but he knew enough about Musk to be confident it would happen.
I think this is why he gets away with it. A "win" is a product delivered years late for 3x the promised MSRP with 1/10th the expected sales. With wins like these, what would count as a loss?
He gets away with it for one reason only: he consistently delivers good returns on capital.
Most of Tesla's revenue derives from Model Y and FSD subs. I agree that Cybertruck was a marketing ploy. Don't think it was ever intended to be materially revenue generating.
Revenue has flatlined, but investors' confidence comes from Musk's track record for delivering good returns to investors. I think we can agree Musk succeeded in 2020 to 2025 in this regard. Whether you are confident he can do it again over next five years is the key question.
I'm personally more persuaded by the argument that Tesla is a meme-stock at this point - like much of crypto, it runs on "vibes", not solid fundamentals.
But even if share price is the metric for success, 33.6% over 5 years is like 6% compounded annually, which is okay I guess? [0]
What'd be the point of inflating market caps like this when it's obvious they'll crash the moment the owner tries to liquidate any of it before the promises are kept?
The story is that they have a person (or people?) who are REALLY good at managing him and shoving him through the SpaceX offices so that he thinks he's contributing, and out the back door before he has time to fuck anything up.
The product Elon has been most directly involved in is the Cybertruck which is a complete disaster. When talking about Elon you have to specify pre drug addict Elon and ketamine fried brain Elon. The latter makes very bad decisions.
Please stop posting these throwaway, sneering replies, no matter how bad the comment you're replying to. Just downvote it, and if you must comment, do so substantively.
We won’t even have a habitable structure in space once the ISS comes down; there is no world in which space datacenters are a thing in the next 10, I’d argue even 30, years. People really need to ground themselves in reality.
Edit: okay Tiangong - but that is not a data center.
We have 15,000 satellites in orbit that are almost literally the exact same premise currently being proposed: a computer with solar panels attached. We've been doing exactly this for decades.
> We don’t even have a habitable structure in space
Silicon is way more forgiving than biology. This isn’t an argument for this proposal. But there is no technical connection between humans in space and data centers other than launch-cost synergies.
Okay, but a human being represents what, 200 W of power? The ISS has a crew of 3, so that's less than a beefy single user AI workstation at full tilt. If the question is whether it's practical to put 1-2 kW worth of computing power in orbit, the answer is obviously yes, but somehow I don't think that's what's meant by "datacenter in space".
I don't know, 10 years seems reasonable for development. There's not that much new technology that needs to be developed. Cooling and communications would just require minor changes to existing designs. Other systems may be able to be lifted wholesale with minimal integration. I think if there were obstacles to building data centers on the ground then we might see them in orbit within the next ten years.
The same things you are saying about data centers in space were said by similar people 10-15 years ago when Elon Musk said SpaceX would have a man on Mars in 10-15 years.
We have had the tech to do it since the 90's, we just needed to invest into it.
Same thing with Elon Musk's Hyperloop, aka the atmospheric train (or vactrain), which has been an idea since 1799! And how far has Elon Musk's Boring Company come toward building even a test loop?
Yeah, in theory you could build a data center in space. But unless you have a background in the limitations that space engineering/design brings, you don't truly understand what you are saying. A single AI data center server rack takes up the same energy load as 0.3 to 1 International Space Stations. So saying Elon Musk can reasonably achieve this is wild to anyone who has done any engineering work with space-based tech. Every solar panel generates heat, the racks generate heat, the data communication system generates heat... Every kW of power generated and every kW of power consumed needs a radiator. And it's not like water cooling; you are trying to radiate heat off into a vacuum. That is a technical challenge, and the size, the tonnage to orbit needed to do this... let alone outside of low Earth orbit... It's a moonshot project for sure. And like I said above, Elon Musk hasn't really followed through with any of his moonshots.
> A single AI data center server rack takes up the same energy load as 0.3 to 1 International Space Stations.
The ISS is powered by eight Solar Array Wings. Each wing weighs about 1,050kg. The station also has two radiator wings with three radiator orbital replacement units weighing about 1,100kg each. That's about 15,000 kg total so if the ISS can power three racks, that's 5,000kg of payload per rack not including the rack or any other support structure, shielding, heat distribution like heat pipes, and so on.
Assuming a Falcon Heavy with 60,000 kg payload, that's 12 racks launched for about $100 million. That's basically tripling or quadrupling (at least) the cost of each rack, assuming that's the only extra cost and there's zero maintenance.
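To make that back-of-envelope explicit, here's a quick sketch; all figures are the assumptions above (ISS component masses, "3 racks per ISS power budget", $100 million per launch), not official specs:

```rust
fn main() {
    // All figures are the rough assumptions from the comment above, not official specs.
    let solar_wing_kg = 1_050.0;    // per ISS Solar Array Wing, 8 wings total
    let radiator_unit_kg = 1_100.0; // per radiator orbital replacement unit, 6 total
    let power_and_cooling_kg = 8.0 * solar_wing_kg + 6.0 * radiator_unit_kg; // ~15,000 kg

    let racks_per_iss_power_budget = 3.0; // assumption: ISS power ~ 3 AI racks
    let kg_per_rack = power_and_cooling_kg / racks_per_iss_power_budget;     // ~5,000 kg

    let falcon_heavy_payload_kg = 60_000.0; // expendable, to LEO
    let falcon_heavy_cost_usd = 100e6;      // assumed launch price
    let racks_per_launch = falcon_heavy_payload_kg / kg_per_rack;            // ~12
    let launch_cost_per_rack = falcon_heavy_cost_usd / racks_per_launch;     // ~$8.3M

    println!(
        "{kg_per_rack:.0} kg/rack, {racks_per_launch:.0} racks/launch, ${:.1}M launch cost per rack",
        launch_cost_per_rack / 1e6
    );
}
```

So under these assumptions you're paying roughly $8M in launch costs per rack before counting the rack itself, support structure, shielding, or any maintenance.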
Falcon Heavy does not cost 100M when launching 60 metric tons.
At 60 metric tons, you're expending all cores and only getting to LEO. These probably shouldn't be in LEO because they don't need to be and you probably don't want to be expending cores for these launches if you care about cost.
The real problem typically isn't weight, it's volume. Can you fit all of that in that fairing? It's only 13m long by 5m in diameter...
His time estimates are notoriously, um, aggressive. But I think that's part of how his companies are able to accomplish so much. And they do, even if you're upset they haven't put a human on Mars fast enough or built one of his side quests.
"We specialize in making the impossible merely late"
I note that their accomplishments tend to be in the past, prior to his Twitter addiction absorbing his attention. Tesla is a solid decade late on FSD, cutting models, and losing market share rapidly thanks to his influencer stunts. SpaceX has a solid government launch business, which is great, but they’ve been struggling with what’s been the next big thing for a while and none of that talk about Mars has made meaningful progress. Boring Company, Neuralink, etc. show no signs of profit anytime soon no matter how cool they sound.
Being ambitious is good to an extent but you need to be able to deliver to keep a company healthy. Right now, if you’re a sharp engineer you are looking at Tesla’s competition if you want to work on a project which doesn’t get cancelled (like its cars), and the stock price being hyped to the moon means that options aren’t going to be as competitive.
> Cooling and communications would just require minor changes to existing designs.
"Minor" cooling changes, for a radically different operating environment that does not even have a temperature, is a perfect insulator for conduction and convection, and will actively heat things up via incoming radiation? "Minor" ? Citation very much lacking.
Then you picked the wrong thread to insert yourself, it's literally about that.
Which is funny, there are multiple other replies to you, explaining at length that while your ideas are physically possible, they are completely impractical. And yet you think they still could be "minor".
People always make this claim about world hunger elimination with no sources. Keep in mind we make more than enough calories to feed everyone on the planet many times over, it's a problem of distribution, of getting the food to the right areas and continuing cultivation for self sufficiency.
Even the most magnanimous allocators cannot defeat the realities of boots on the ground in terms of distribution. It is a very difficult problem that cannot be solved top down, the only solution we've seen is growth of economic activity via capitalistic means, lifting millions, billions out of poverty as Asia has done in the last century for example.
I argue that if you have literal hundreds of billions of hard cash to burn for stupid things like AI datacenters, you could afford to make the lives of millions of starving people not suck instead, pretty easily so. But to do that, you'd have to try, and that would mean actually doing something good for humanity. Can't have that as a billionaire.
Who has hundreds of billions of hard cash for data centers? All of the AI spending has been in IOUs between Nvidia, OpenAI, Coreweave, etc. And even if you did have hard cash, how will you spend those billions? No one actually seems to have a sound plan, like I said. They just claim it can be done.
> SPIEGEL: Mr. Shikwati, the G8 summit at Gleneagles is about to beef up the development aid for Africa…
> [Kenyan Economist] Shikwati: … for God’s sake, please just stop.
> SPIEGEL: Stop? The industrialized nations of the West want to eliminate hunger and poverty.
> Shikwati: Such intentions have been damaging our continent for the past 40 years. If the industrial nations really want to help the Africans, they should finally terminate this awful aid. The countries that have collected the most development aid are also the ones that are in the worst shape. Despite the billions that have poured in to Africa, the continent remains poor.
It’s somewhat ironic that the way it has been framed here is as lacking in nuanced understanding as the style of aid which Shikwati argued against in the full interview. Unsurprising we should get a snippet cropped by a right wing libertarian think-tank in such a way that it boils down to simply “hurr aid bad”.
If you're hellbent on arguing with a cult, it will be much cheaper to go down to your local Church of Scientology and try to convince them that their e-meter doesn't work.
As if company performance actually affected stock price when it comes to anything Elon Musk touches.
For fuck's sake, TSLA has a P/E of a whopping *392*. There is zero justification for how overvalued that stock is. In a sane world, I should be able to short it and 10x my money, but people are buying into Musk's hype on FSD, Robotaxi, and whatever the hell robot they're making. Even if you expected them to be successes, they'd need to 20x the company's entire revenue to justify the current market cap.
It's much easier to find a country or jurisdiction that doesn't care about a bunch of data centers vs launching them into space.
I don't get why we aren't building mixed use buildings, maybe the first floor can be retail and restaurants, the next two floors can be data centers, and then above that apartments.
I think data centers, in the areas where they are most relevant (cold climates), are going to face an uphill battle in the near future.
Where I live, Norway, we've seen that:
1) The data centers don't generate the numbers of jobs they promise. Sure, during building phase, they do generate a lot of business, but during operations and maintenance phase, not so much. Typically these companies will promise hundreds of long-term jobs, while in reality that number is only a fraction.
2) They are extremely power hungry, to the point where households can expect to see their utility bill go up a non-trivial amount. That's for a single data center. In the colder climate areas where data centers are being promoted, power infrastructure might not be able to handle the centers (something seen in northern Norway, for example) at a larger scale, due to decades of stagnation.
3) The environmental effects have come more under scrutiny. And, unfortunately for the companies owning data centers, pretty much all cold-climate western countries have stringent environmental laws.
Data centers don't do anything other than sit there and turn electricity into heat. They emit nothing but heat (which could be useful to others in the building).
In America they have "temporary" jet turbines parked next to them burning gas inefficiently with limited oversight on pollution and noise because they are "temporary".
Mixed-use buildings with restaurants on the lower floors and residential on the upper floors are very common. Not sure what prisons have to do with anything.
The cost per square foot goes up as you add more floors. Construction goes multi-story to save space where land is expensive. But data centers don't need to be in places where land is expensive.
> I don't get why we aren't building mixed use buildings, maybe the first floor can be retail and restaurants, the next two floors can be data centers, and then above that apartments.
I mean a DC needs a lot of infrastructure and space. I think the real estate economics in places where people want to live, shop, and eat preclude the kinds of land usage common in DC design. Keep in mind that most DCs are actually like 4 or 5 datahalls tethered together with massive fiber optic networks.
Also people prefer to build parking in those levels that you're proposing to put DCs into.
> A former NASA engineer with a PhD in space electronics who later worked at Google for 10 years wrote an article about why datacenters in space are very technically challenging
It's curious that we live in a world in which I think the majority of people somehow think this ISN'T complicated.
Like, have we long since reached the point where technology is suitably advanced to average people that it seems like magic, where people can almost literally propose companies that just "conjure magic" and the average person thinks that's reasonable?
I can put things in a box that uses spooky electromagnetic waves to tickle water molecules to the point that they get hot and maybe boil off, given the chance? Sounds like magic to me
I was skeptical at first for much the same reason the author of that first article is; there are a lot of obstacles. But the more I think about it the less daunting those obstacles seem.
The author uses the power capacity of the ISS's solar panels as a point of comparison, but SpaceX has already successfully deployed many times that capacity in Starlink satellites[1] without even needing to use Starship, and obviously the heat dissipation problem for those satellites has already been solved so there's little point in hand-wringing about that.
The author also worries about ground communication bandwidth, claiming it is "difficult to get much more than about 1Gbps reliably", which seems completely ignorant of the fact that Starlink already has a capacity much greater than that.
The only unsolved technical challenge I see in that article is radiation tolerance. It's unclear how big of a problem that will actually be in practice. But SpaceX probably has more experience with that than anyone other than perhaps NASA so if they think it can be done I don't see much reason to doubt them.
Ultimately I think this is doable from a technical perspective, it's just a question of whether it will be economical. Traditional wisdom would say no even just due to launch costs, but if SpaceX can get Starship working reliably that could alter the equation a lot. We'll see. This could turn out to be a boondoggle, or it could be the next Starlink. The prospect of 24/7 solar power with no need for battery storage or ground infrastructure does seem tempting.
> The author uses the power capacity of the ISS's solar panels as a point of comparison, but SpaceX has already successfully deployed many times that capacity in Starlink satellites[1] without even needing to use Starship,
Your link here isn't really a fair comparison, and also you're still short a factor of 10x. Starlink has deployed 50x the ISS's solar cap across its entire fleet (admittedly 3 years ago); the author's calcs are 500x the ISS for one datacenter.
> and obviously the heat dissipation problem for those satellites has already been solved so there's little point in hand-wringing about that.
This reasoning doesn't make any sense to me, the heat dissipation issues seem very much unresolved. A single Starlink satellite is using power in the order of watts, a datacenter is hitting like O(1/10) of gigawatts. The heat dissipation problem is literally orders of magnitude more difficult for each DC than for their current fleet. This is like saying that your gaming PC will never overheat because NetGear already solved heat dissipation in their routers.
> The author also worries about ground communication bandwidth, claiming it is "difficult to get much more than about 1Gbps reliably", which seems completely ignorant of the fact that Starlink already has a capacity much greater than that.
Don't their current satellites have like 100Gbps capacity max? Do you have any idea how many 100Gbps routers go into connecting a single datacenter to the WAN? Or to each other (since intrahall model training is table stakes these days). They have at most like O(1)Pbps across their entire fleet (based on O(10K) satellites deployed and assuming they have no failover protection). They would need to entirely abandon their consumer base and use their entire fleet to support up/down + interconnections for just 2 or 3 datacenters. They would basically need to redeploy a sizeable chunk of their entire fleet every time they launched a DC.
> Starlink has deployed 50x the ISS's solar cap across its entire fleet (admittedly 3 years ago); the author's calcs are 500x the ISS for one datacenter.
So 3 years ago they managed to get to 10% of the power budget of one data center by accident, using satellites not explicitly designed for that purpose, using a partially reusable launch platform with 1/10th the payload capacity of Starship. My point is they've already demonstrated they can do this at the scale that's needed.
> A single Starlink satellite is using power in the order of watts
Then why does each satellite have a 6 kW solar array? Re-read that post I linked; the analysis is pretty thorough.
> Don't their current satellites have like 100Gbps capacity max?
Gen 3 is reportedly up to 1 Tbps ground link capacity, for one satellite.[1] There will be thousands.
> Do you have any idea how many 100Gbps routers go into connecting a single datacenter to the WAN? Or to each other (since intrahall model training is table stakes these days).
Inter-satellite connections use the laser links and would not consume any ground link capacity.
You're also ignoring that this is explicitly being pitched as a solution for compute-heavy workloads (AI training and inference) not bandwidth-heavy workloads.
> So 3 years ago they managed to get to 10% of the power budget of one data center by accident, using satellites not explicitly designed for that purpose, using a partially reusable launch platform with 1/10th the payload capacity of Starship. My point is they've already demonstrated they can do this at scale.
How was it by accident? You make it sound like it was easy rather than a total revolution of the space industry? To achieve 1/10th of what they would need for a single DC (and most industry leaders have 5 or 6)? Demonstrating they could generate power at DC scale would be actually standing up a gigawatt of orbital power generation, IMO. And again, this is across thousands of units. They either have to build this capacity all in for a single DC, or somehow consolidate the power from thousands of satellites.
> Then why does each satellite have a 6 kW solar array? Re-read that post I linked; the analysis is pretty thorough.
You're right, my bad. So they're only short like 6 orders of magnitude instead of 9? Still seems massively disingenuous to conclude that they've solved the heat transfer issue.
> Gen 3 is reportedly up to 1 Tbps ground link capacity, for one satellite.[1] There will be thousands.
Okay I'll concede this one, they could probably get the data up and down. What's the latency like?
I say by accident because high power capacity wasn't a design goal of Starlink, merely a side effect of deploying a communications network.
> My bad. So they're only short like 6 orders of magnitude instead of 9?
No, they're 1 order of magnitude off. (22 MW total capacity of the constellation vs your bar of 100 MW for a single DC.) Again, 3 years ago, using an inferior launch platform, without that even being a design goal.
> What's the latency like?
Starlink latency is quite good, about 30ms round trip for real-world customers on the ground connecting through the constellation to another site on the ground. Sun synchronous orbit would add another ms or two for speed of light delay.
AFAIK nobody outside SpaceX has metrics on inter-satellite latency using the laser links, but I have no reason to think it would be materially worse than a direct fiber connection provided the satellites aren't spread out too far. (Starlink sats are very spread out, but you obviously wouldn't do that for a data center.)
> No, they're 1 order of magnitude off. (22 MW total capacity of the constellation vs your bar of 100 MW for a single DC.)
Why on earth would you compare their entire fleet to one project? Power generation trivially parallelizes only if you can transmit power between generation sites. Unless they've figured out how to beam power between satellites, the appropriate comparison is 6 kW to 100 MW. And again, generation is the easy side; the heat dissipation absolutely does not parallelize, so that also needs to scale by 3-5 orders of magnitude.
And also: radiation. Terrestrial GPUs are going to be substantially more power and heat efficient than space-based ones (as outlined in TFA). All this for what benefits? An additional 1.4x boost in solar power availability? There's simply no way the unit economics of this work out. Satellite communications have fundamental advantages over terrestrial networks if you can get the launch economics right. Orbital DCs have only the solar availability thing; everything else is cheaper and easier on land.
Why wouldn't you compare to the entire fleet? You think they're going to deploy an entire data center in one sat? That'd be as dumb as trying to deploy an entire data center in one rack back on Earth. Of course if you frame the problem that way it seems impossible.
I already gave my thoughts on radiation and economics in my original comment. I agree those could be significant challenges, but ones SpaceX has a plausible path to solving. Starship in particular will be key on the economic side; I find it very unlikely they'll be able to make the math work with just Falcon 9. Even with Starship it might not work out.
And it's not just a 1.4x boost in solar power availability. You also eliminate the need for batteries to maintain power during the night or cloudy days (or cloudy weeks), and the need for ground infrastructure (land, permitting, buildings, fire suppression systems, parking lots, physical security, utility hook-up, etc).
> It(Solar) works, but it isn't somehow magically better than installing solar panels on the ground
Umm, if this is the point, I don't know whether to take the rest of the author's arguments seriously. Solar only works during certain times of the day and certain periods of the year on land.
Also, there are very limited calculations behind the numbers in the article, while the article throws numbers around left and right.
Nice article, the first one. I hope they try it, burn many billions of cash, and then fail. I also hope they don't spread radioactive material across the whole atmosphere when failing, though.
No, rockets landing themselves is just controlling the mechanism you use to have them take off, and builds on thrust vectoring technology from 1970s jet fighters based on sound physics.
Figuring out how to radiate a lot of waste heat into a vacuum is fighting physics. Ordinarily we use a void on earth as a very effective _insulator_ to keep our hot drinks hot.
This is a classic case of listing all the problems but none of the benefits. If you had horses and someone told you they had a Tesla, you'd be complaining that a Tesla requires you to dig minerals where a horse can just be born!
It's a matter of deploying it for cheaper or with fewer downsides than what can be done on earth. Launching things to space is expensive even with reusable rockets, and a single server blade would need a lot of accompanying tech to power it, cool it, and connect to other satellites and earth.
Right now the only upsides of an expensive satellite acting as a server node would be physical security and avoiding various local environmental laws and effects.
Lower latency is a major one. And not having to buy land and water to power/cool it. Both are fairly limited as far as resources go, and gets exponentially expensive with competition.
The major downside is, of course, cost. In my opinion, this has never really stopped humans from building and scaling up things until the economies of scale work out.
> connect to other satellites and earth
If only there was a large number of satellites in low earth orbit and a company with expertise building these ;)
> And not having to buy land and water to power/cool it.
It's interesting that you bring that up as a benefit. If waterless cooling (i.e. a closed cooling system) works in space, wouldn't it work even better on Earth?
You need to understand more of basic physics and thermodynamics. Fighting thermodynamics is a losing race by every measure of what we understand of the physical world.
From what I understand: very, very large radiators every few racks. Almost as many solar panels every few racks. Radiation shielding to avoid transient errors or damage to the hardware. Then some form of propulsion for orbital corrections, I suppose. Then hauling all of this stuff to space (on a high orbit, otherwise they'd be in shade at night), where no maintenance whatsoever is possible. Then watching your hardware progressively fail and/or become obsolete every few years and having to rebuild everything from scratch again.
The difference is that it was mostly clueless people like Thunderf00t who said it was impossible, who nobody took seriously. I don’t remember that basically all relevant experts claimed it was near impossible with current technology. That’s the situation now.
There’s also a fairly clear distinction between the first plans Elon laid out for Tesla and SpaceX and how insane his plans have become now. He has clearly become a megalomaniac.
Funnily enough, some of the things people said about Tesla are coming true, because Elon simply got bored of making cars. It’s now plausible that Tesla may die as a car company, which I would not have imagined a few years ago. They’re arguably not even winning the self-driving and robotics race.
No, people made fun of Elon for years because he kept attempting it unsafely, skirting regulations and rules, and failing repeatedly in very public ways.
The idea itself was proven by NASA with the DC-X, but the project was canceled due to funding. Now instead of having NASA run it, we pay SpaceX more than we'd ever have paid NASA for the same thing.
SpaceX is heavily subsidized and has extremely lucrative contracts with the US government. Not to mention they get to rely on the public research NASA produces.
He also said he could save the US a trillion dollars per year with DOGE, and basically just caused a lot of data exfiltration and killed hundreds of thousands of people, without saving any money at all.
Not to be crass, but as much as I dislike Musk US taxpayers are not responsible for the lives of children half a world away. Why is the US the only country held to this standard? No one ever complains that Turkey is killing thousands of children by not funding healthcare initiatives in Africa.
It is our money and we're not obligated to give it away if we think it's needed for something else. I'd note though, that in terms of the budget, USAID was like change in the couch cushions and nothing else in the world was even close in terms of lives saved per dollar. Why the man tasked with saving the government trillions of dollars went there at all was nonsensical to begin with.
Nevertheless, it is fully within our rights to pull back aid if we (collectively) decide it's the best thing to do. But the only legal way to do that is through the democratic process. Elected legislators can take up the issue, have their debates, and vote.
If congress had canceled these programs through the democratic process, there almost certainly would've been a gradual draw down. Notice and time would be given for other organizations to step in and provide continuity where they could.
And since our aid programs had been so reliable and trusted, in many cases they became a logistics backbone for all sorts of other aid programs and charities. Shutting it all down so abruptly caused widespread disruption far beyond our own aid programs. Food rotting in warehouses as people starved. Medications sitting in warehouses while people who needed them urgently died. The absolute waste of life and resources caused by the sudden disruption of the aid is a true atrocity.
Neither Elon nor Trump had the legal authority to unilaterally destroy those programs outside of the democratic process the way they did, so they are most directly morally responsible for the resulting deaths.
To add insult-to-injury, Elon was all over twitter justifying all of it with utterly deranged, insane conspiracy theories. He was either lying cynically or is so far gone mentally that he believed them. I'm not sure which is worse.
Currently SpaceX have managed to land the booster only, not the rocket itself, if you are thinking about Starship. And reusability of said rocket is also missing (collecting blown up pieces from the bottom of the ocean doesn't count!).
This is my second attempt learning Rust and I have found that LLMs are a game-changer. They are really good at proposing ways to deal with borrow-checker problems that are very difficult to diagnose as a Rust beginner.
In particular, an error on one line may force you to change a large part of your code. As a beginner this can be intimidating ("do I really need to change everything that uses this struct to use a borrow instead of ownership? will that cause errors elsewhere?") and I found that induced analysis paralysis in me. Talking to an LLM about my options gave me the confidence to do a big change.
n_u's point about LLMs as mentors for Rust's borrow checker matches my experience. The error messages are famously helpful, but sometimes you need someone to explain the why.
I've noticed the same pattern learning other things. Having an on-demand tutor that can see your exact code changes the learning curve. You still have to do the work, but you get unstuck faster.
Strongly agreed. Or ask it to explain the implications of using different ownership models. I love to ask it for options, to play out what-if scenarios. It's been incredibly helpful for learning Rust.
>In particular, an error on one line may force you to change a large part of your code.
There's a simple trick to avoid that: use `.clone()` more and use fewer references.
In C++ you would be probably copying around even more data unnecessarily before optimization. In Rust everything is move by default. A few clones here and there can obviate the need to think about lifetimes everywhere and put you roughly on par with normal C++.
You can still optimize later when you solved the problem.
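A minimal sketch of that tradeoff (the types and names here are made up for illustration):

```rust
#[derive(Clone, Debug)]
struct Config {
    name: String,
    retries: u32,
}

// Takes ownership; with borrows you'd write `fn consume(cfg: &Config)` and then
// have to thread `&Config` (and possibly lifetimes) through everything that calls it.
fn consume(cfg: Config) -> String {
    format!("{} ({} retries)", cfg.name, cfg.retries)
}

fn main() {
    let cfg = Config { name: "worker".into(), retries: 3 };

    // Cloning hands `consume` its own copy, so the original stays usable
    // and no signatures elsewhere have to change.
    let summary = consume(cfg.clone());
    println!("{summary}");
    println!("still own the original: {cfg:?}");
}
```

Once it works, you can go back and replace the clones on hot paths with borrows, which is exactly the "optimize later" step mentioned above.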
I am old but C is similarly improved by LLM. Build system, boilerplate, syscalls, potential memory leaks. It will be OK when the Linux graybeards die because new people can come up to speed much more quickly
The thing is LLM-assisted C is still memory unsafe and almost certainly has undefined behaviour; the LLM might catch some low hanging fruit memory problems but you can never be confident that it's caught them all. So it doesn't really leave you any better off in the ways that matter.
I don't see why it shouldn't be even more automated than that, with LLM ideas tested automatically by differential testing of components against the previous implementation.
Defining tests that test for the right things requires an understanding of the problem space, just as writing the code yourself in the first place does. It's a catch-22. Using LLMs in that context would be pointless (unless you're writing short-lived one-off garbage on purpose).
I.e. the parent is speaking in the context of learning, not in the context of producing something that appears to work.
I'm not sure that's true. Bombarding code with huge numbers of randomly generated tests can be highly effective, especially if the tests are curated by examining coverage (and perhaps mutation kills) in the original code.
Right, that method is pretty good at finding unintentional behavior changes in a refactor. It is not very well suited for showing that the program is correct which is probably what your parent meant.
That doesn't seem like the same problem at all. The problem here was reimplementing the program in another language, not doing that while at the same time identifying bugs in it.
Conversion of one program to another while preserving behavior is a problem much dumber programs (like compilers) solve all the time.
> I don't see why it *shouldn't* be even more automated
In my particular case, I'm learning so having an LLM write the whole thing for me defeats the point. The LLM is a very patient (and sometimes unreliable) mentor.
I think the author is significantly underestimating the technical difficulty of achieving full self-driving cars that are at least as safe and reliable as Waymo. The author claims there will be "26 of the basically identical [self-driving car] companies".
If you recall, there was an explosion of self-driving car efforts from startups and incumbents alike 7ish years ago. Many of them failed to deliver or were shut down. [1][2][3]
Article about the difficulty of self-driving from the perspective of a failed startup[3].
Waymo came out of the Google-self driving car project which came from Sebastian Thrun's entry in 2005 Darpa challenge, so they've been working on this for more than 20 years. [4][5]
But that is the author's point. I don't see many of the same alternatives years later.
They have either shut down, got acquired or were sold off and then shutdown. Even Uber and Lyft had their own self-driving programs and both of them shut theirs down. Cruise was recently taken off the streets and not much has been done with them.
The only ones that have been around for more than 7 years are Comma.ai (which the author, geohot, still owns), Waymo, Tesla, and Zoox; but Zoox ran out of money and is now owned by Amazon.
As I understand, Comma.ai is focused on driver-assistance and not fully autonomous self-driving.
The features listed on the wikipedia are lane-centering, cruise-control, driver monitoring, and assisted lane change.[1]
The article I linked to from Starsky addresses how the first 90% is much easier than the last 10% and even cites "The S-Curve here is why Comma.ai, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team."
To give an example of the difficulty of the last 10%: I saw an engineer from Waymo give a talk about how they had a whole team dedicated to detecting emergency vehicle sirens and acting appropriately. Both false positives and false negatives could be catastrophic so they didn't have a lot of margin for error.
Speaking as a user of Openpilot / Comma device, it is exactly what the Wikipedia article described. In other words, it's a level 2 ADAS.
My point was, he had a more than naive / "pedestrian-level" (pun intended?) understanding of the problem domain, as he worked on the Comma.ai project for quite some time; and even so, the device is only capable of solving maybe about 40% of the autonomous driving problem.
The last photo appears to show the view out the author's office in Fort Mason. Didn't know they had offices there, that's quite a nice view of the Bay.
Cool! I'd love to know a bit more about the replication setup. I'm guessing they are doing async replication.
> We added nearly 50 read replicas, while keeping replication lag near zero
I wonder what those replication lag numbers are exactly and how they deal with stragglers. It seems likely that at any given moment at least one of the 50 read replicas may be lagging because of a CPU/memory usage spike. Then presumably that would slow down the primary, since it has to wait for the TCP acks before sending more of the WAL.
If you use streaming replication (ie. WAL shipping over the replication connection), a single replica getting really far behind can eventually cause the primary to block writes. Some time back I commented on the behaviour: https://news.ycombinator.com/item?id=45758543
You could use asynchronous WAL shipping, where the WAL files are uploaded to an object store (S3 / Azure Blob) and the streaming connections are only used to signal the position of the WAL head to the replicas. The replicas will then fetch the WAL files from the object store and replay them independently. This is what wal-g does, for a real-life example.
The tradeoffs when using that mechanism are pretty funky, though. For one, the strategy imposes a hard lower bound to replication delay because even the happy path is now "primary writes WAL file; primary updates WAL head position; primary uploads WAL file to object store; replica downloads WAL file from object store; replica replays WAL file". In case of unhappy write bursts the delay can go up significantly. You are also subject to any object store and/or API rate limits. The setup makes replication delays slightly more complex to monitor for, but for a competent engineering team that shouldn't be an issue.
But it is rather hilarious (in retrospect only) when an object store performance degradation takes all your replicas effectively offline and the readers fail over to getting their up-to-date data from the single primary.
There is no backpressure from replication and streaming replication is asynchronous by default. Replicas can ask the primary to hold back garbage collection (off by default), which will eventually cause a slow down, but not blocking. Lagging replicas can also ask the primary to hold onto WAL needed to catch up (again, off by default), which will eventually cause disk to fill up, which I guess is blocking if you squint hard enough. Both will take considerable amount of time and are easily averted by monitoring and kicking out unhealthy replicas.
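For concreteness, the knobs being alluded to look roughly like this; a sketch only, assuming a reasonably recent Postgres (13+):

```ini
# On the standby: send feedback so the primary delays vacuum cleanup that would
# conflict with long-running replica queries (default: off).
hot_standby_feedback = on

# On the standby: how long WAL replay may pause to let conflicting queries finish
# before cancelling them (default: 30s).
max_standby_streaming_delay = 30s

# On the primary: cap how much WAL a lagging replica's replication slot may pin,
# so a dead replica can't eventually fill the disk (default: -1, i.e. unlimited).
max_slot_wal_keep_size = 100GB
```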
> If you use streaming replication (ie. WAL shipping over the replication connection), a single replica getting really far behind can eventually cause the primary to block writes. Some time back I commented on the behaviour: https://news.ycombinator.com/item?id=45758543
I'd like to know more, since I don't understand how this could happen. When you say "block", what do you mean exactly?
I have to run part of this by guesswork, because it's based on what I could observe at the time. Never had the courage to dive in to the actual postgres source code, but my educated guess is that it's a side effect of the MVCC model.
Combination of: streaming replication; long-running reads on a replica; lots[þ] of writes to the primary. While the read on the replica is going, it will generate a temporary table under the hood (because the read "holds the table open by point in time"). Something in this scenario leaked the state from replica to primary, because after several hours the primary would error out, and the logs showed that it failed to write because the old table was held in place on the replica and the two tables had deviated too far apart in time / versions.
It has seared itself into my memory because the thing just did not make any sense, and even figuring out WHY the writes had stopped at the primary took quite a bit of digging. I do remember that when the read at the replica was forcefully terminated, the primary was eventually released.
þ: The ballpark would have been tens of millions of rows.
What you are describing here does not match how Postgres works. A read on the replica does not generate temporary tables, nor can anything on the replica create locks on the primary. The only two things a replica can do are hold back transaction log removal and the vacuum cleanup horizon. I think you may have misdiagnosed your problem.
Theoretically yes, but the method that is currently implemented (Hartree-Fock) is notoriously inaccurate for molecular interactions. For example, it does not predict the van der Waals force between water molecules.
I’m planning to add support for an alternative method called density functional theory which gives better results for molecular interaction.
In quantum chemistry, you decide where the bonds should be drawn. Internally, it's all an electron density field. So yes, you can model chemical reactions, for example by constraining the distance between two atoms, and letting everything else reach an equilibrium.
> wrap a small number of third-party ChatGPT/Perplexity/Google AIO/etc scraping APIs
Can you explain a little bit how this works? I'm guessing the third-parties query ChatGPT etc. with queries related to your product and report how often your product appears? How do they produce a distribution of queries that is close to the distribution of real user queries?
1) How third parties query your product:
For ChatGPT specifically, they open a headless browser, ask a question, and capture the results, like the response and any citations. From there, they extract entities from the response. During onboarding I’m asked who my competitors are, and those are what get recognized via the entities in the response. For example, if the query is “what are the best running shoes” and the response is something like “Nike is good, Adidas is okay, and On is expensive,” and my company is On, then using my list of competitors, entity recognition is used to see which ones appear in the response and in which order.
If this weren’t automated, the process would look like this: someone manually reviews each response, pulls out the companies mentioned and their order, and then presents that information.
2) Distribution of queries
This is a bit of a dirty secret in the industry (intentional or not): usually what happens is you want to take snapshots and measure them over time to get a distribution. However, a lot of tools will run a query once across different AI systems, take the results, and call it done.
Obviously, that isn’t very representative. If you search “best running shoes,” there are many possible answers, and different companies behave differently. What better tools like Profound do is run the same prompt multiple times. From my estimates, Profound runs it up to 8 times. This gives a broader snapshot of what tends to show up every day. You then aggregate those snapshots over time to approximate a distribution.
As a side note: you might argue that running a prompt 8 times isn’t statistically significant, and that’s partially true. However, LLMs tend to regress toward the mean and surface common answers over repeated runs, and we found 8 times to be a good indicator. The level of completeness depends on the prompt (e.g. "what should I have for dinner" vs "what is good accounting software for startups"); I can touch on that more if you want.
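To make the aggregation step concrete, here's a toy sketch; case-insensitive substring matching stands in for real entity recognition, and the brands and responses are made up:

```rust
use std::collections::HashMap;

/// For each brand, count how many of the repeated runs mention it and
/// average the position (rank) at which it appears in each response.
fn share_of_voice<'a>(responses: &[&str], brands: &[&'a str]) -> HashMap<&'a str, (usize, f64)> {
    let mut stats: HashMap<&str, (usize, Vec<usize>)> = HashMap::new();
    for resp in responses {
        let lower = resp.to_lowercase();
        // Rank brands by where they first appear in this response.
        let mut hits: Vec<(&str, usize)> = brands
            .iter()
            .filter_map(|b| lower.find(b.to_lowercase().as_str()).map(|pos| (*b, pos)))
            .collect();
        hits.sort_by_key(|&(_, pos)| pos);
        for (rank, (brand, _pos)) in hits.into_iter().enumerate() {
            let entry = stats.entry(brand).or_insert((0, Vec::new()));
            entry.0 += 1;
            entry.1.push(rank + 1); // 1-based rank within this response
        }
    }
    stats
        .into_iter()
        .map(|(brand, (mentions, ranks))| {
            let avg_rank = ranks.iter().sum::<usize>() as f64 / ranks.len() as f64;
            (brand, (mentions, avg_rank))
        })
        .collect()
}

fn main() {
    // Pretend these are 3 of the ~8 repeated runs of "what are the best running shoes".
    let responses = [
        "Nike is a solid all-rounder, Adidas is okay, and On is on the pricier side.",
        "For most runners: Nike, then On. Adidas if you want something cheaper.",
        "Adidas and Nike dominate this category.",
    ];
    let brands = ["Nike", "Adidas", "On"];
    for (brand, (mentions, avg_rank)) in share_of_voice(&responses, &brands) {
        println!(
            "{brand}: mentioned in {mentions}/{} runs, avg rank {avg_rank:.1}",
            responses.len()
        );
    }
}
```

In practice you'd want proper entity recognition rather than substring matching ("On" the brand vs. "on" the word), but the counting and averaging over repeated runs is the core of it.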
As I understand, in normal SEO the number of unique queries that could be relevant to your product is quite large but you might focus on a small subset of them "running shoes" "best running shoes" "running shoes for 5k" etc. because you assume that those top queries capture a significant portion of the distribution. (e.g. perhaps those 3 queries captures >40% of all queries related to running shoe purchases).
Here the distribution is all queries relevant to your product made by someone who would be a potential customer. Short and directly relevant queries like "running shoes" will presumably appear more times than much longer queries. In short, you can't possibly hope to generate the entire distribution, so you sample a smaller portion of it.
But in LLM SEO it seems that assumption is not true. People will have much longer queries that they write out as full sentences: "I'm training for my first 5k, I have flat feet and tore my ACL four years ago. I mostly run on wet and snowy pavement, what shoe should I get?" which probably makes the number of queries you need to sample to get a large portion of the distribution (40% from above) much higher.
I would even guess it's the opposite and the number of short queries like "running shoes" fed into an LLM without any further back and forth is much lower than longer full sentence queries or even conversational ones. Additionally because the context of the entire conversation is fed into the LLM, the query you need to sample might end up being even longer
for example:
user: "I'm hoping to exercise more to gain more cardiovascular fitness and improve the strength of my joints, what activities could I do?"
LLM: "You're absolutely right that exercise would help improve fitness. Here are some options with pros and cons..."
user: "Let's go with running. What equipment do I need to start running?"
LLM: "You're absolutely right to wonder about the equipment required. You'll need shoes and ..."
user: "What shoes should I buy?"
All of that is to say, this seems to make AI SEO much more difficult than regular SEO. Do you have any approaches to tackle that problem? Off the top of my head I would try generating conversations and queries that could be relevant and estimating their relevance with some embedding model & heuristics about whether keywords or links to you/competitors are mentioned. It's difficult to know how large of a sample is required though without having access to all conversations which OpenAI etc. is unlikely to give you.
Short answer: it depends, and I don't know. When I was doing some testing with prompts like "what should I have for dinner", adding variations ("hey AI", "plz", etc.) doesn't deviate the intention much, as AI is really good at pulling intent. But obviously if you say "I'm on keto, what should I have for dinner" it's going to ignore things like garlic, pesto, and pasta noodles, although it pulls a similar response to "what's a good keto dinner". From there we really assume the user knows their customers and what type of prompts led them to ChatGPT. You might've noticed sites asking if you came from ChatGPT; I would take that a step further and ask them to type the prompt they used.
But you do bring a good perspective, because not all prompts are equal, especially with personalization. So how do we solve that problem? I'm not sure. I have yet to see anything in the industry. The only thing that came close was when a security-focused browser extension started selling data to AEO companies; that's how some companies get "prompt volume data".
I see what you are saying: perhaps no matter the conversation before, as long as it doesn't filter out some products via personalized filters (e.g. dietary restrictions), it will always give the same answers. But I do feel the value prop of these AI chatbots is that they allow personalization. And then it's tough to know if 50% of the users who would previously have googled "best running shoes" instead now ask detailed questions about running shoes given their injury history etc., and whether that changes what answers the chatbot gives.
I feel like without knowing the full distribution, it's really tough to know how many/what variations of the query/conversation you need to sample. This seems like something where OpenAI etc. could offer their own version of this to advertisers and have much better data because they know it all.
Interesting problem though! I always love probability in the real world. Best of luck, I played around with your product and it seems cool.