Of course you can build these things if you really want to.
But there is no universe in which it's possible to build them economically.
Not even close. The numbers are simply ridiculous.
And that's not even accounting for the fact that getting even one of these things into orbit is an absolutely huge R&D project that will take years - by which time technology and requirements will have moved on.
Lift costs are not actually dropping like that lately. Starship is not yet production-ready (and you need to pack it full of payloads to achieve those numbers). What we've seen is the artificial margins of the old launch business being cut away and prices settling into an economic equilibrium with sane margins. Regardless of the launch price, space-based hardware will be much more expensive than planet-based; the only question is whether it will be, optimistically, "only" 10x more expensive or, pessimistically, 100x more expensive.
I don't get this "inevitable" conclusion. What is even the purpose of a space datacenter in the first place? What would justify paying an order of magnitude more than conventional competitors, especially if the server in question is a dumb number cruncher like a stack of GPUs? I could understand putting some black NSA data up there, or a drug cartel's accounting backup, but to multiply some LLM numbers you have zero need of an extraterritorial, lawless DC. There is no business incentive for that.
The transistors are experimental, and no one is building high-performance chips out of them.
You can't just scale current silicon nodes to some other substrate.
Even if you could, there's a huge difference between managing the temperature of a single transistor, managing temps on a wafer, and managing temps in a block of servers running close to the melting point of copper.
σ in the Stefan-Boltzmann law is such a small number that radiation makes very little difference until your radiators get hot enough to start melting.
You not only need absolutely huge radiators for a space data centre, you also need an active cooling/pumping system to make sure the heat is distributed evenly across them.
I'm fairly sure no one has built a kilometer-sized fridge radiator before, especially not in space.
You can't just stick some big metal fins on a box and call it a day.
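To put rough numbers on that T^4 scaling, here is a minimal back-of-envelope sketch (my own illustration, not from the thread) of ideal blackbody radiated power per square metre at a few radiator temperatures:

    # Stefan-Boltzmann: power radiated per unit area is P/A = sigma * T^4.
    # Emissivity is assumed to be 1 (ideal blackbody) and the background 0 K.
    SIGMA = 5.670e-8  # W / (m^2 K^4)

    for temp_c in (20, 80, 150, 300, 600):
        temp_k = temp_c + 273.15
        flux = SIGMA * temp_k ** 4  # W per m^2 of radiating surface
        print(f"{temp_c:>4} C  ->  about {flux:7.0f} W/m^2")

At server-friendly temperatures you only get on the order of a kilowatt per square metre, which is why the radiators end up so large.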
Out of curiosity, I plugged in the numbers. I have solar at home, and a 2 m² panel makes about 500 W; I assume one in orbit would be a bit more efficient without the atmosphere, and a bit fancier, so say it generates 750 W.
If we run the radiators at 80 °C (a reasonable temp for silicon), that's about 350 K. Assuming the surroundings are at 0 K, the radiator can radiate away about 1500 W, so roughly double.
Depending on what percentage of the time we spend in sunlight (it depends on the orbit, but the number is between 50% and 100%, with 66% a good estimate for LEO), we can reduce the radiator surface area by that amount.
So a LEO satellite in a decaying orbit (designed to crash back onto the Earth after 3 years, or one GPU generation) could work technically with 33% of the solar panel area dedicated to cooling.
Realistically, I'd say solar panels are so cheap that it'd make more sense to build a huge solar park in Africa and accept the much lower yield (8 hours of sunlight is about a 33% duty cycle, and the panels produce about 66% of what they would in LEO), since the rest of the infrastructure is vastly simpler.
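For anyone who wants to check the arithmetic, here is a minimal sketch of the estimate above, using the stated assumptions (80 °C ≈ 350 K radiator facing 0 K, radiating area equal to the 2 m² panel, ideal emissivity). It comes out nearer 1700 W than 1500 W, but the "roughly double" conclusion holds:

    # All inputs are the parent comment's assumptions, not measured values.
    SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W / (m^2 K^4)

    panel_area_m2 = 2.0        # the home panel used as a reference
    electrical_w  = 750.0      # assumed output of the same panel in orbit
    radiator_t_k  = 350.0      # ~80 C, "a reasonable temp for silicon"

    # Heat rejected by a radiator the same size as the panel, one face:
    radiated_w = SIGMA * radiator_t_k ** 4 * panel_area_m2
    print(f"radiator capacity: {radiated_w:.0f} W vs {electrical_w:.0f} W of electricity")

    # The comment's LEO duty-cycle adjustment (66% of the orbit in sunlight):
    sunlit_fraction = 0.66
    print(f"orbit-averaged electrical load: {electrical_w * sunlit_fraction:.0f} W")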
This argument assumes you only need to radiate away the energy the panel actually turns into electricity, but you also need to dissipate all the absorbed heat that wasn't converted. The solar bolometric flux at Earth is about 1300 W/m², or 2600 W for 2 m². That works out to an efficiency of ~20% for your home solar, and your assumed value of 750 W yields an efficiency of ~30%, which is reasonable for space-rated solar. But assuming an overall albedo of ~5%, that means you were only accounting for about a third of the total energy that needs to be radiated.
Put another way, 2 m² intercepts 2600 W of solar power but only radiates ~1700 W at 350 K, which means it needs to run at a higher temperature, roughly 110-115 °C, to reach equilibrium.
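A minimal sketch of that correction, reusing the round numbers above (1300 W/m² flux, ~5% albedo, radiation from the 2 m² face only); subtracting the reflected fraction puts the equilibrium a little above 110 °C:

    # Round numbers from the comments above; ideal emissivity, one radiating face.
    SIGMA = 5.670e-8             # W / (m^2 K^4)

    area_m2    = 2.0
    solar_flux = 1300.0          # W/m^2 at Earth
    albedo     = 0.05            # assumed reflected fraction

    absorbed_w   = solar_flux * area_m2 * (1 - albedo)   # ~2470 W to reject
    radiated_350 = SIGMA * 350.0 ** 4 * area_m2          # ~1700 W at 350 K

    # Temperature at which the panel radiates exactly what it absorbs:
    t_eq_k = (absorbed_w / (SIGMA * area_m2)) ** 0.25
    print(f"absorbed {absorbed_w:.0f} W, radiated at 350 K {radiated_350:.0f} W")
    print(f"equilibrium temperature ~{t_eq_k - 273.15:.0f} C")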
It receives around 2.5 kW[0] of energy in orbit, of which it converts about 500 W to electricity; a small amount is reflected and the rest ends up as heat, so use roughly 1 kW/m² as your heat input.
> If we run the radiators at 80 °C (a reasonable temp for silicon), that's about 350 K. Assuming the surroundings are at 0 K, the radiator can radiate away about 1500 W, so roughly double.
1500 W for 2 m² is less than 2000 W, so your panel will heat up.
> Depending on what percentage of the time we spend in sunlight (it depends on the orbit, but the number is between 50% and 100%, with 66% a good estimate for LEO), we can reduce the radiator surface area by that amount.
You need enough radiators for peak capacity, not just for the average. It's analogous to how you can't put a smaller heat sink on your home PC just because you only run it 66% of the time.
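As a rough illustration of that peak-sizing point (again a sketch with an ideal blackbody, a single radiating face, and the ~2 kW heat load suggested above), the radiator area you need depends strongly on how hot you're willing to run it:

    # Area needed to reject a given peak heat load at several radiator temps.
    SIGMA = 5.670e-8      # W / (m^2 K^4)
    peak_heat_w = 2000.0  # ~1 kW/m^2 absorbed over the 2 m^2 panel

    for temp_c in (80, 150, 250):
        temp_k = temp_c + 273.15
        area_m2 = peak_heat_w / (SIGMA * temp_k ** 4)
        print(f"radiator at {temp_c} C: about {area_m2:.1f} m^2 needed")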
Yes, it's fun. One small note: for the outside temperature you can use 3 K, the temperature of the cosmic microwave background. Not that it would meaningfully change your conclusion.
Financial assets are extremely easy to hide. Set up an international chain of shell companies, foundations, and trusts, install a fake beneficial owner or trustee or two at various points, carve out deductibles for IP and "services", and the ownership becomes completely opaque.
And that's just the legal version.
I know someone who used to work as a business lawyer. She spent years trying to track down the true owners in various cases. At the very least it's an expensive business. And sometimes it just couldn't be done.
Of course governments can cut the knot with physical assets, walk into a building with troops and/or police, and say "This is ours now." Or they can order banks to hand over the money in accounts.
But before they can do that, there has to be some certainty about the owner. And even getting part way there can take a while and cost a lot.
A flat tax on wealth would be extremely easy to enforce. Basically, if the bill for an asset doesn't get paid, the asset goes to the government, and the bill is trivial to calculate because it doesn't depend on anything else the owning entity holds.
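As a toy illustration of how little information that billing needs (all names, values, and the rate here are made up):

    # Each registered asset is billed on its own assessed value; nothing
    # about the owner's wider holdings or corporate structure is required.
    TAX_RATE = 0.01  # hypothetical 1% annual flat rate

    registered_assets = {
        "parcel-0042":    2_500_000,  # assessed value of a land parcel
        "vessel-berth-7":   900_000,
    }

    for asset_id, assessed_value in registered_assets.items():
        bill = assessed_value * TAX_RATE
        print(f"{asset_id}: bill {bill:,.0f}; if unpaid, the asset forfeits")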
If you apply an automatic tax on bonds, people will simply not buy them unless you also increase their returns. It's a pointless exercise.
The same goes for stocks; the circle is just a bit bigger in that case.
Capital gains tax is just a bad tax that distorts decisions and makes things less efficient for no reason. It's much better to tax resources (mainly land, but also infrastructure usage) and charge for the enforcement of IP/patents.
Not necessarily. It depends on whether the process is deterministic and repeatable.
If an AI generates a process more quickly than a human, and the process can be run deterministically, and the outputs are testable, then the process can run without direct human supervision after initial testing - which is how most automated processes work.
The testing should happen anyway, so any speed increase in process generation is a productivity gain.
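As a minimal sketch of that idea (the function body and test cases here are hypothetical stand-ins): an AI-generated process, once frozen, can be validated against known cases and then run unattended, just like any other automated step.

    def normalize_phone(raw: str) -> str:
        """Pretend this body was generated by an AI and then frozen."""
        digits = "".join(ch for ch in raw if ch.isdigit())
        return f"+1{digits[-10:]}" if len(digits) >= 10 else ""

    # Deterministic process + fixed inputs -> repeatable, testable outputs.
    GOLDEN_CASES = {
        "(555) 123-4567": "+15551234567",
        "555.123.4567":   "+15551234567",
        "not a number":   "",
    }

    assert all(normalize_phone(inp) == out for inp, out in GOLDEN_CASES.items())
    # After this one-time check, the function can run unsupervised in a pipeline.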
Human monitoring only matters if the AI is continually improvising new solutions to dynamic problems and the solutions are significantly wrong/unreliable.
Which is a management/analysis problem, and no different in principle to managing a team.
The key difference in practice is that you can hire and fire people on a team, you can intervene to change goals and culture, and you can rearrange roles.
With an agentic workflow you can change the prompts, use different models, and redesign the flow. But your choices are more constrained.
The issue is LLMs are, by design, non-deterministic.
That means that, with the current technology, there can never be a deterministic agent.
Now obviously, humans aren't deterministic either, but the error bars are a lot closer together than they are with LLMs these days.
An easy example to point at is the coding agent that deleted someone's home directory, which was circulating around. I'm not saying a human has never done that, but it's far less likely, because it's so far outside the realm of normal operations.
So as of today, we need humans in the loop. And this is understood by the people making these products; that's why they have all those permission prompts asking you to approve commands before they run.
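A minimal sketch of what such a gate looks like (entirely hypothetical, not any particular product's implementation): nothing the agent proposes runs until a person approves it, and obviously destructive commands are refused outright.

    import shlex
    import subprocess

    BLOCKLIST = ("rm -rf", "mkfs", "dd if=")  # crude illustrative deny-list

    def run_with_approval(proposed_command: str) -> None:
        if any(bad in proposed_command for bad in BLOCKLIST):
            print(f"refused outright: {proposed_command!r}")
            return
        answer = input(f"Agent wants to run: {proposed_command!r}  [y/N] ")
        if answer.strip().lower() != "y":
            print("skipped.")
            return
        subprocess.run(shlex.split(proposed_command), check=False)

    # e.g. run_with_approval("ls -la")  # still requires a human to type "y"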
> An easy example to point at is the coding agent that deleted someone's home directory, which was circulating around. I'm not saying a human has never done that, but it's far less likely, because it's so far outside the realm of normal operations.
And it would be far less likely that a human deleted someone else's home directory; and even if he did, there would be someone to be angry at.
If you read the post, he didn't ask it to delete his home directory. He misread the command it generated and approved it when he shouldn't have.
That's literally exactly the kind of non-determinism I'm talking about. If he'd just left the agent to its own devices, the exact same thing would have happened.
Now you may argue this highlights that people make catastrophic mistakes too, but I'm not sure I agree.
Or at least, they don't often make that kind of mistake. Not saying they don't make any catastrophic mistakes (they obviously do...).
We know people tend to click "accept" on these kinds of permission prompts with only a cursory read of what they're approving. And the more of these prompts you get, the more likely you are to just click "yes" or whatever to get through them.
If anything this kind of perfectly highlights some of the ironies referenced in the post itself.
Mining asteroids makes no sense whatsoever with any currently imaginable practical tech, especially not economically. The numbers for even the most basic solutions just don't work, and anything cleverer - like adding thrusters to chunks of metal and firing them at the Earth - has one or two rather obvious issues.
The Moon is interesting because it's there, it's fairly close, it's a test bed for off-world construction, manufacturing, and life support, and there are experiments you can do on the far side that aren't possible elsewhere.
Especially big telescopes.
It has many of the same life support issues as Mars, and any Moon solutions are likely to work on Mars and the asteroids, more quickly and successfully than trying to do the same R&D far, far away.
Will it pay for itself? Not for a long, long time. But frontier projects rarely do.
The benefit comes from the investment, the R&D, the new science and engineering, and the jobs created.
It's also handy if you need a remote off-site backup.
Mining asteroids wouldn't be for Earth - it would be for satellites or LEO, or possibly even Mars, which is a lot closer to the asteroids than Earth is and may need some extra raw materials we don't want to pay the horrendous cost of lifting out of Earth's gravity well.
The Moon has nothing to offer Mars explorers, as everything will be different, and solutions for the unique lunar conditions (two weeks of darkness, temperature extremes, moon dust, vacuum) do not apply to Mars at all. It's like saying living under the ocean is good practice for living in the Arctic, but we should start under the ocean because it's closer.
> Mining asteroids makes no sense whatsoever with any currently imaginable practical tech, especially not economically.
With current tech, it's practical enough to extract rocks from a rock. We've already done this on a comet, which I think is much harder. With current economics, it's not practical to fund such an endeavor, even if it were to haul back an asteroid made of solid gold. Regardless, we're discussing the far future rather than the current state.
If raw materials (again, an unknown) continue to become scarcer, it's hard to say what economics might support extra-planetary resource collection. What's for sure is that mining Mars for water, metals, and the like will be harder than mining asteroids for them.
Mining asteroids makes no sense in the current economy with our current technology. But working towards engineering solutions that make mining asteroids make sense makes sense (if that makes sense).
However, it is much easier to see us send robots to mine these asteroids, or send robots to the Moon to build a giant telescope on the far side (if that makes sense), than it is to see us build cities on the Moon to build said telescope and to mine those asteroids.
You see, the difference here is that the end goal of mining asteroids is resources being sent to Earth and exploited, while the goal of space settlements is the settlements themselves - some hypothetical space expansion is the goal in itself, and that makes no sense. Nobody's life will improve from space expansion (except the grifters', during the grift).
> Nobody's life will improve from space expansion (except the grifters', during the grift).
Aspiring to goals and accomplishing them makes life worth living to a lot of people. Furthermore, humanity seems to have an innate drive to explore and learn.
Even to those left at home, it's inspirational to think that there are people who are taking steps to explore the universe.
Maybe it won't help anyone live but it will give a lot of people something to live for.
This was the argument about Fury Road (mostly real) vs Furiosa (a lot of CGI).
But only bad CGI is visible. I guarantee you have watched CGI footage and not noticed. At all.
The problem over the last decade or so hasn't been the technical limits of CGI, but studio unwillingness to spend enough on it to make it good.
And directors have also become less creative. You can find UK newsreels from the 50s on YouTube, and some of the direction and editing are superb - a beautiful mix of abstraction, framing, and narrative.
Most modern directors don't have that kind of visual literacy. The emphasis is more on spectacle and trying to bludgeon audiences into submission, not on tastefulness and visual craft.
This was the argument about Fury Road (mostly real)
Fury Road is pure wall to wall CGI. People keep pointing to it as some example of doing things with live action when the entire movie is soaked with CG and compositing.
It's a lot of CGI, but done in realistic ways. A lot of the examples from the article (which is a very good article, thank you for linking it) were mostly about paint-outs, color grading, or background elements.
There's a good chunk of modern blockbusters that will CGI everything in a scene except the lead actor's face - and sometimes that too.
Paint-outs predate computers; they used to paint out wires and whatnot by hand, and it usually looked just as good.
> Compositing
Predates computers. They've been doing it since forever with miniature overlays, matte paintings, chromakey, double exposures, and cutting up film negatives with exacto blades.
> color grading
Literal cancer which ruins movies every goddamn time. The fact that they shoot movies with this kind of manipulation in mind changes how they use lighting and makes everything flat - no shadows, no depth - so everything now gets shot like a soap opera. The same applies to heavy use of compositing. To make it cheaper to abuse compositing, mostly so the producers can "design by committee" the movie after all the filming is done, they've destroyed how they light and shoot scenes. Everything is close-ups on actors, blurred backgrounds, flat lighting, and fast cuts to hide the lazy work. Cancer.
I'm talking about Fury Road too, BTW. It's crap. Watch the original Mad Max, not Road Warrior, then watch Fury Road. The first is a real movie with heart and soul, and the world it depicts feels real. The latter feels like a video game, except it somehow comes out looking even less inspired and creative than the actual Mad Max video game that came out at the same time.
But yeah, they made some real weird cars for the movie. That's fine, I guess. The first movie didn't need weird cars; it had this thing called characters. Characters who felt like real people, not freaks from a comic book.
Exactly - they've been doing paint-outs and composite shots forever! It doesn't feel fundamentally different to me to do it "on a computer." They aren't using it to show off, just to make the scene look how you'd expect it to.
They've also been doing color grading forever - digital just makes it way cheaper and easier. Before, you'd have to do photochemical tricks to the film, and you would use different film for different vibes.
I'd argue that the ease of digital manipulation has led some studios to do what you say - postpone creativity until after the movie is mostly shot, which leads to that design-by-committee feeling. That sense of 'don't worry, we'll fix the lighting in the editing room' is the same sloppiness as 'and then the big gorilla will use his magic attack and it will look really cool,' without any thought given to what it's actually going to look like. But that's not really a failure of CGI itself - that's a failure of vision, right? If you procrastinate making artistic decisions for long enough, there's not actually going to be any art in the movie once it's done.
I have watched the original Mad Max, and it was pretty alright. If I had watched it at the right age, I probably would have imprinted on it.
It used to be the case that movies had to be made carefully, with the intended look in mind while they were shooting. Compositing and the like aren't new, as we both know, but the way they're used has changed; they're used far more than ever before, with important design decisions about the look of the movie deferred to the very last minute, and everything up to that point done in a way that facilitates late, last-minute changes. This is absolute poison for cinematography as an art. Very few big-budget movies made in recent years have any artistic merit, for this reason. Producers now feel they have the technology to make all the decisions that, by technical and logistical necessity, the directors/cinematographers would have had to make themselves years ago. And producers are just assholes with money; they cannot make art.
With respect to Mad Max, I think it aged like a fine wine. I didn't first see it when I was young, I saw Road Warrior first. But Road Warrior and everything after it is very camp. Mad Max is more grounded and feels like a commentary on our times, not pure fantasy spectacle. I think the best time to watch Mad Max was the 70s, and the second best time is probably today. In the 90s or 00s it wouldn't have hit right.
I'd argue that the ease of digital manipulation has led some studios to do what you say - postpone creativity until after the movie is mostly shot,
None of this is true. You can't shoot plates and do whatever you want later. Even basic effects shots take intricate planning. They were talking about cleaning up mistakes and small details.
which leads to that design-by-committee feeling
I'm not sure what this means in the context of a movie but it isn't how movies are made.
There are art directors, production designers and vfx supervisors and they answer to the director. Movies are the opposite of design by committee. It isn't a bunch of people compromising, it is the director making decisions and approving every step.
that sense of 'don't worry, we'll fix the lighting in the editing room'
This doesn't happen, because it isn't how anything works. You can't fix lighting in editing.
the same sloppiness as 'and then the big gorilla will use his magic attack and it will look really cool,' without any thought given to what it's actually going to look like.
Enormous thought and planning is given to every stage. People dislike lots of effects in fantasy or comic-book movies and attribute that to sloppiness or apathy, but that sloppiness simply does not happen in big-budget movies. There are multiple stages of gathering reference, art direction, and early tests, many times before any photography is shot.
If you procrastinate making artistic decisions for long enough, there's not actually going to be any art in the movie once it's done.
Not only does this not happen, it doesn't make sense. Just because you don't like something that doesn't mean huge amounts of work and planning didn't go into it.
The person I replied to said it was "mostly real". Lots of CG is done in realistic ways but people pick and choose what they decide is good based on the movies they already like. Fury Road has somehow become an example of "doing things for real" when the whole movie is non stop CG shots.
A lot of the examples from the article (which is a very good article, thank you for linking it) were mostly about paint-outs, color grading, or background elements.
No, they weren't: there are CG landscapes, CG mountains, CG canyons, CG crowds, CG storms, CG cars, CG arm replacements, and many entirely CG shots. It's the whole movie.
Are you suggesting it's not noticeable in those movies? I found it distracting several times in, I think, every one of those (maybe the least in 1917? And I haven't seen Ford v Ferrari, but I have seen all the rest). A few entire scenes or sequences in TG:M look awful, and it's usually the mundane ones that wouldn't even have been effects shots in a pre-CG movie, not the aircraft action stuff. Alien: Romulus looks fake for practically the whole movie (that one didn't ruin it for me or anything, but it had an effect like the Riddick movies, of being obviously mostly a cartoon, though of course not as awful about it as those were).
Well, I guess it wasn't exactly distracting in Barbie because that's practically a marionette movie a la Thunderbirds, so it's not really trying not to look off.
> Are you suggesting it's not noticeable in those movies
You can check the YouTube link I posted. You'd be hard-pressed to notice the good CGI in those movies.
> I found it distracting several times in I think every one of those
Honestly, I really doubt you noticed that much CGI. Well, unless you go in already primed to discount everything as CGI (whether or not it's actually CGI).