A reporter just sent me a link to this: https://www.ycombinator.com/launches/LmD-lumen-orbit-data-centers-in-space
He wants to interview me about some fucking start-up company's planned 16 km^2 solar panels and fucking *AI data centers in orbit*
I think maybe I'll just walk into the middle of my hayfield and scream for a while instead. @Jason want to join? (Others are welcome too). Screaming begins in 5... 4... 3...
@sundogplanets I was under the impression that orbit is a moderately hostile environment for computers. Hence things like rad-hard CPUs, is that not the case? Meaning - where are they getting rad-hard GPUs? With luck they'll burn their VC money on the ground and not actually put anything up.
@mhkohne @sundogplanets That's true, but as referred to elsewhere in this thread, cooling would be the real killer. Cooling is very difficult without air or water with which to exchange heat, and it's always a significant design factor for spacecraft. An AI data center creates a LOT of heat.
@internic @sundogplanets They do mention using radiant cooling (which is your only choice in this case). And presumably the idea is to hide behind the solar panels to help with that.
Honestly at this point I'm wondering if the idea is to simply take the VC money and run off to Eastern Europe or something. There's no way they are putting this thing up in the stated timeframe.
@michaelgemar @mhkohne @sundogplanets I don't know exactly what a realistic answer would be, but just for an order of magnitude, from physics, if we assume that the CPUs have a maximum temperature around 80 C, that's around 350 K. If we imagine the radiator is around that temperature then if it were 100% emissive it would radiate about 850 W/m^2. Using some random value found online for typical server power consumption of 10 kW for a fully populated rack, that would work out to ~11 m^2 of radiator per rack's worth of computing. But, again, that's just an order of magnitude, and entails a number of questionable assumptions.
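(If anyone wants to poke at that estimate, here's a minimal Python sketch of the same Stefan-Boltzmann arithmetic. The ~350 K radiator, perfect emissivity, and 10 kW per rack are just the assumptions from the post above; it ignores view factors, sunlight, and Earthshine, so it's order-of-magnitude only.)

```python
# Rough radiator sizing via the Stefan-Boltzmann law.
# Assumptions from the post above: ~350 K radiator, emissivity 1.0, 10 kW per rack;
# ignores view factors, solar loading, and Earthshine.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area_m2(heat_w: float, temp_k: float = 350.0, emissivity: float = 1.0) -> float:
    """Radiator area needed to reject `heat_w` watts at temperature `temp_k`."""
    flux = emissivity * SIGMA * temp_k**4  # W/m^2 radiated per unit area (~850 at 350 K)
    return heat_w / flux

print(f"{radiator_area_m2(10_000):.1f} m^2 per 10 kW rack")  # ~12 m^2
```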
@internic @michaelgemar @mhkohne @sundogplanets Seeing as they are explicitly talking about "AI Datacenters", 10 kW per rack is a bit of an understatement. This stuff is hungry... Nvidia's H100 GPUs come in clusters of 8, each GPU pulling 700 W, with CPUs etc. installed separately and optimistically pulling another 400 W. That's a 2U server consuming 6 kW. A typical rack is 42U, so even assuming 4U per server and keeping the last 2U for overhead, that's 60 kW per rack, or 120 kW in the normal 2U configuration.
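(Running the same back-of-the-envelope sketch with the H100-style numbers from this post: 8 × 700 W GPUs plus ~400 W of CPU/overhead per server, and either 10 or 20 servers in a 42U rack. All of these are the figures quoted above, not vendor specs, so treat the output as order-of-magnitude only.)

```python
# Per-rack power for H100-style servers, using the figures quoted in this post
# (8 GPUs x 700 W plus ~400 W of CPU/overhead per server; not vendor specs).
SERVER_W = 8 * 700 + 400           # ~6 kW per 8-GPU server
RADIATOR_FLUX_W_M2 = 850           # ~350 K blackbody flux from the estimate upthread

for servers_per_rack in (10, 20):  # 4U servers vs. denser 2U servers in a 42U rack
    rack_w = servers_per_rack * SERVER_W
    print(f"{servers_per_rack} servers: {rack_w / 1000:.0f} kW "
          f"-> ~{rack_w / RADIATOR_FLUX_W_M2:.0f} m^2 of radiator")
# 10 servers: 60 kW -> ~71 m^2 of radiator
# 20 servers: 120 kW -> ~141 m^2 of radiator
```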