How can the solar panel itself radiate heat when it's being heated up while generating power? Looking at pictures of the ISS, there are radiators that look like they're there specifically to cool the solar panels.
And even if it's viable, why would you not just cool with air down on Earth? Water is used for cooling because it increases effectiveness significantly, but even a closed-loop system with simple dry-air heat exchangers is quite a lot more effective than radiative cooling.
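Rough numbers for that comparison, per square metre of surface (the heat-transfer coefficient, temperatures, and emissivity are all assumed here, just to show the order of magnitude):

    # Order-of-magnitude comparison: forced-air convection on Earth vs. pure
    # radiation to deep space, per square metre of surface. All inputs assumed.
    SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W/m^2/K^4
    h_air = 50.0               # forced-air convection coefficient, W/m^2/K (assumed)
    T_surface = 60.0 + 273.15  # heat-exchanger / radiator surface temperature, K (assumed)
    T_ambient = 25.0 + 273.15  # ambient air temperature on Earth, K (assumed)
    emissivity = 0.9           # radiator emissivity (assumed)

    q_convection = h_air * (T_surface - T_ambient)   # ~1750 W/m^2
    q_radiation = emissivity * SIGMA * T_surface**4  # ~630 W/m^2, ignoring sunlight hitting the radiator
    print(f"convection: {q_convection:.0f} W/m^2, radiation: {q_radiation:.0f} W/m^2")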
You take the amount of energy absorbed by the solar panels and subtract the amount they radiate. Most things in physics are linear systems that work like this.
It would be way more productive for you to ask these questions to ChatGPT or a similar AI with reasoning. The equations are quite simple, but I'm not going to dump them into an HN comment.
You don't have experience of being in space, so your "intuition" about cooling is worth literally nothing without formulas / numbers
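For a rough back-of-envelope (treating the panel as a flat plate in full sun that radiates from both faces; the absorptivity, emissivity, and conversion efficiency are all assumed numbers):

    # Back-of-envelope: equilibrium temperature of a solar panel in orbit.
    SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W/m^2/K^4
    SOLAR_FLUX = 1361.0    # solar constant at 1 AU, W/m^2
    absorptivity = 0.9     # fraction of sunlight absorbed (assumed)
    emissivity = 0.85      # infrared emissivity of the panel faces (assumed)
    efficiency = 0.20      # fraction converted to electricity rather than heat (assumed)

    heat_in = SOLAR_FLUX * absorptivity * (1.0 - efficiency)  # W/m^2 ending up as heat
    # Equilibrium when heat in = heat radiated from both faces: q = 2 * e * sigma * T^4
    T_eq = (heat_in / (2.0 * emissivity * SIGMA)) ** 0.25
    print(f"equilibrium temperature: {T_eq:.0f} K ({T_eq - 273.15:.0f} C)")

That comes out to roughly 320 K (around 45 C) with these assumptions: the panel settles at whatever temperature balances what it absorbs against what it radiates.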
It's a matter of deploying it for cheaper or with fewer downsides than what can be done on earth. Launching things to space is expensive even with reusable rockets, and a single server blade would need a lot of accompanying tech to power it, cool it, and connect to other satellites and earth.
Right now the only upsides of an expensive satellite acting as a server node would be physical security and avoiding various local environmental laws and effects.
Lower latency is a major one. And not having to buy land and water to power/cool it. Both are fairly limited as far as resources go, and get increasingly expensive with competition.
The major downside is, of course, cost. In my opinion, this has never really stopped humans from building and scaling up things until the economies of scale work out.
> connect to other satellites and earth
If only there was a large number of satellites in low earth orbit and a company with expertise building these ;)
> And not having to buy land and water to power/cool it.
It's interesting that you bring that up as a benefit. If waterless cooling (i.e. a closed cooling system) works in space, wouldn't it work even better on Earth?
Samsung has been kind of side-grading their flagships and offering worse SoCs depending on location. Paired with there plainly being more options for Android, there'll be more variety spread out over the different manufacturers.
NTFS getting corrupted by the tiniest errors would be one reason to use ReFS
Using it for the OS partition is not very well supported right now though (for a consumer): installing etc. works fine, but DISM doesn't support ReFS, so adding features generally doesn't work.
Can't recall the last time I saw a corrupt NTFS volume... even when using Storage Spaces. I'm sure it's happened to someone given Windows is in use on billions of machines, but NTFS becoming corrupt can't be all that common.
Besides, ReFS doesn't enable integrity streams (data checksums) for file data by default.
Computer monitors have been getting a lot better while getting cheaper, with no ads or services. You can get a high refresh rate 4K IPS for about $200 nowadays. Display tech is just advancing faster than other tech at the moment.
Huh, interesting. My experience has always been that computer monitors have been more expensive than TVs, even when the panels are ostensibly the same. I've attributed it to the comparative volumes of TV consumers and desktop computer consumers.
At this point (as opposed to a decade ago) there's arguably no difference between a TV and a monitor anymore outside of packaging and the bundling of a remote and input defaults.
How does this work with respect to using a remote? I know something like a Roku remote would work display-wise, but you usually program it to use the signal that your brand of TV responds to. That way you can use the Roku/whatever remote to turn on the actual TV and control audio. Speaking of, how does audio work for this setup?
HDMI standards allow plugged-in devices to control the power state of the TV. E.g. my Apple TV will turn the TV on when I press a button on the aTV remote and will turn the TV off when I turn the Apple TV off.
Audio is a separate challenge; I'm not sure what you'd do there. Do computer monitors have eARC outputs? None of the ones I have do. Again, if you had an Apple TV you could pair it with a HomePod (or a pair of them) to avoid the issue, but that's a niche solution.
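On the control side, the same CEC commands can be scripted if you're driving the display from a PC instead. A minimal sketch, assuming libCEC's cec-client is installed and a CEC-capable adapter is attached (most desktop GPUs don't expose CEC, so something like a Pulse-Eight USB adapter is usually needed):

    import subprocess

    # Send a single HDMI-CEC command to the display at logical address 0.
    def cec(command: str) -> None:
        # -s: single-command mode, -d 1: keep log output minimal
        subprocess.run(["cec-client", "-s", "-d", "1"],
                       input=command.encode(), check=True)

    cec("on 0")       # power the display on
    cec("standby 0")  # put it back into standby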
Samsung already makes a bunch of "smart monitors", shipping them with the same software they use on TVs. Not sure about other manufacturers, but I'd be surprised if they don't catch up soon.
Why is their browser using so much memory?
I'm quite bad at closing tabs since I've switched to vertical tabs, and I just open new windows instead.
And even with that I don't think I've ever broken 10 GB on Edge, except when I opened many YouTube videos at once and then went through them, which kept the tabs loaded.
Lost faith from what? On x86 mobile, Lunar Lake chips are the clear best for battery life at the moment, and mobile Arrow Lake is competitive with AMD's offerings. The only thing they're missing is a Strix Halo equivalent, but AMD messed that one up and there are like 2 laptops with it.
The new Intel node seems to be kinda weaker than TSMC's going by the frequency numbers of the CPUs, but what'll matter the most in a laptop is real battery life anyway.
Lunar Lake throttles a lot. It can lose 50% of its performance on battery. It's not the same as Apple Silicon, where the performance is exactly the same plugged in or not.
Lunar Lake is also very slow in ST and MT compared to Apple.
Qualcomm's X Elite 2 SoCs have a much better chance of duplicating the MacBook experience.
Nobody is duplicating the MacBook experience, because Apple integrates both the hardware and the OS, while everyone else is fighting Windows and OEMs being horrible at firmware.
LNL should only power throttle when you go to power saver modes; battery life will suffer when you let it boost high on all cores, but you're not getting great battery life doing heavy all-core loads either way. Overall, MT should be better on Panther Lake with the unified architecture, as AFAIK LNL's main problem was being too expensive, so higher-end, high-core-count SKUs were served by mobile Arrow Lake.
And we're also getting what seems to be a very good iGPU, while AMD's iGPUs outside of Strix Halo are barely worth talking about.
ST is about the same as AMD. Apple being ahead is nothing out of the ordinary since their ARM switch: there's the node advantage, what I mentioned about the OS, and just a better architecture, as they plainly have the best people working on it at the moment.
> LNL throttles heavily even on the default profile, not just power saver modes.
This does also show it not changing in other benchmarks, but I don't have an LNL laptop to test things myself, just going off of what people I know tested. The default is still balanced, so the best-performance power plan would, I assume, push it to use its cores normally - on Windows laptops I've owned this could be done with a hotkey.
> Lunar Lake uses TSMC N3 for compute tile. There is no node advantage.
LNL is N3B, Apple is on N3E, which is a slight improvement for efficiency.
> Yet, M4 is 42% faster in ST and M5 is 50% faster based on Geekbench 6 ST.
Like I said, they simply have a better architecture at the moment, one that's also more focused on the kind of client workloads GB benchmarks, because their use cases are narrower.
If you compare something like optimized SIMD, Intel/AMD will come out on top in perf/watt.
And I'm not sure why being behind the market leader would make one lose faith in Intel; their most recent client fuckup was the Raptor Lake instability, and I'd say that was handled decently. For now there's nothing else that'd indicate Windows on ARM getting to Apple-level battery performance without all of the vertical integration.
ETA: looking at things, the throttling behaviour seems to be very much OEM-dependent, though the tradeoffs will always remain the same.
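On the power-plan point above, switching schemes can also be scripted rather than bound to a hotkey. A rough sketch, assuming the classic built-in plan GUIDs are still present (newer laptops often expose this as "power modes" tied to the slider instead):

    import subprocess

    # Classic built-in Windows power scheme GUIDs (assumed present on this machine).
    BALANCED = "381b4222-f694-41f0-9685-ff5bb260df2e"
    HIGH_PERFORMANCE = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

    def set_power_plan(guid: str) -> None:
        # powercfg is the built-in Windows CLI for managing power schemes
        subprocess.run(["powercfg", "/setactive", guid], check=True)

    set_power_plan(HIGH_PERFORMANCE)  # e.g. when plugged in; switch back with set_power_plan(BALANCED)
    print(subprocess.run(["powercfg", "/getactivescheme"],
                         capture_output=True, text=True).stdout)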
> This does also show it not changing in other benchmarks, but I don't have an LNL laptop to test things myself, just going off of what people I know tested. The default is still balanced, so the best-performance power plan would, I assume, push it to use its cores normally - on Windows laptops I've owned this could be done with a hotkey.
It literally throttles in every benchmark shown. Some more than others. It throttles even more than the older Intel SoC LNL replaced.
> LNL is N3B, Apple is on N3E, which is a slight improvement for efficiency.
Still the same family. The difference is tiny. Not nearly enough to make up the vast difference between LNL and M4. Note that N3B actually has higher density than N3E.
> Like I said, they simply have a better architecture at the moment, one that's also more focused on the kind of client workloads GB benchmarks, because their use cases are narrower. If you compare something like optimized SIMD, Intel/AMD will come out on top in perf/watt.
I did recently see someone compare mpv and VLC on an 8K HDR @ 60 fps file, with mpv really lagging while VLC handled it fine.
I could confirm that mpv lags, but I don't have VLC, so I'm not sure if it's actually better in that specific case or if it did something like skip actual HDR.
This may just be because mpv has higher-quality default settings for scaling and tonemapping. Try mpv with profile=fast, maybe. To properly compare mpv's and VLC's performance you'd need to fully match all settings across both players.
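Something like this as a starting point, assuming mpv is on PATH (the filename is a placeholder):

    import subprocess

    # Play the sample with mpv's built-in "fast" profile (cheaper scaling and
    # tonemapping defaults) and prefer hardware decoding, which is closer to
    # what VLC does out of the box.
    subprocess.run([
        "mpv",
        "--profile=fast",
        "--hwdec=auto-safe",
        "sample_8k_hdr_av1.mkv",  # placeholder filename
    ], check=True)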
It was with the fast profile, using both software and hardware decoding; an important detail I forgot was that the video was AV1. I don't have the link to it now, but it was from Jellyfin's test files.