I still have one of those lying around in the drawer. It's the backup phone, and every time I or my partner needs to use it I'm surprised at how well it still works.
Sunscreen for the face should just be daily. My dermatologist recommends it even in Berlin in winter.
And then, this is most critical, use mineral or at least creamy sunscreen (sprays barely do anything) and put it on a few minutes before sun exposure - not when you start feeling it.
Agree that the UV index is not particularly useful - it's kind of obvious. Still good to know though.
Task manager takes 10 seconds to load the list of processes.
Right-click on the desktop takes about 1.5-2 seconds to show the 'new' context menu.
Start menu is actually fast to start drawing but has a stupid animation that takes about half a second to fully load.
I sort of understand how the anti-consumer 'features' (ads) get added to a piece of software. But I have no idea how they manage to continuously degrade the experience of existing parts of the system for seemingly no one's benefit.
Are there any details around how the round-trip and exchange of data (CPU<->GPU) is implemented in order to not be a big (partially-hidden) performance hit?
e.g. this code seems like it would entirely run on the CPU?
print!("Enter your name: ");
let _ = std::io::stdout().flush();
let mut name = String::new();
std::io::stdin().read_line(&mut name).unwrap();
But what if we concatenated a number to the string that was calculated on the GPU or if we take a number:
print!("Enter a number: ");
[...] // string number has to be converted to a float and sent to the GPU
// Some calculations with that number performed on the GPU
print!("The result is: {}", the_result); // Number needs to be sent back to the CPU
Or maybe I am misunderstanding how this is supposed to work?
"We leverage APIs like CUDA streams to avoid blocking the GPU while the host processes requests.", so I'm guessing it would let the other GPU threads go about their lives while that one waits for the ACK from the CPU.
I once wrote a prototype async IO runtime for GLSL (https://github.com/kig/glslscript); it used a shared memory buffer and spinlocks. The GPU would write "hey do this" into the IO buffer, then go about doing other stuff until it needed the results, and spinlock to wait for the results to arrive from the CPU. I remember this being a total pain, as you need to be aware of how PCIe DMA works on some level: having your spinlock int written to doesn't mean that the rest of the memory write has finished.
Why are you assuming that this is intended to be performant, compared to code that properly segregates the CPU- and GPU-side? It seems clear to me that the latter will be a win.
I am not assuming it to be performant, but if you use this in earnest and the implementation is naive, you'll quickly have a bad time from all the data being copied back and forth.
In the end, people program for GPUs not because it's more fun (it's not!), but because they can get more performance out of it for their specific task.
Replaced the keyboard (orange juice spillage), screen (upgraded to a higher-resolution panel), HDD (with an SSD, of course), RAM, Wi-Fi adapter (Wi-Fi 6), as well as the battery.
I now use an X1 Nano and while it's nice (and very light!) I am sad that the upgradability is nowhere near as good.
I have done this specifically with the second item in the list in the OP.
Not only did I do it to get free shipping, I got it to get free international shipping.
For extra bonus CO2 points, the other item was coming from a different country.
So I basically paid $0.42 to have a single packet of Kool-Aid shipped across the Pacific Ocean.
(I'd never had Kool-Aid before and I must say I was disappointed.)
I've been using XFCE for several years on 4k screens and I agree that it's not great out of the box.
Once you've set it up it works pretty well though.
Now if only I could remember what I did to get it working nicely...(luckily I've had the same installation of XFCE on my machine for the past 5 years so haven't had to fiddle with that in a while)