While it is true that exercise itself does not make you spend much energy, doesn't strength training build additional muscle which in turn makes you need more energy to maintain that muscle, so you increase your basal metabolic rate? Sounds like it'd be more effective for losing weight than cardio on principle.
You are correct. It's just that gaining an appreciable amount of muscle takes a long time. So for it to have any impact would take a couple years of growth
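Back-of-envelope, the effect is small even after the muscle is gained. A rough sketch, assuming the commonly cited (but approximate) figure of ~13 kcal per kg of muscle per day at rest:

```python
# Assumed literature value; estimates of muscle's resting energy cost vary,
# often quoted around 10-15 kcal/kg/day.
KCAL_PER_KG_MUSCLE_PER_DAY = 13

def bmr_increase(extra_muscle_kg):
    """Rough extra daily basal expenditure from added muscle, in kcal/day."""
    return extra_muscle_kg * KCAL_PER_KG_MUSCLE_PER_DAY

# Even an ambitious 5 kg of new muscle (often a year or two of training):
print(bmr_increase(5))  # 65 kcal/day -- roughly one small apple
```

So the BMR boost is real but modest; the bigger weight-loss lever is usually diet.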
If you enable nested virtualization on your host and shove Valorant into a VM with Hyper-V (through what I believe is a Windows feature, though I forget the name), Valorant should actually run. Or at least it did a few months ago; not sure if it still works. Worth a try.
Officers are probably more educated than the general populace, since they almost always need a university degree, but enlisted personnel might even be high school dropouts, so I wouldn't trust any regular soldier to be humble and respectful.
You're just completely wrong and obviously didn't bother to check the facts before commenting. The US military has a much lower proportion of high school dropouts than the general public. Your lack of trust is irrational, elitist, and offensive.
Of course there is. It's vital to check your calculations one way or another, and cross-checking with other humans who know what they're doing ought to yield the correct answer eventually. I suppose this is mostly useful when nobody around you knows how to use library X and everyone uses Y, so the only other practical option for cross-checking is other humans.
Sure, the stuff in the tray goes away, but Wine is not a sandbox. Software can still look into your filesystem and processes. You need Flatpak for proper sandboxing.
Luminosity is the number of particles (in the LHC's case, protons) flowing per unit area and unit time.
Additionally, integrated luminosity is luminosity integrated over time, and has units of inverse area. It is the main metric used when CERN releases data and shares how large a given data set is; this is done such that you can take the cross section of a given process (like Higgs production), multiply it by this value, and get how many events of that type occurred (how many Higgs were produced).
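The cross-section-times-integrated-luminosity relation can be sketched in a few lines; the cross section and luminosity numbers below are rough illustrative values I'm assuming, not official CERN figures:

```python
# Expected event count = cross section (sigma) x integrated luminosity (L_int).
# Assumed rough values: ~55 pb total Higgs production at 13 TeV,
# ~139 fb^-1 for a full Run 2 dataset. Illustrative only.

sigma_higgs_pb = 55.0   # assumed cross section, picobarns
l_int_fb = 139.0        # assumed integrated luminosity, inverse femtobarns

# Convert to matching units so they cancel: pb x pb^-1 = events.
l_int_pb = l_int_fb * 1e3  # 1 fb^-1 = 1000 pb^-1

n_events = sigma_higgs_pb * l_int_pb
print(f"~{n_events:.2e} Higgs produced")  # roughly 7.6 million over the dataset
```

Only a small fraction of those are actually reconstructed, of course, since each decay channel has its own branching ratio and detector efficiency.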
You clearly didn't think your comment through. Let's say it's your first time cooking. If you have no idea how much of each ingredient to add, how in the world are you supposed to make a decent meal? I've personally fudged a few myself, adding too little spice or too many pickles.
I’m a good cook now but I really didn’t start with recipes. I started out cooking the items according to the directions on the package (yes, a recipe of sorts) and then combining the stuff I liked.
That is not quite true. Miners that know what they're doing will undervolt their cards in order to improve power efficiency, which makes cards run cooler and at lower power.
You can’t tell from an eBay listing whether a given card was undervolted or overclocked. On average it’s far riskier than buying a gamer’s card, so they’re best avoided.
Also, undervolting isn’t always the right choice; it depends on how valuable the coin being mined is relative to energy costs. Someone mining in their dorm room, for example, may not be paying for their own electricity usage.
My understanding was that gaming cards are pushed far harder, at higher temps, with fluctuating power and thermals, which causes more issues than a single stable power limit and temperature.
Where is your understanding coming from? There is no such thing as pushing cards “far harder, at higher temps” than when mining or doing other compute tasks that run the GPU at 100%. Failure rates land squarely on the side of higher temps, and the reasons are well understood https://electronics.stackexchange.com/questions/444474/can-i...
You might be thinking of spinning disk drives rather than GPUs. A lot of people suggest that leaving an HDD powered up and spinning is better than spinning up and down frequently, due to the temperature going up and down a lot and the added wear on this mechanical device. This is completely different from a GPU though.
Higher temps are bad, but thermal cycles are equally bad or worse. Different things on the card have different thermal coefficients of expansion. Getting warm and cooling makes everything flex and stresses solder joints, wire bonds, and thermal interfaces.
Miner cards have longer, sustained high temps. This is bad for life.
Gamer cards have lots of thermal cycles. This is very bad for life.
Miner cards are more likely to be undervolted to improve power efficiency and thermals. This is good for life. (Lower peak temperature, less electromigration).
Gamer cards are more likely to be overvolted and overclocked to improve peak performance. This is very bad for life. (Higher peak temperatures, more electromigration).
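The cycling side of this is often modeled with the Coffin-Manson relation for solder fatigue, where cycles-to-failure scales as a power law of the temperature swing. A minimal sketch; the constants here are made up purely to show the shape of the curve, not measured values for any real card:

```python
# Coffin-Manson-style relation: N_f (cycles to failure) ~ C * delta_T ** (-m).
# C and m below are illustrative assumptions, not characterized parameters.

def cycles_to_failure(delta_t_c, c=1e9, m=2.0):
    """Estimated thermal cycles a solder joint survives for a given swing (deg C)."""
    return c * delta_t_c ** -m

# A 50 C swing (e.g. idle desktop -> gaming session) vs a 10 C swing:
big_swing = cycles_to_failure(50)
small_swing = cycles_to_failure(10)
print(small_swing / big_swing)  # with m=2, each 50 C cycle costs ~25x the life of a 10 C one
```

The point is that fatigue damage grows superlinearly with the size of the swing, which is why lots of deep cycles (gaming) can hurt more than one long plateau (mining).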
That’s testing for thermal cycling over wide temperature ranges or longer lifespans. GPUs are used indoors and don’t have a very long lifespan.
The major risk factor for GPUs is electromigration, which directly relates to usage. A 40-hour-a-week gamer is extremely rare, but a mining GPU is pulling 168 hours a week.
Electromigration is a small risk factor in any kind of reasonable life. Especially if not overvolted (which is something that mostly gamers do-- miners are more likely to undervolt).
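Electromigration lifetime is commonly modeled with Black's equation, where MTTF falls with current density and rises steeply as the junction cools. Everything numeric below is an illustrative assumption, not a datasheet value:

```python
import math

# Black's equation (sketch): MTTF = A * J**(-n) * exp(Ea / (k * T))
# n, Ea, and the current densities/temps are assumed for illustration only.

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def mttf_relative(j, t_kelvin, n=2.0, ea_ev=0.9):
    """Relative MTTF (arbitrary units) for current density j and junction temp T."""
    return j ** -n * math.exp(ea_ev / (K_BOLTZMANN_EV * t_kelvin))

# Undervolted card: lower current density, cooler junction (~65 C)
undervolted = mttf_relative(j=1.0, t_kelvin=338)
# Overvolted/overclocked card: more current, hotter junction (~85 C)
overclocked = mttf_relative(j=1.3, t_kelvin=358)
print(overclocked < undervolted)  # the hotter, higher-current part wears out sooner
```

This is consistent with the point above: undervolting pushes both J and T in the favorable direction, so electromigration mostly matters for overvolted parts run hot.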
Solder balls breaking from fatigue is common. I have fixed lots of GPUs by reflowing them. GPUs do cycle over a large temperature range; delta-T can be 50 C+. While maps are loading, etc., you can see delta-Ts of 25 C+ every few minutes.
This is a thermal cycling induced failure mode. (Of course, a home oven doesn't accomplish proper reflow, so this is more of a "fix things for a couple months" trick as described in the posts).
I strongly disagree. The dominant failure modes of electronics these days are:
A) solder joint failure (thermal cycling)
B) capacitor failure (sustained heat)
Electromigration is a distant, end-of-life condition-- representing only a tiny fraction of failures of non-overvolted devices in a normal use period.
As your link itself says, in the top answer:
"But then there is an important question: How much does this decrease the lifespan? Knowing this, should you make sure that your graphics card stays cool all the time? My guess is no, unless an error was made at the design stage. Circuits are designed with these worst-case situations in mind, and made such that they will survive if they are pushed to the limits for the rated lifetime of the manufacturer. "
GPUs are not mechanical parts (well, save for the fans, but those can be replaced).
I would imagine thermal stress from heating and cooling would be the biggest issue - you don't get that under constant load.
Huge difference in wear, yes. But not in the direction you think, I think. Warming up and cooling down is more damaging for a card than running at a constant temperature. It 'jiggles' parts more.