Studies on line length are all over the place; I think the only consistent conclusion is that shorter lines (between 30 and 60 cpl) are beneficial for readers with poor literacy skills (including non-native speakers) and for people with dyslexia or certain visual disorders.
I bring this up because recent studies have managed to account for the saccade error associated with return sweeps, and have found that reducing that error does not increase reading speed or comprehension.

https://link.springer.com/article/10.3758/s13414-019-01742-3

It's also worth noting that studies done on print line length aren't applicable to screen line length, and that most studies deal with lines of 30-100 cpl, sometimes up to 130 cpl, which isn't enough to cover most websites rendered on a 16:9 screen, or even the current Wikipedia design (on my 1080p monitor, at least).
Would it be better if memorisation could be achieved through distributed practice, vocabulary learned through reading and listening, and spaced repetition delivered through well-written textbooks, curricula, mentorship, etc.? Absolutely, but these factors aren't always a given and are often outside of one's own control. If failing to keep up good Anki hygiene is counted in the ROI calculations, these issues should be included too.

Maybe your textbooks just don't distribute problems well enough, the curriculum changes drastically such that what you've just learned isn't touched on again before you forget it, or maybe you just don't have the time or material to consistently practice foreign languages for 1-2 hours a day. In these cases, the 10-20 minutes spent reviewing cards serve as a useful crutch, preventing that information from being forgotten entirely, which would undo all the time spent learning it in the first place.
I find that Anki, by showing cards at certain (optimised) intervals, delivers distributed practice.

Of the effective learning techniques evaluated in this paper, "Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology" [1] (scroll to Table 4 for the results), Anki delivers both distributed practice and practice testing in a very natural way.
If you have 20 minutes a day to use Anki for foreign-language memorization, then you have 20 minutes a day to consume foreign-language media. You just swap one activity for the other. You don't need perfectly designed media for that; you need any media slightly harder than your level, and the internet has plenty.

That is exactly the setup where Anki makes less sense.

Anki makes great sense when you have a lot of time for foreign-language activities and are spending a portion of it on Anki vocabulary. You do it in addition to the other activities.
It's not quite the same out-of-the-box experience you get on Windows (after making sure you have the latest chipset drivers, use the balanced power plan and NOT the performance plan, have Xbox Game Bar installed...), but it's possible to force certain processes and executables to run on specific CPU cores in Linux using taskset, and to verify that they're running on the correct cores. It will be a little tedious if you play a lot of games on Steam, as you'll inevitably run into games that use Proton and others that use native binaries, but there's nothing that should prevent you from manually scheduling games/programs that need the extra cache onto the cores that have it.
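As a minimal sketch, assuming the cores with the extra cache are 0-7 (the core numbering here is an assumption, and "game" is a placeholder; check your CPU's topology, e.g. with lscpu, to confirm which cores carry the V-Cache):

    # launch a (hypothetical) native game pinned to cores 0-7
    taskset -c 0-7 ./game

    # for Steam, the same thing via a game's launch options:
    #   taskset -c 0-7 %command%

    # verify which cores a running process is allowed on
    taskset -cp "$(pidof game)"

    # or move an already-running process onto those cores
    taskset -cp 0-7 "$(pidof game)"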
What about a VM with GPU passthrough? Let's say I have plenty of storage and just create different VMs, each with a selected distro or winXI, that hold certain games. Could that also be an option? Just drag and drop the appropriate game into the VM's storage?
Yes, for GPU passthrough you usually assign specific cores to the VM and isolate them (to reduce hiccups caused by other tasks using those cores), so in this case you'd just assign the cores with the extra cache.
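With libvirt, for example, the pinning is a few lines of cputune in the domain XML. A minimal sketch, assuming a 4-vCPU guest and that host cores 0-3 are the ones with the extra cache (both assumptions; adjust for your topology):

    <vcpu placement='static'>4</vcpu>
    <cputune>
      <!-- pin each vCPU to one of the (assumed) V-Cache cores -->
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
      <vcpupin vcpu='2' cpuset='2'/>
      <vcpupin vcpu='3' cpuset='3'/>
      <!-- keep the emulator threads off those cores -->
      <emulatorpin cpuset='4-5'/>
    </cputune>

The actual isolation (keeping other host tasks off those cores) is done separately, e.g. with the isolcpus kernel parameter or cgroup cpusets.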
Genes, climate, and the types of local foods play a role too, but the widespread adoption of agriculture made diets rich in carbs and starchy foods possible, which isn't great for dental health, and the somewhat recent trend (the last ~200 years) of highly processed foods that are softer and easier to chew means there are fewer mechanisms to deal with tooth overcrowding. This has been observed in children from hunter-gatherer societies who, raised on modern diets, have worse dental health than their parents.
It's molar concentration; the study doesn't really mention it directly, of course. 1 mM is probably high, since they're looking at what concentration of tobacco causes DNA damage in, e.g., the oral cavity, but the authors don't consider it unreasonable. Other studies in animals used up to 4 mM.

The full study mentions that other studies found nasal snuff produced the highest plasma nicotine concentration, at 0.8 µM, with speculation that a large (emphasis mine) pinch would produce mM concentrations in the nasal cavity (1 mM being 1,000 µM), right on target for the observed DNA damage. Yet incidents of nasal cancer among Western nasal snuff users are practically unheard of. Similarly, rates of oral cancer among Swedish snus users are so low that several studies have now failed to find any association with oral cancer.

This says nothing about rates of cancer at other sites, of course (pancreatic being a big one), or the other health risks that are still associated with smokeless tobacco (stroke, cardiovascular disease, impotence), but it's somewhat telling that even anti-nicotine advocacy groups rarely mention the risk of cancer from smokeless tobacco products in their publications.
There has been a social stigma against breathing through your mouth in the UK for as long as I can remember, with it being associated with negative qualities like being undisciplined, unathletic, or dimwitted, along with being considered an impolite thing to do.

I doubt this was part of the curriculum, but in fitness classes at school we were told that breathing through your nose with your tongue pressed against the roof of your mouth helps develop your face/jaw, and that breathing through your nose moistens the air and helps filter it. We were also told that it's not necessary to breathe through your mouth when you're out of breath, as you won't get (and don't need) more oxygen that way and you risk hyperventilating. Instead, we were told to breathe using the diaphragm to help mix the air in the lungs, and to hold the breath for about as long as the inhale/exhale if you can help it. This sort of breathing exercise seems pretty common in any fitness/exercise-related guide I've seen, too.

As far as other countries/cultures go, any country that has a strong yoga/meditative culture likely considers breathing through your mouth bad too, but this is just speculation on my part. Pranayama, for instance, is thousands of years old and is primarily concerned with controlling and reducing your breathing, which is hard to do if you're not breathing through the nose.
As per usual, Ken is being pretty disingenuous here, even ignoring the hyperbole about needing a 175 MP digital sensor to 'capture the same information'.

Velvia 50 might resolve 160 lines/mm (I'm not sure if this is line pairs/mm or lines/mm), but that's at a contrast ratio of 1000:1; at a contrast ratio of 1.6:1 (more typical of normal shooting conditions) you're down to 80 lines/mm, which is 22.1 MP, and that's for just the film stock under optimal conditions. When you factor in real-life shooting conditions (the lens's MTF, the aperture you were shooting at, whether you were on a stable tripod, dust, grain, noise, etc.), that number only goes down.

Conventional wisdom has been that you can get good results scanning colour film at 6-12 MP, and this tracks with the results people got when they started comparing the first DSLRs to 35mm film: something like the full-frame 11 MP Canon EOS-1Ds from 20 years ago, with a mere 25 line pairs/mm (including losses from the Bayer and AA filters), compares favourably(*) to medium format film, with 6-8 MP DX-format DSLRs from the same era comparing favourably to 35mm. B&W film can be scanned at a higher resolution, but still, 87 MP is a theoretical maximum, not a practical one.
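As a quick sanity check of those megapixel figures, here's the arithmetic, assuming the resolution numbers are line pairs/mm sampled at two pixels per line pair over a 36x24 mm frame (that interpretation is my assumption):

    # back-of-the-envelope film "megapixels" from lp/mm figures
    FRAME_W_MM, FRAME_H_MM = 36, 24  # standard 35mm frame

    def megapixels(lp_per_mm):
        px_per_mm = 2 * lp_per_mm  # Nyquist: two pixels per line pair
        return (FRAME_W_MM * px_per_mm) * (FRAME_H_MM * px_per_mm) / 1e6

    print(megapixels(80))   # ~22.1 MP at the 1.6:1 contrast ratio
    print(megapixels(160))  # ~88.5 MP at 1000:1, near the ~87 MP theoretical ceiling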
They're not 'officially' supported like you say, but Navi21 cards (6800-6950xt) have undergone the same QA validation as the officially supported pro cards.
Screen tearing in X11 is a tricky one, since there are a dozen ways to deal with it and ten times as many ways for it to go wrong, which is just enough uncertainty that you can never be sure if it's supposed to be that way. I know I've spent hours trying to deal with screen tearing by fiddling with desktop compositors that are supposed to prevent tearing out of the box, only to fall back to the trusty AMDGPU/Radeon TearFree setting, which has always just worked for me.
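For reference, that setting is a few lines of xorg.conf. A minimal sketch for the amdgpu driver (the Identifier string is arbitrary, and the file path is just a common convention):

    # /etc/X11/xorg.conf.d/20-amdgpu.conf
    Section "Device"
        Identifier "AMD Graphics"
        Driver "amdgpu"
        Option "TearFree" "true"
    EndSection

It can also be toggled per output at runtime with xrandr, e.g. xrandr --output DP-1 --set TearFree on (the output name is an example).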