> This is correct and also increasingly affecting me as my eyes age. I had to give my Studio Display to my wife because my eyes can't focus at a reasonable distance anymore, and if I moved back further the text was too small to read.
> (I now use a combo of 4K TV 48" from ~1.5-2 metres back as well as a 4K 27" screen from 1 m away, depending on which room I want to work in. Angular resolution works out similarly (115 pixels per degree).)
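The quoted 115 pixels-per-degree figure checks out. A quick sanity check, assuming standard 16:9 4K panels and the sizes/distances from the comment (the function name is mine):

```python
import math

def pixels_per_degree(diagonal_in, h_pixels, v_pixels, distance_mm):
    """Pixels per degree of visual angle at the screen center."""
    # Physical width of a 16:9 panel from its diagonal.
    aspect = h_pixels / v_pixels
    width_in = diagonal_in * aspect / math.hypot(aspect, 1)
    pixel_pitch_mm = width_in * 25.4 / h_pixels
    # Angle subtended by one pixel, in degrees.
    deg_per_pixel = math.degrees(2 * math.atan(pixel_pitch_mm / (2 * distance_mm)))
    return 1 / deg_per_pixel

# 48" 4K TV at ~1.8 m vs. 27" 4K monitor at 1 m
tv = pixels_per_degree(48, 3840, 2160, 1800)
monitor = pixels_per_degree(27, 3840, 2160, 1000)
print(f'48" TV @ 1.8 m: {tv:.0f} ppd, 27" @ 1 m: {monitor:.0f} ppd')
```

Both setups land around 112-114 ppd, so "works out similarly (115 pixels per degree)" is about right.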
The TV is likely a healthier distance to keep your eyes focused on all day regardless, but were glasses not an option?
Glasses would have been the "normal person" fix, but my eyes are great otherwise (better than 20/20 distance vision). So I could focus closer with glasses, but the lenses were worse quality than just sitting farther back.
> While I do agree with the content, this tone of writing feels awfully similar to LLM generated posts
> Commenter's history is full of 'red flags': - "The real cost of this complexity isn't the code itself - it's onboarding" - "This resonates."
Wow, it's obvious in the full comment history. What is the purpose of this stuff? Do social marketing services maintain armies of bot accounts that build up credibility by posting normal-ish comments, so they can be called on later like sleeper cells for marketing? On Twitter I already have to scroll down to find the one human reply on many posts.
And when the bots get a bit better (or people get less lazy prompting them; I'm pretty sure I could prompt around this classic prose style), we'll have no chance of knowing what's a bot. How long until the majority of the Internet is essentially a really convincing version of r/SubredditSimulator? When I stop being able to recognize the bots, I wonder how I'll feel. They'd probably be writing genuinely helpful/funny posts, or telling a touching personal story I upvote, but it's pure bot creative writing.
> Do social marketing services maintain armies of bot accounts that build up credibility by posting normal-ish comments, so they can be called on later like sleeper cells for marketing?
Russia and Israel are known to have run full-time operations doing this for well over a decade. By Twitter's own account, 25% of users were bots back in 2015 (their peak user year). Even here on HN, if you look at the most trafficked Israel/Palestine threads, there are lots of people complaining about getting modded into oblivion, with a ghost army of moderators turning the conversation neutral/pro-Israel and silencing negative comments.
Adding high-quality binaural audio to Doom would make this even more viable. Earbuds have accelerometers, which could also be incorporated to add additional cues.
> Every time the LLM is slightly off target, ask yourself, "What could've been clarified?
Better than that, ask the LLM. Better than that, have the LLM ask itself. You do still have to make sure it doesn't go off the rails, but the LLM itself wrote this to help answer the question:
### Pattern 10: Student Pattern (Fresh Eyes)
*Concept:* Have a sub-agent read documentation/code/prompts "as a newcomer" to find gaps, contradictions, and confusion points that experts miss.
*Why it works:* Developers write with implicit knowledge they don't realize is missing. A "student" perspective catches assumptions, undefined terms, and inconsistencies.
```
Pretend you are a NEW AI agent who has never seen this codebase.
Read these docs as if encountering them for the first time:
1. CLAUDE.md
2. SUB_AGENT_QUICK_START.md
Then answer from a fresh perspective:
## Confusion Points
- What was confusing or unclear on first read?
- What terms are used without explanation?
## Contradictions
- Where do docs disagree with each other?
- What's inconsistent?
## Missing Information
- What would a new agent need to know that isn't covered?
## Recommendations
- Concrete edits to improve clarity
Be honest and critical. Include file:line references.
```
*Use cases:* Before finalizing new documentation; evaluating prompts for future agents.
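The pattern boils down to handing a sub-agent the docs with no prior context. A minimal sketch of assembling that prompt programmatically (the function and template names are my own, not from any particular agent framework):

```python
FRESH_EYES_TEMPLATE = """\
Pretend you are a NEW AI agent who has never seen this codebase.
Read these docs as if encountering them for the first time:
{doc_list}
Then answer from a fresh perspective:
## Confusion Points
## Contradictions
## Missing Information
## Recommendations
Be honest and critical. Include file:line references."""

def build_fresh_eyes_prompt(doc_paths):
    """Assemble the 'student pattern' review prompt for a sub-agent."""
    doc_list = "\n".join(f"{i}. {p}" for i, p in enumerate(doc_paths, 1))
    return FRESH_EYES_TEMPLATE.format(doc_list=doc_list)

prompt = build_fresh_eyes_prompt(["CLAUDE.md", "SUB_AGENT_QUICK_START.md"])
print(prompt)
```

The key design choice is that the sub-agent gets *only* this prompt and the listed files, so it genuinely lacks the implicit context the authors have.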
For me it's the motion clarity that I notice the most. Higher FPS is just one way to get more clarity, though; with other methods like black frame insertion, even 60 fps can feel like 240.
Set nproc_per_node=1 instead of 8 (or run the training script directly instead of using torchrun) and set device_batch_size=4 instead of 32. You may be able to use 8 with a 5090, but it didn't work on my 4090. However, it's way slower than expected; one H100 isn't 250x a 4090, so I'm not sure it's training correctly. I'll let it run overnight and see if the outputs make any sense; maybe the metrics aren't accurate in this config.
I'm running it now and I had to go down to 4 instead of 8, and that 4 is using around 22-23 GB of GPU memory. Not sure if something is wrong or if batch size only scales part of the memory requirements. (Edit: I restarted, running the training script directly instead of torchrun, and 8 still doesn't fit, but 4 is now using 16-17 GB instead.)
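Batch size only scaling part of the memory is expected: weights, gradients, and optimizer state are fixed costs, and only activations grow with batch size. A toy model of this (the GB numbers are illustrative fits to the observations above, not measured from nanochat):

```python
def gpu_mem_gb(fixed_gb, act_gb_per_sample, batch):
    """Fixed costs (weights + grads + optimizer state) plus per-sample activations."""
    return fixed_gb + act_gb_per_sample * batch

# With ~4 GB fixed and ~3 GB of activations per sample, batch 4 fits in
# ~16 GB while batch 8 would need ~28 GB -- more than a 24 GB 4090 has.
for b in (4, 8, 32):
    print(f"batch {b}: ~{gpu_mem_gb(4.0, 3.0, b):.0f} GB")
```

That shape also explains why halving the batch doesn't halve memory use.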
On my 4090 the tok/sec is 523, which is 1/2000 of the 1,000,000 tok/sec of the 8x 80GB H100s. That feels too slow, so maybe something is wrong. The 4090 has about 1/3 of the raw compute of an H100. I'm sure there are other losses from less batching, but even if it were 1/10th as fast, I'd expect something more like 1,000,000 / 10 / 8, so at least 10,000 tok/sec.
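Spelling out that back-of-envelope math (the 1/10 efficiency factor is a deliberately pessimistic assumption, not a measurement):

```python
cluster_tok_s = 1_000_000       # reported throughput for 8x H100
observed_tok_s = 523            # measured on my 4090

per_h100 = cluster_tok_s / 8    # 125,000 tok/sec per H100

# Pessimistic estimate: assume the 4090 manages only 1/10 of a single
# H100's share (covering its ~1/3 raw compute plus small-batch losses).
pessimistic_expected = cluster_tok_s / 10 / 8   # 12,500 tok/sec

shortfall = pessimistic_expected / observed_tok_s
print(f"expected >= {pessimistic_expected:.0f} tok/s, got {observed_tok_s}, "
      f"~{shortfall:.0f}x below even the pessimistic estimate")
```

Being ~24x below a pessimistic floor is what suggests a config problem rather than just hardware scaling.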
> first time I've seen such expressiveness in TTS for laughs, coughs, yelling about a fire, etc!
The old Bark TTS is noisy and often unreliable, but pretty great at coughs, throat clears, and yelling. Even dialogs... sometimes. Same Dia prompt in Bark: https://vocaroo.com/12HsMlm1NGdv
Dia sounds much clearer and more reliable; wild what 2 people can do in 3 months.
So this is a new method that simulates a CRT and genuinely reduces motion blur on any type of higher-framerate display, starting at 120 Hz, but without dimming the image like black frame insertion, which is the only current method that comes close to the clarity of a CRT. But it also simulates other aspects of CRT displays, right?
Can you use this method just to reduce blur, without reducing brightness, in any game? They mention reducing blur for many things other than retro games in "Possible Use Cases of Refresh Cycle Shaders", but does reducing blur in a flight simulator also make it visually look like a CRT with phosphors?
They do mention that it reduces brightness. The selling point compared to strobing sounds like it's less eyestrain. I'd expect it to lose more brightness than strobing, considering the lower relative pixel on-time.
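That brightness loss follows directly from duty cycle: average luminance is peak luminance times the fraction of each refresh a pixel is lit. A toy comparison (the peak value and duty cycles are assumptions for illustration, not figures from the article):

```python
def avg_brightness(peak_nits, duty_cycle):
    """Average luminance when each pixel is lit for duty_cycle of the frame."""
    return peak_nits * duty_cycle

peak = 1000  # nits, hypothetical display

# Assumed duty cycles: strobing/BFI lighting pixels 50% of the time,
# vs. a rolling CRT-style scan where each pixel's simulated phosphor
# is bright for only ~25% of the refresh.
strobing = avg_brightness(peak, 0.50)   # 500 nits
crt_sim = avg_brightness(peak, 0.25)    # 250 nits
print(f"strobing: {strobing:.0f} nits, CRT simulation: {crt_sim:.0f} nits")
```

So if the CRT simulation's effective on-time per pixel really is shorter than a strobe pulse, it should cost proportionally more brightness, as suspected above.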