I get the overwhelming feeling that the author was convinced by an AI that he had a good idea without ever contacting an actual graphics programmer.
"Retro Game Engine owns the full frame lifecycle." - This is completely meaningless. Your engine controls whatever data is in the buffer that's sent to scanout, but the operating system, the GPU drivers, the scanout hardware in the GPU, and the display's own input processing and row/column drivers control everything else.
The only actual screenshots this guy has are some "multiply the image with the subpixel mask" demos that... don't look anything like a real CRT, and certainly nowhere near modern CRT shaders like CRT Royale.
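For anyone who hasn't seen this technique, here is a minimal sketch of the naive "multiply by a subpixel mask" approach being criticized. All names and mask values are my own illustration, not the engine's actual code:

```python
def apply_subpixel_mask(image, mask_period=3):
    """Tile an RGB triad mask across the image and multiply per channel.

    image: list of rows of (r, g, b) tuples, values in 0.0-1.0.
    """
    # Crude aperture-grille style mask: each column mostly passes one channel.
    triad = [(1.0, 0.2, 0.2), (0.2, 1.0, 0.2), (0.2, 0.2, 1.0)]
    out = []
    for row in image:
        out_row = []
        for x, (r, g, b) in enumerate(row):
            mr, mg, mb = triad[x % mask_period]
            out_row.append((r * mr, g * mg, b * mb))
        out.append(out_row)
    return out

# A row of pure white: after masking, average brightness drops to ~0.47 of
# the original. A bare mask multiply dims and flattens the image, which is
# part of why it doesn't resemble a real CRT; shaders like CRT Royale also
# model scanlines, bloom, phosphor falloff, and gamma-correct blending.
white = [[(1.0, 1.0, 1.0)] * 6]
masked = apply_subpixel_mask(white)
```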
The rest of the posts on the Substack page are similarly devoid of actual content, but very heavy on the AI woo-woo, this-is-important-and-deep stylings that I've come to find nauseating.
Hey Author, if you can see this - You're clearly a smart guy, but you need a basic grounding in 3D rendering if you're gonna do weird stuff - more than an AI can give you. In particular, the phrase "Light is linear" will be useful to you. Good luck.
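To make the "light is linear" hint concrete, here is a toy sketch (my own function names, standard sRGB transfer curves): averaging two sRGB-encoded values directly gives a different, darker result than averaging in linear light and re-encoding.

```python
def srgb_to_linear(c):
    """sRGB electro-optical transfer function (per IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse transfer function: linear light back to sRGB encoding."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0
# Naive blend directly on the encoded values (what most "just multiply the
# framebuffer" effects do implicitly):
naive = (black + white) / 2
# Physically sensible blend: decode to linear light, average, re-encode.
correct = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)
# naive is 0.5, correct is ~0.735: gamma-space blending darkens everything,
# which is exactly what ruins naive scanline and mask effects.
```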
It was likely almost entirely AI-generated, but there are two oddities:
- MIT — Copyright (c) 2017-present Armagan Amcalar: It would be an interesting bout of hubris to give yourself a copyright that predates the beginning of the project by 9 years.
- The README lists sizes as "kb" rather than "KB": I find it odd that it would get the units wrong unless it was specifically instructed to do so.
Hi, the author here. Gea is built upon erste and regie, which I released in 2017 and 2019. Hence the copyright. I thought a lot about this, and also considered releasing it as a new version of erste, but it would be too big of a change and re-imagination, and I didn't want to introduce the kind of dichotomy and stress that Angular 2 introduced.
Hi, the author of Gea here. Gea is built upon principles developed over 11 years, across all of the UI frameworks I have authored so far, the most recent of which are erste and regie from 2017-2019. Gea itself was written over six months, the history of which is compressed into the initial commit, as I don't believe in populating GitHub history with crappy commits that alter design decisions and introduce breaking changes. You can imagine Gea was closed source before the 1.0.0 launch and was only made open at the end.
Not the author of Gea, but from what I can see in the README, the ideas go back to 2017; erste.js and regie were earlier versions of the same concept.
Focusing on micro-"optimizations" like this one does absolutely nothing for performance (how many times are you actually calling Instance() per frame?) and skips over the absolutely-mandatory PROFILE BEFORE YOU OPTIMIZE rule.
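The rule is cheap to follow: measure the call you suspect is hot before rewriting it. A minimal sketch with the standard library, using a hypothetical Instance()-style accessor as the target:

```python
import timeit

class Engine:
    """Hypothetical lazily-initialized singleton, stand-in for Instance()."""
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

# Time the accessor at a realistic call volume: e.g. 100 calls per frame
# over 1000 frames. If the total is microseconds, "optimizing" it is noise
# next to rendering or I/O -- let the measurement, not intuition, decide.
calls_per_frame = 100
t = timeit.timeit(lambda: Engine.instance(), number=calls_per_frame * 1000)
```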
If a coworker asked me to review this CL, my comment would be "Why are you wasting both my time and yours?"
> If a coworker asked me to review this CL, my comment would be "Why are you wasting both my time and yours?"
If a coworker submitted a patch to existing code, I'd be right there with you. If they submitted new code, and it just so happened to be using this more optimal strategy, I wouldn't think twice before accepting it.
Among C++ programmers, "best performance" isn't about actual optimization; it's a brag with about the same semantic value as on Broadway, and so it's not measurable. The woman who wins a Tony for "Best Performance in a Musical" isn't measurably faster or smaller; she's just "best" according to some panel. Did audiences like it? Did the musical make money? Doesn't matter, she won a "Best Performance" Tony.
Only those that have C++ tattoos and whole careers bound to mastering a single language, selling that knowledge.
The rest of us use it as a tool required to integrate with existing products, language runtimes, and SDKs written in C++, which most likely won't be replaced anytime soon.
Getting into the weeds of what a compiler does with your code is fun.
People have been doing micro optimisations since computers became a thing, you benefit from them every day without realising - and seemingly not appreciating - it.
OpenTitan _is_ a microcontroller, just one with a _lot_ of security hardware (and security proofs).
It's intended to be integrated into a larger SoC and used for things like secure boot, though you could certainly fab it with its own RAM and GPIO and use it standalone.
Much worse - in my experience, minimax is not suitable for high autonomy on hard projects. The real distant second in my experience is mimo flash v2 (but I did not try the latest version; it might be closer to parity). I would not use minimax for serious work.
StepFun 3.5 Flash is better compared to Google's Gemini 3 Flash, which is surprisingly good and pretty costly, and to GLM-5.
I find this outcome ironic given minimax's more aggressive marketing and the large-scale distillation accusations from Anthropic, which specifically named minimax but not StepFun.
I can only wonder about the true underlying reasons, but deducing from public information I suspect that minimax simply has weaker, benchmaxx-targeting post-training R&D and leans more on distillation of western frontier models, while StepFun has extensive post-training with lots of hard-won custom R&D and internal large-scale distillation teachers.
I don't think it's strictly better than GLM 5, more like they are peers (but in math competitions StepFun is stronger than most), and in my experience have similar coding/bugfix ceiling where world knowledge is not the deciding factor. But I didn't test GLM 5 for more than 30 hours, and my agentic harness (opencode) might be suboptimal - I'm open to the idea that GLM 5 with the right agentic harness is ready for ultra-long autonomy, but I have yet to see it myself.
Where GLM 5 is strictly worse for me though, compared to StepFun, is long-form content generation (planning, research documents) - but this can be said about geminis too and these are obviously very smart models.
Given the free option I'd explore GLM 5 more, but if I had to pay for it myself ofc I'd choose stepfun every time. Basically I think right now the optimal configuration for maximizing output of correct software features per dollar involves using StepFun or its future class competitor for bulk coding and first stage code review.
Maybe I need to write a blogpost about it after all.
I tried them both out with a task of creating a todo-like web app (you can use the chat interface for GLM 5 for free if there's capacity). GLM 5 ended up with a working version. Sadly StepFun didn't quite function right. The main issue was that it ended up putting everything that should be in different columns into a single one. I didn't prompt it further to fix it, but it seems relatively capable. I think it beat what the big Qwen model came up with.
What's really surprising to me is the cost of the model. It's definitely very good for its price. DeepSeek is the only one that offers any competition at that price point (GLM 5 is literally 10x more expensive).
I don't quite understand why the author is doing special handling for PSG versus PCM audio.
My Game Boy emulator generates one "audio sample" per clock tick (which is ~1 MHz, so massive 'oversampling'), decimates that signal down to ~100 ksamples/sec, then uses a low-pass biquad filter or two to go down to 16-bit / 48 kHz and remove beyond-Nyquist frequencies. It doesn't have any of the "muffling" properties this guy is seeing, aside from those literally caused by the low-pass.
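The pipeline above can be sketched in a few lines. This is my own illustration of the decimate-then-filter idea using a standard RBJ audio EQ cookbook low-pass biquad; the rates and cutoff are illustrative, not the commenter's exact values:

```python
import math

def biquad_lowpass(samples, fs, cutoff, q=0.7071):
    """Direct Form I biquad low-pass (RBJ audio EQ cookbook coefficients)."""
    w0 = 2 * math.pi * cutoff / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    b0, b1, b2 = (1 - cw) / 2, 1 - cw, (1 - cw) / 2
    a0, a1, a2 = 1 + alpha, -2 * cw, 1 - alpha
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Stand-in for the ~1 MHz one-sample-per-clock-tick stream: a square wave.
clock = [1.0 if (i // 500) % 2 == 0 else -1.0 for i in range(20000)]
# Decimate to ~100 ksamples/sec by keeping every 10th sample, then low-pass
# below the 48 kHz output's Nyquist frequency before the final resample.
decimated = clock[::10]
filtered = biquad_lowpass(decimated, fs=100_000, cutoff=20_000)
```

Any "muffling" here comes only from the cutoff you choose, which is the commenter's point.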
This article is taking some liberties with the word "prodigy" that I disagree with.
If the way you nurture a talented student is via "intense drilling", I would argue that the student is not a prodigy in the traditional sense, but a talented and determined student who may or may not be dealing with parental pressure.
The actual prodigies I've known absorb information and gain skills without significant effort - I knew someone who enrolled in a calculus class, skimmed through the book in a week or so, and then would only show up to class for tests (which they would ace).
So the article's conclusion doesn't surprise me: inflict relentless training on a young talented person and yeah, maybe they won't want to do that as an adult.
But as far as actual "prodigies"? There is no burn-out because there is no (or minimal) effort. The choice of whether to stick with an area of interest through adulthood is more of a personal preference than anything ingrained.