I had hoped for some more basic stuff. I've been struggling for 2 months now to implement a fast line draw with width for an embedded CPU. It only has a framebuffer, no GPU.
This book is quite old. I would argue that Real-Time Rendering, 4th edition, is the better book. Bonus points if you pair it with some online resources to get a deeper understanding of the topics (the textbook contains follow-up material for all discussed topics).
"Basic" is a relative term. Modern GPUs do not work the same way memory-mapped graphics do, and working with them is different at a fundamental level.
You are probably better off searching for old graphics programming books from the 90s. The code they contain likely won't work, but the algorithms should be what you're looking for, and they shouldn't be hard to adapt.
It's non-trivial, though not that hard. Have you asked an LLM?
It depends on your needs:
* You can compute a rectangle by expanding a line perpendicular to its direction.
The problem with this is you'll get gaps between two lines that are supposed to be connected. You can solve that by connecting the corners of the rectangles, but once you do that you're no longer drawing rectangles. You might have to write a simple triangle rasterizer, or a scanline rasterizer.
* You can "drag a brush". You compute a single line, then at each pixel draw a sprite/circle/rectangle around that pixel. That's slow because you'll draw every pixel more than once, but it will work and might be fast enough.

This has the issue that the ends will look different unless your brush is round. If that's OK, then it works.
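The brush idea as a C sketch, again with the framebuffer and function names made up for illustration: standard integer Bresenham, with a filled disc stamped at every step.

```c
#include <stdlib.h>

#define W 64
#define H 64
static unsigned char fb[H][W];   /* toy framebuffer */

/* Stamp a filled disc of radius r around (cx, cy), clipped to the buffer. */
static void stamp(int cx, int cy, int r) {
    for (int dy = -r; dy <= r; dy++)
        for (int dx = -r; dx <= r; dx++)
            if (dx * dx + dy * dy <= r * r) {
                int x = cx + dx, y = cy + dy;
                if (x >= 0 && x < W && y >= 0 && y < H)
                    fb[y][x] = 1;
            }
}

/* Standard integer Bresenham; the brush is stamped at every step,
   so interior pixels are written many times over. */
void brush_line(int x0, int y0, int x1, int y1, int r) {
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        stamp(x0, y0, r);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```

An easy speedup, if the overdraw turns out to matter, is to stamp the full disc only at the endpoints and, per Bresenham step, draw only the row or column of disc pixels that the step newly exposes.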
All of these are things you can ask Gemini, ChatGPT, or Claude about, and they'll spit out an example in the language of your choice.
I personally would shy away from binary formats whenever possible. For my column-based files I use TSV, or the pipe character as delimiter. Even Excel accepts such files if you include "sep=|" as the first line.
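For what it's worth, producing such a file is a few lines of C; the file name, column names, and values here are made up:

```c
#include <stdio.h>

/* Write a pipe-delimited table.  The "sep=|" first line tells Excel
   which delimiter to use when it opens the file directly. */
int write_table(const char *path) {
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fputs("sep=|\n", f);
    fputs("id|name|score\n", f);
    fprintf(f, "%d|%s|%d\n", 1, "alice", 90);
    fprintf(f, "%d|%s|%d\n", 2, "bob", 85);
    return fclose(f);
}
```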
I reimplemented my grandma in Rust. She was a real safety and security hazard to herself and her surroundings: she forgot things and made unsound memory assumptions. Took me about 3 days of vibe coding with Claude Code and was a real fun time. Now my grandma no longer leaks anything and has some new command-line switches. To be fair, I know best how to implement grandmas, and everybody should use my grandma from now on. If this breaks your scripts, just adapt. Sure, this was very cynical, but I'm so tired of reading every week about some new pet project where Rust is seen as a messiah. It is a new language; it makes getting memory right easier. It is like the new Visual Basic.
Years ago I bought a device because I wanted to use it. It had cool features or provided a benefit to me. Today if I buy a device (just any device), I decide which device will be the least annoying one. Often this ends in not buying a device and using an old but reliable way of doing things.
After reading the comments here, it boils down to: my language is better than yours. mmap is not a feature of C. Some more modern languages try to prevent people from shooting themselves in the foot and only allow byte-wise access to such mmapped regions. They have a point doing this, but on the other hand the C users also have a valid point. Safety and speed are two factors you have to weigh when choosing your tools. From a hardware point of view C might be more direct, but it also lets you make "stupid" errors fast. More modern languages prevent the "stupid" errors but make you copy or transform the data more. As Scotty from the Enterprise once said: always use the fitting tool.
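To make the trade-off concrete, a small POSIX sketch (struct layout, file name, and function names are illustrative): C lets you view the mapped bytes in place as a struct, which is fast but only sound if the file really has that layout, alignment, and endianness; the "safe language" alternative is essentially copying the bytes out field by field with validation.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct record { uint32_t id; uint32_t value; };

/* Helper so the example is self-contained: write one record to disk. */
int write_record(const char *path, uint32_t id, uint32_t value) {
    struct record r = { id, value };
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fwrite(&r, sizeof r, 1, f);
    return fclose(f);
}

/* The "C way": map the file and reinterpret the bytes in place. */
uint32_t read_value_mmap(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return 0;
    void *p = mmap(NULL, sizeof(struct record), PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (p == MAP_FAILED) return 0;
    /* No copy, no validation: fast, and exactly where the "stupid"
       errors come from if the file does not match the struct. */
    uint32_t v = ((const struct record *)p)->value;
    munmap(p, sizeof(struct record));
    return v;
}
```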
Single threading is easy and hard at the same time. I program MCUs with only one core and no real hardware support for preemptive multitasking. I sometimes have to resort to interrupts to get something like multitasking, but on the other hand my code runs exactly as I wrote it. It makes you think more about the problem. I see many programs nowadays just throwing threads, coroutines, and memory at problems until the speed is acceptable. Sorry for my English, I'm not a native speaker, and if I use AI to improve the wording I get complaints about using AI...
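The interrupt trick alluded to here is commonly the "ISR sets a counter, the superloop drains it" pattern; a hedged host-side sketch (no real vector table involved, `timer_isr` is just called by hand, and the names are mine):

```c
#include <stdint.h>

/* On the MCU this counter is bumped by a timer interrupt; `volatile`
   stops the compiler from caching it across superloop iterations. */
static volatile uint32_t tick_pending = 0;
static uint32_t ticks_handled = 0;

void timer_isr(void)        /* would be registered in the vector table */
{
    tick_pending++;
}

void main_loop_step(void)   /* called repeatedly from the endless superloop */
{
    while (tick_pending) {
        /* On real hardware, decrement under a brief interrupt-disable
           to avoid racing the ISR. */
        tick_pending--;
        ticks_handled++;    /* do the actual periodic work here */
    }
}
```

The appeal is exactly what the comment says: the ISR does the bare minimum, and all the real work runs in code whose order you can read straight off the main loop.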
I've read that the first(?) preemptive multitasking was implemented in the Apollo lander, to leave more processing power for the more critical sensors. No one thought of it in such general terms, though.