An RTOS can be used a lot more loosely than you describe: as a build system, scheduler, and interrupt framework that lets you program an MCU the way you describe. Zephyr RTOS and FreeRTOS provide easy enough ways to write code that uses blocking APIs but probably runs according to your timing constraints if you hold it right. As an alternative, you could write for "bare metal" and handle the control flow, scheduling, interrupts, etc. yourself. If you are writing to "random" addresses according to some datasheet to effect some real-world change, you are probably reaching for an RTOS or bare metal unless you are writing OS drivers. If you look at the Linux drivers, you will see a lot of similarities to the Zephyr RTOS drivers, but one of them is probably clocking in at MHz while the other is at GHz.
I kinda quit using it. The tab feature is useful for minor or mundane changes, but I much prefer the Codex GUI if I am going to be relatively hands-off with agents.
AI-assisted research is a solid A already, especially if you are doing greenfield work. The horizon is only blocked by tooling that requires a GUI, and even then, that is a small enough obstruction for most researchers.
Have you been in a self-driving car? There are some quite annoying hiccups, but they are already very safe, I would say safer than the average driver. Defensive driving is the norm: I can think of many times when the car avoided other dangerous drivers or oblivious pedestrians before I realized why it was taking action.
Do you mean the new default datetime resolution of microseconds instead of the previous nanosecond resolution? Obviously this will require adjustments to any code that requires ns resolution, but I'd bet that's a tiny minority of all pandas code ever written. Do you have a particular use case in mind for the problems this will cause?
I would describe it as the huge majority, reflecting on my pandas use over the years. Pretty much all of the data worth exploring in pandas, rather than Excel, some data GUI, or Polars, involves timestamps.
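To make the resolution point concrete, here is a small sketch. It assumes pandas >= 2.0, where non-nanosecond `datetime64` dtypes became supported and explicit conversion between resolutions works via `astype`:

```python
import pandas as pd

# Parse a timestamp with microsecond detail.
s = pd.to_datetime(pd.Series(["2024-01-01 00:00:00.123456"]))

# Storage resolution is carried in the dtype; converting changes how
# finely an instant *can* be represented, not the instant itself here.
s_us = s.astype("datetime64[us]")
s_ns = s_us.astype("datetime64[ns]")

print(s_us.dtype)  # datetime64[us]
print(s_ns.dtype)  # datetime64[ns]
assert s_us.iloc[0] == s_ns.iloc[0]  # same instant, different storage
```

Code that genuinely needs nanosecond precision can convert back explicitly like this; the breakage risk is in code that silently assumed ns-resolution integer representations under the hood.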
Rust's compiler has six of what it calls "lint levels", two of which can't be overridden by compiler flags. These handle all of its diagnostics and are also commonly used by linters like Clippy.
Allow and Expect are levels where it's OK that a diagnostic happened, but with Expect, if the expected diagnostic was not emitted, we get another diagnostic telling us that our expectation wasn't fulfilled.
Warn and Force-warn are levels where we get a warning but compilation produces an executable anyway. Force-warn is the level where you can't tell the compiler not to emit the warning.
Deny and Forbid are levels where the diagnostic is reported and compilation fails, so we do not get an executable. Forbid, like Force-warn, cannot be overridden with compiler flags.
SIMD was one I thought we needed. Then I started benchmarking using iter with chunks and a nested if statement to check the chunk size. If it was necessary to do more, it was typically time to drop down to asm rather than worry about another layer between the code and the machine.
I think you misinterpreted GP; he's saying that with some hints (explicit chunking with a branch on the chunk size), the compiler's auto-vectorization can handle the rest, generating SIMD instructions in a manner that's 'good enough'.