The Nightmare Course [1] (so named because someone with that skill set, developing zero-days, is a nightmare for security, not because the course itself is a nightmare) and Roppers Academy [2] are both good for learning how to reverse engineer software and look for vulnerabilities.
The Nightmare Course explicitly talks about how to use Ghidra.
The first is certainly interesting, but it won't help you develop 0day. I would think of it as more of a collection of fun puzzles and esoterica. For example, all the heap unlinking/metadata attacks and House of X stuff is pretty antiquated. These will help you win CTFs but are certainly not a prerequisite for, or even all that relevant to, contemporary vuln research. Most of the public research I see is probably at least a year behind the current meta (and I expect the public internet will only grow more quiet over time).
It’s surprising to me how much people seem to want async in low level languages. Async is very nice in Go, but the reason I reach for a language like Zig is to explicitly control those things. I’m happily writing a Zig project right now using libxev as my io_uring abstraction.
Using async in low level languages goes all the way back to the 1960s; it became common in systems languages like Solo Pascal and Modula-2, and Dr. Dobb's and The C/C++ Users Journal ran plenty of articles on C extensions for similar purposes.
When I look at the historical cases, they seem different from the situation today. If I’m a programmer in the 60s wanting async in my “low level language,” what I actually want is to make some of the highest level languages available at the time even more high level in their IO abstractions. As I understand it, C was a high-level language when it was invented, as opposed to assembly with macros. People wanting to add async were extending the state of the art for high level abstraction.
A language doing it today is doing it in the context of an ecosystem where even higher level languages exist and they have made the choice to target a lower level of abstraction.
People are putting C++20 coroutines to good use in embedded systems, in scenarios where the language runtime, with some additional glue layers, is the OS for all practical purposes.
And on GPGPUs as well, e.g. senders/receivers, whose main sponsor is NVIDIA.
Apparently there are lots of people who signed up just to check it out but never actually added a mechanism to get paid, signaling no intent to actually be "hired" on the service.
I've actually been experimenting with that lately. I did a really naive version that tokenizes the input, feeds the max context window up to the token being encoded into an LLM, uses that to produce a distribution of likely next tokens, and then encodes the actual token with Huffman coding using the LLM's estimated distribution. I could almost certainly get better results with arithmetic coding.
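Roughly, a minimal, unoptimized sketch of that pipeline, assuming the Hugging Face transformers GPT-2 model and tokenizer (the names and structure here are illustrative, not my actual code):

  import heapq, itertools
  import torch
  from transformers import GPT2LMHeadModel, GPT2TokenizerFast

  tok = GPT2TokenizerFast.from_pretrained("gpt2")
  model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

  def huffman_code(probs):
      # Build a prefix code over the whole vocabulary for one position.
      # Grossly inefficient (rebuilt per token), but it shows the idea.
      tie = itertools.count()  # tie-breaker so heapq never compares the dicts
      heap = [(p, next(tie), {i: ""}) for i, p in enumerate(probs)]
      heapq.heapify(heap)
      while len(heap) > 1:
          p0, _, c0 = heapq.heappop(heap)
          p1, _, c1 = heapq.heappop(heap)
          merged = {i: "0" + c for i, c in c0.items()}
          merged.update({i: "1" + c for i, c in c1.items()})
          heapq.heappush(heap, (p0 + p1, next(tie), merged))
      return heap[0][2]

  def compress(text, window=512):
      ids = tok.encode(text)
      bits = []
      for pos in range(len(ids)):
          context = ids[max(0, pos - window):pos] or [tok.eos_token_id]
          with torch.no_grad():
              logits = model(torch.tensor([context])).logits[0, -1]
          probs = torch.softmax(logits, dim=-1).tolist()
          bits.append(huffman_code(probs)[ids[pos]])  # code for the real token
      return "".join(bits)

(The decoder has to replay the same model over the already-decoded tokens to rebuild the identical code at each position.)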
The approach outperforms zstd by a long shot (I haven't dedicated the compute horsepower to figuring out what "a long shot" means quantitatively, with reasonably small confidence intervals) on natural language, like Wikipedia articles or markdown documents, but (using GPT-2) it's about as good as or worse than zstd on things like files in the Kubernetes source repository.
You already get a significant amount of compression just out of the tokenization in some cases ("The quick red fox jumps over the lazy brown dog." encodes to one token per word, plus one token for the '.', with the GPT-2 tokenizer), whereas with code a lot of your tokens will represent just a single character, so the entropy coding is doing all the work. That means your compression is only as good as the accuracy of your LLM plus the efficiency of your entropy coding.
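Easy to check the tokenization claim yourself, again assuming the Hugging Face GPT-2 tokenizer (the exact count is whatever the BPE produces):

  from transformers import GPT2TokenizerFast

  tok = GPT2TokenizerFast.from_pretrained("gpt2")
  sentence = "The quick red fox jumps over the lazy brown dog."
  ids = tok.encode(sentence)
  print(len(ids))                        # roughly one token per word, plus '.'
  print(tok.convert_ids_to_tokens(ids))  # inspect how the BPE split it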
I would need to be encoding multiple tokens per codeword with Huffman coding to hit the entropy bounds, since Huffman spends a minimum of one bit per symbol; so if tokens are mostly just one byte, I can't do better than a 12.5% compression ratio with one token per codeword. And doing otherwise gets computationally infeasible very fast. Arithmetic coding would do much better, especially on code, because it can encode a token with fractional bits.
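To make the fractional-bits point concrete (toy numbers for illustration, not from the experiment): the ideal cost of a token is -log2(p), but Huffman never spends less than one whole bit on it.

  import math

  for p in (0.99, 0.5, 0.1):
      print(p, -math.log2(p))  # ~0.0145, 1.0, ~3.32 bits under arithmetic coding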
I used Huffman coding for my first attempt because it's easier to implement, and most arithmetic coding libraries don't support dynamically updating the distribution throughout the process.
Even that functions as a sort of proof of work, requiring a commitment of compute resources that is table stakes for individual users but multiplies the cost of making millions of requests.
It’s gotten easier of late because Bazel modules are nice and Gazelle has started supporting plugins, so it can do build file generation for other languages.
I don’t like generative AI for rote tasks like this, but I’ve had good luck using generative AI to write deterministic code generators that I can commit to a project and reuse.
I’m not a fan of generative AI for this use case because it’s rote enough to do deterministically, and deterministic code generation is getting better and better.
Gazelle, the BUILD file generator for Go, now supports plugins and several other languages have Gazelle plugins.
I’ve used AI to generate BUILD file generators before, though. I had good luck getting it to write a script that would analyze a Java project with circular dependencies and aggregate the cycle participants into a single target.
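The core of that kind of script is just cycle detection over the dependency graph. A hedged sketch of the idea (the target names, input format, and the hand-rolled Tarjan SCC pass are all illustrative assumptions, not the actual script):

  def tarjan_sccs(graph):
      # Standard Tarjan's algorithm: returns the strongly connected
      # components, i.e. the groups of targets that form dependency cycles.
      index, lowlink, on_stack, stack = {}, {}, set(), []
      sccs, counter = [], [0]

      def visit(v):
          index[v] = lowlink[v] = counter[0]; counter[0] += 1
          stack.append(v); on_stack.add(v)
          for w in graph.get(v, ()):
              if w not in index:
                  visit(w)
                  lowlink[v] = min(lowlink[v], lowlink[w])
              elif w in on_stack:
                  lowlink[v] = min(lowlink[v], index[w])
          if lowlink[v] == index[v]:  # v is the root of an SCC
              scc = []
              while True:
                  w = stack.pop(); on_stack.discard(w); scc.append(w)
                  if w == v:
                      break
              sccs.append(scc)

      for v in graph:
          if v not in index:
              visit(v)
      return sccs

  deps = {  # toy dependency graph with one cycle: a -> b -> c -> a
      "//java/a": ["//java/b"],
      "//java/b": ["//java/c"],
      "//java/c": ["//java/a"],
      "//java/d": ["//java/a"],
  }
  for scc in tarjan_sccs(deps):
      if len(scc) > 1:
          print("merge into one target:", sorted(scc))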
I’ve been experimenting with this today. I still don’t think AI is a very good use of my programming time… but it’s a pretty good use of my non-programming time.
I ran OpenCode with some 30B local models today and it got some useful stuff done while I was doing my budget, folding laundry, etc.
It’s less likely to “one shot” a task, apples to apples, compared to the big cloud models; Gemini 3 Pro can one-shot reasonably complex coding problems through the chat interface. But through the agent interface, where it can run tests, linters, etc., it does a pretty good job for the size of task I find reasonable to outsource to AI.
This is with a high end but not specifically AI-focused desktop that I mostly built with VMs, code compilation tasks, and gaming in mind some three years ago.
Even with the ellipsized link I knew you were talking about one of a few things because the link shows up as `:visited` for me... had to be either BigTable, MapReduce, or Spanner. All good reads.
There's some real science there, for a couple of reasons. Protein is a macronutrient you can be malnourished in if you don't get enough of it, even if you eat enough calories and the right micronutrients. And if most of your calories are from protein, you're probably not getting as many "burnable" calories as you think you are, because (1) the protein you need to meet your daily protein requirements never enters the citric acid cycle to be oxidized for ATP regeneration, (2) protein is the macronutrient that feels the most filling, and (3) excess protein that goes to the liver to be converted into carbs loses around 30% of its net usable calories to the energy required for that conversion (100 excess kcal of protein nets you roughly 70 kcal).
The way we count calories is based on how many calories are in a meal versus in the resulting scat, and that just isn't an accurate representation of how the body processes protein. A protein-heavy diet doesn't have as many calories as you probably think it does, which makes it a healthy choice in an environment where most food-related health problems stem from overeating.
However, I agree with your skepticism insofar as when they say "prioritizing protein" they probably mean "prioritizing meat," which is shakier from a health standpoint and looks somewhat suspicious considering the lobbyists involved.
Most Americans get plenty of protein without trying. It's hard to see how eating more meat would help unless you think the amount of protein actually needed is much higher than what the Mayo Clinic thinks: https://www.mayoclinichealthsystem.org/hometown-health/speak...
1: https://guyinatuxedo.github.io 2: https://www.roppers.org