Neat trick. Though it seems unlikely to be very useful in practice. How often are you going to know the probable value of a pointer without knowing the actual value? I would guess it's pretty rare. Interesting anyway!
It's possible to have "sparse" matrices where most of the values are 0 and only a few are nonzero. So you can guess 0 and cross your fingers.
(There are libraries that implement sparse matrices in a more memory-efficient way. I needed them for a program in Python, but I'm not an expert in Python. I found a few options, but they are only useful for big matrices with very few coefficients, and they have other restrictions in order to get the improved speed. I finally gave up and used normal NumPy matrices.)
In the last case they were like 100x100, but only two rows or columns were nonzero, so only 2% full. We were using einsums to multiply them with 4D arrays, and I solved the problem by writing some code with @guvectorize.
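Roughly the shape of that trick, sketched from memory in NumPy/Numba (the function name, the row indices, and the einsum subscripts here are invented for illustration, not the original code):

```python
import numpy as np
from numba import guvectorize

# Gufunc signature "(m,n),(n)->(m)": NumPy broadcasts over the leading axes
# of the second argument, so this works on a 4D array much like
# np.einsum("ij,abcj->abci", a, x), but skips rows of `a` that are all zero.
@guvectorize(["void(float64[:,:], float64[:], float64[:])"], "(m,n),(n)->(m)")
def sparse_matvec(a, x, out):
    for i in range(a.shape[0]):
        row = a[i]
        if np.any(row):                 # only ~2% of rows are nonzero
            acc = 0.0
            for j in range(a.shape[1]):
                acc += row[j] * x[j]
            out[i] = acc
        else:
            out[i] = 0.0

# 100x100 matrix with only two nonzero rows, applied to a 4D array.
a = np.zeros((100, 100))
a[[3, 57]] = np.random.rand(2, 100)
x = np.random.rand(5, 6, 7, 100)
assert np.allclose(sparse_matvec(a, x), np.einsum("ij,abcj->abci", a, x))
```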
In an older case they were like 1000x1000, like a block checkerboard where one of the colors had only zeros, so about 50% full, but the blocks had different sizes. It was an old project in Fortran, and we used a "for" (do) loop over the blocks and another inside each block.
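For what it's worth, that loop structure translates to something like this in Python/NumPy (a rough sketch with invented block boundaries, not the original Fortran):

```python
import numpy as np

# Matrix-vector product for a block-checkerboard matrix where blocks with
# (bi + bj) odd are all zero, so only about half of the blocks are touched.
# `edges` holds the (uneven) block boundaries.
def checkerboard_matvec(a, x, edges):
    y = np.zeros(a.shape[0])
    for bi in range(len(edges) - 1):            # outer loop over block rows
        r0, r1 = edges[bi], edges[bi + 1]
        for bj in range(len(edges) - 1):        # inner loop over block columns
            if (bi + bj) % 2 == 1:              # this "color" is all zeros: skip
                continue
            c0, c1 = edges[bj], edges[bj + 1]
            y[r0:r1] += a[r0:r1, c0:c1] @ x[c0:c1]
    return y

# Build a 1000x1000 example with blocks of different sizes, zeroing one color.
edges = [0, 200, 500, 650, 1000]
a = np.random.rand(1000, 1000)
for bi in range(len(edges) - 1):
    for bj in range(len(edges) - 1):
        if (bi + bj) % 2 == 1:
            a[edges[bi]:edges[bi + 1], edges[bj]:edges[bj + 1]] = 0.0
x = np.random.rand(1000)
assert np.allclose(checkerboard_matvec(a, x, edges), a @ x)
```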
Neat. That’s fairly sparse. I’m 0% surprised to hear that small sparse matrices haven’t got much existing code out there; seems like a good excuse to roll your own :)
Bigger-picture, this method amounts to manually assisted speculative execution. And it's not about knowing the not-yet-loaded value, but about knowing what will (very likely) happen as a consequence of that value.
In the happy case, linked-list nodes will be laid out sequentially in memory, so you can guess the value of the next pointer easily.
(That said, your comment still stands, since using linked lists in the first place is much rarer.) But I suppose there are probably a lot of other domains where you have a performance-critical loop where some hacky guessing might work.
The optimization could be vaguely interesting if you are implementing a Lisp (or some other list-heavy language) and don't want to perform overly aggressive non-local optimizations on the layout of lists.
Memory pools are commonly allocated in a contiguous block and sliced into nodes placed onto a free list. They will be sequential until the pattern of allocations mixes them up.
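A toy illustration of that (Python, with offsets into one contiguous buffer standing in for pointers; everything here is invented):

```python
# Fixed-size pool: nodes are carved out of one contiguous block, so the free
# list starts out in address order and "node i is followed by node i+1" holds
# until releases reshuffle the list.
class Pool:
    def __init__(self, node_size, count):
        self.node_size = node_size
        self.buf = bytearray(node_size * count)      # one contiguous block
        self.free = list(range(count - 1, -1, -1))   # pop() yields 0, 1, 2, ...

    def alloc(self):
        return self.free.pop() * self.node_size      # offset of the node

    def release(self, offset):
        self.free.append(offset // self.node_size)

pool = Pool(node_size=64, count=8)
print([pool.alloc() for _ in range(3)])  # [0, 64, 128]: consecutive, easy to guess
pool.release(0)
print(pool.alloc())                      # 0: reuse breaks the sequential pattern
```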
It might be useful in cases where you pre-allocate a large array which you don't access randomly and whose structure mostly stays the same, but sometimes changes. Then you could either reallocate the array and pay a (large) one-time cost, or use this trick.