I mostly agree. Null pointer dereferences aren't a dominant or hard-to-detect failure mode for my code either, and removing them does meaningfully complicate language design and ergonomics. It's reasonable to call them a "problem" in the same sense that any invalid memory address is a problem, and I also see value in making invalid states unrepresentable, which would mean disallowing null pointers. The real question is whether that guarantee is worth the cost in ergonomics, compiler surface area, and C compatibility.

Every invariant a compiler enforces consumes complexity budget. I agree with the author that it's better to keep the core language small and spend that budget on higher-impact guarantees, and the thesis here seems to be that null pointers aren't high-impact enough (at least for the code I've read and written, this seems to hold true).

I suspect that part of the appeal is that null pointers are low-hanging fruit. They're easy to point out and relatively easy to "solve" in the type system, which can make the win feel larger than it actually is. The marginal benefit feels smaller than spending that complexity budget on harder, higher-impact guarantees that target the logical traps programmers fall into with clever, highly abstracted code, where bugs are subtle and systemic rather than immediate and loud.
> I suspect that part of the appeal is that null pointers are low-hanging fruit. They're easy to point out and relatively easy to "solve" in the type system, which can make the win feel larger than it actually is.
I agree. I find that Options are more desirable for API design than for making function bodies easier to understand or maintain. I'm kind of surprised I don't use Maybe(T) more frequently in Odin for that reason. Perhaps it's something to do with my code scale or design goals (I'm at around 20k lines of source), but I'm finding multiple returns are just as good as, if not better than, Maybe(T) in Odin... it's also a nice bonus to easily use or_return, or_break, or_continue, etc., though at this point I wouldn't be surprised if Maybe(T) were compatible with those constructs. I haven't tried it.
To make `Maybe(T)` feel like a multiple return, you just need to do `.?`, so `maybe_foo.? or_return` would work just the same.
But as you say, it is more common to do the multiple-return-value thing in Odin and not need `Maybe` for return values at all. `Maybe` is more common for input parameters or for annotating foreign code, as you have probably noticed.
To add, Go has nil pointers which lead to panics when dereferenced. No one would call Go memory unsafe for this behavior, likely because it's a "panic". The thing is, dereferencing the 0 address is a guaranteed panic too; it's just that the OS prints "Segmentation fault" instead of "panic". There's nothing memory unsafe about this. The process will, with 100% certainty, reliably and deterministically exit in the same way as a "panic" in any other language. The unsafe case is when it's not a nil pointer but a dangling pointer to memory that may still be mapped, where a read returns garbage. That is unsafe.
In this case the problem is more specifically the use of `uint32_t`. I never quite mentioned in the article that if you're going to use unsigned integers for this sort of thing, always use the native-word-size unsigned type, e.g. `size_t`. If you actually change that to `uint64_t` (or `size_t`), the code generation is better, as you can see here: https://godbolt.org/z/11nEz6EPT
Author here. I believe many of the complaints here are concerned with what is more common rather than what is universally better over all possible inputs, which is the point of view this article is written from. The issue with the "general case" is that exploits are never actually found there; they're always found in the edge cases, such as large or "pathological" values as it were. Signed integer arithmetic has more of these edge cases than unsigned when it comes to sizes, indices, and offsets used in expressions controlling memory (either directly or indirectly), which is the most common application of integers in a codebase. The native-word-size unsigned integer type covers the full numeric range for those operations, while signed simply cannot. At the same time, unsigned prevents a whole numeric range that is simply incompatible with those operations (negative values), and a whole class of issues related to undefined operations. The only real edge case where unsigned is more error-prone is values close to zero under subtraction, and that's relatively easy to account for, which I go into great detail to explain.
On desktop NV and desktop AMD I've observed (at least on Linux), via /proc/self/maps, that the pointer returned by glMapBuffer is a shared memory mapping owned by libGL. Further inspection shows coalesced reads and writes performed by the kernel through a DMA transfer operation, but only when GL_MAP_COHERENT_BIT is used. The mapping is entirely virtual and does not touch physical memory until a read or write occurs. The driver's shared memory explicitly flushes the contents of the read or write request to device memory in the same way CUDA/OpenCL does. This can be observed through the ioctls the driver generates after page faults occur in the kernel.
With things like PaX the trampoline method won't work, so gcc now creates thunks, which are essentially heap-allocated chunks of memory (hence the name) mapped with PROT_EXEC.
Lambdapp inserts #line directives into the source code, and compilers are required to respect those directives and use them when producing debug sections in the binary. For instance, in the case of gcc/clang on *nix, the compiler will produce .debug_line sections as part of the DWARF debug format. Debuggers like gdb, and even valgrind, use this information to provide correct output. So to answer your question: a debugger would react to this code exactly how it should, as if the lambda were called via a function, and the file/line information should be correct.
I'm asking more about debugger integration with version-control systems (p4, git, etc.). Say you've got a crash dump and were able to track it down to some specific source code release: now you should also have saved the intermediate generated files somewhere, which means these might have to go back into p4/git/svn/etc., or you have to find an alternative place for them... Generating them again won't produce the same files.
A similar problem exists with, say, Qt's generated moc_Xxx sources, Ui_xxx sources, etc.: unless you make the effort of storing these generated files somewhere, you might have problems debugging later.
This is, in general, my "arrgh" against code generation, and the "aargh" is not against it as such; it's simply for when you forgot to keep the files somewhere and the crash dump snaps its fingers at you...
The problem with clang blocks is that they're represented as an Objective-C object, which makes them unusable in APIs that expect a function pointer. The only way you can cast them to a function pointer is to define the structure which represents the block and mmap executable code pages to marshal the call. Such a library exists that binds them to libffi: https://github.com/mikeash/MABlockClosure
This fact alone makes blocks essentially useless unless your entire API is also block-based, as in T (^foo)(...) vs T (*foo)(...).
I don't think they could be an Objective-C object, because they're described as C, not Objective-C.
I think the problems you're describing are ones that are going to be faced in any attempt at C closures. Closures have memory attached, that's the appeal of them and also the source of all the problems.
An objective-c object is just a pointer to a struct.
You don't need the objective-c runtime to call a block.
Block_copy etc. are implemented in libclosure, which iirc does not require libobjc either. But of course, if you ARE writing objc, they are valid objects and can be treated as such.
Correct. The problem is that to call the function you cannot treat it as a standard function pointer, since the block pointer points to a struct. The struct does contain the actual function pointer, but there is an implicit `this' first argument to that function, which has to be the struct itself. This means you cannot use the block in an API that expects a plain function pointer; instead the API must specifically be aware of blocks and support them.
Of course, there's no way around that except to write block variants for those functions. stdlib on osx for example has block variants of most functions that take function pointers (`man qsort_b` for an example)
But if you write your program to use blocks from the start, that's not a big problem.
If you look at Redroid (a project which makes extensive use of lambdapp), you'll see how map/list/etc. can take advantage of this: https://github.com/graphitemaster/redroid
hashtable.c and list.c look for [hashtable|list]_foreach. Another good example is config_save in config.c.