
The problem with this is that it limits the options of library writers. For example, I can't replace some function with a macro that does something smart, such as check format strings for printf at compile time, because that breaks all callers. It also fundamentally limits what you can build with macros, because you can't build language features that are basic -- they always appear tacked-on.
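For a concrete instance of the compile-time printf check: Julia's own `@printf` macro (in the `Printf` stdlib) parses the format string while the macro expands, so a mismatch between specifiers and arguments can be reported before any code runs — a sketch:

```julia
using Printf

# The format string is parsed at macro-expansion time, not at runtime.
@printf("%d apples, %s pie\n", 3, "apple")

# An arity mismatch like the following is rejected when the macro expands,
# before the program ever executes (uncomment to see the error):
# @printf("%d %d\n", 1)
```

A plain `printf` function could not do this, which is the commenter's point: swapping a function for a macro changes the call syntax for every caller.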


That is very true — and it's precisely why we have considered making macros callable with function syntax. But I feel like having something that looks like a function and is actually a macro is a bit of a dangerous lie, no matter how handy it sometimes is. One of our design goals is not to be too tricky — if something looks like a function call, it should be a function call. The @foo syntax for macro calls means that you know exactly what's going on. It also means you can't do stuff like try to pass a macro as an argument to a higher-order function — what does that even do? I.e. what does map(m,vec) mean where m is a macro?
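A minimal sketch of the distinction being defended here (the `@twice` macro is a hypothetical example): the `@` tells the reader that the argument is transformed as an expression at parse time, rather than evaluated and passed as a value.

```julia
# Hypothetical macro: rewrites its argument expression before execution.
macro twice(ex)
    # esc() keeps the caller's variable from being renamed by hygiene
    return :( $(esc(ex)) + $(esc(ex)) )
end

x = 3
y = @twice x    # expands to x + x, so y == 6
```

Because `@twice` receives the unevaluated expression `x`, there is no value you could pass to `map` in its place — which is the `map(m, vec)` problem described above.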


If macros are marked as different from functions at the call site, do they also need to be marked as different at the definition site? Most functions return a non-Expr value, but a function could return an Expr if the program's job is manipulating them. Most macros return an Expr, but a macro could return a literal for insertion into the code.

So I'm wondering: does a programming language that marks macro expansions differently from function calls (as Julia does with the @ prefix) really need to distinguish macros from functions at the definition site (as Julia does with the 'function' and 'macro' keywords)?
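To make the question concrete (the names below are hypothetical): nothing about the return value forces the definition-site distinction, since a function can build and return an Expr, and a macro can return a plain literal.

```julia
# A function whose job is constructing an Expr -- it returns it, unevaluated.
make_sum() = Expr(:call, :+, 1, 2)

# A macro that returns a literal -- the 3 is spliced into the caller's code.
macro three()
    return 3
end

e = make_sum()   # the Expr :(1 + 2), not the value 3
v = @three       # the literal 3, inserted at expansion time
```

What actually differs is *when* the body runs: `make_sum` runs when called, while `@three` runs during expansion, before the surrounding code executes — which is arguably what the 'macro' keyword is marking.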


Template Haskell, which uses an explicit mark at macro call sites (a $ splice), doesn't require marking macro definitions explicitly — they are ordinary functions returning quoted expressions — so it can certainly be done.



