I disagree - allowing type inference for closures makes for a much more pleasant language, whereas requiring type annotations on top-level functions makes sense for lots of reasons.
I agree with this, I'm designing a similar language and have gone with this approach. Same syntax for all functions, but top-level functions require explicit type annotations (purely for documentation/sanity more so than a technical requirement).
JavaScript as an ecosystem and language became more accessible than ever; with all the tooling and TypeScript, it's easier to build abstraction upon abstraction, and people love to have their own version of an abstraction. It's human nature; it's inevitable.
By using comptime, the statement can't be composed at runtime, right? That's currently the major holdback keeping me from spending more time on Zig: if using comptime becomes more common in the Zig community, libraries could become less flexible to use. It feels sort of like function coloring to me, in that the whole call chain also needs to pass the value down as a comptime variable. I've only spent 2 days with Zig, so I would love to learn if I'm wrong on this subject.
Only the metadata of the statement is comptime, that is, the type annotation for each bind parameter. So if you have this query:
SELECT * FROM user WHERE age = $age{u16}
You _must_ provide a u16 bind parameter. However, the value itself is of course not required to be comptime-known; that would make the whole thing unusable.
For what it's worth, zig-sqlite has variants of these methods which bypass the comptime checks; they're not documented properly yet, but see all the methods named `xyzDynamic`, for example https://github.com/vrischmann/zig-sqlite/blob/master/sqlite....
Generally I wouldn't call Zig comptime function coloring. (I have written a prime sieve algorithm that uses runtime code to precalculate some primes at comptime. Yes, I had to be very careful about what was in the prime number algorithm, but comptime supports that level of complexity, and it was certainly possible to call runtime-intended code at comptime.)
Can you call comptime-intended code at runtime? No? (Yes, because the call site is "in" the runtime code?) If not, can't you just make it runtime code instead of comptime code?
Often, comptime code couldn’t be executed at runtime because the language features it accesses aren’t available then. But I agree that if a parameter could go either way, you shouldn’t have to write two versions.
> People use callback function to achieve almost the same thing
There's a QoL difference here with macros - writing out those lambdas can become annoying. That said, the "good style" rule in (Common) Lisp is to prefer lambda-forms in such cases - i.e. cases where the macro parameters are a block of code to be mostly run straight.
In fact, a common pattern for with-macros (of which doTexture would be an example) is the call-with pattern. Example from some random project of mine:
(defmacro with-logging-conditions (&body forms)
  "Set up a passthrough condition handler that will log all signalled conditions."
  `(call-with-logging-conditions (lambda () ,@forms)))
Which is then used like:
(with-logging-conditions
  (blah blah)
  (main game code))
All the macro does upon expansion is package the code block into a lambda and pass it to a function, call-with-logging-conditions, which does the actual work. So it's like your example, except I don't have to write the lambda myself. This is a trivial case; commonly, macros accept additional arguments that they process, but eventually they'd still wrap their input body in a lambda and expand to a function call with that lambda as an argument.
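For comparison, here's roughly what the pattern looks like in JavaScript, where you do have to write the lambda yourself (all names here are made up for illustration, not from any library):

```javascript
// Hypothetical JavaScript version of the call-with pattern: the function
// does the actual work, and the caller wraps the body in a lambda by hand.
function callWithLoggingConditions(thunk) {
  try {
    return thunk();
  } catch (err) {
    console.error("condition signalled:", err); // log every condition
    throw err; // passthrough: re-signal after logging
  }
}

// Usage: the arrow function is the lambda the Lisp macro writes for you.
const result = callWithLoggingConditions(() => {
  return 2 + 2;
});
```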
A better use of macros, which you can't replicate in JavaScript[0], would be if you wanted to do something like:
(do-texture texture
  o O r R x2)
And have it expand - at compile time - to:
;; unwind-protect is Lisp's sorta-equivalent of try/finally in other languages.
(unwind-protect
    (progn
      (begin-texture-mode texture)
      (draw-circle)
      (draw-circle :big)
      (draw-rectangle)
      (draw-rectangle :big)
      (draw-rectangle :big))
  (end-texture-mode))
However silly this looks, this kind of code generation is (one of the main reasons) why you need macros.
--
[0] - Well, you can if you have a toolchain. Babel is essentially a macro engine for JavaScript, but you can only use it at build time.
This might be another reason why I find macros less appealing: macros introduce a DSL in the form of normal s-expressions, but they don't actually behave like functions; macros introduce their own mini-language/syntax.
In the last example you provided, I bet the macro implementation would look like a little interpreter? If that's the case, having a function call like
doTexture(myTexture, ['op1', 'op0', 'opR'])
and letting doTexture handle each case (op) might achieve the same behavior, right?
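Something like this sketch is what I have in mind (every name here is made up, and the draw operations are stubbed out to record what they'd do):

```javascript
// Runtime-interpreter alternative to the macro: doTexture dispatches on
// each op string at call time instead of expanding code ahead of time.
const drawn = []; // stand-in for actual drawing calls

const ops = {
  op0: () => drawn.push("circle"),
  op1: () => drawn.push("big-circle"),
  opR: () => drawn.push("rectangle"),
};

function doTexture(texture, opList) {
  try {
    // beginTextureMode(texture) would go here
    for (const op of opList) ops[op](); // interpret each op at runtime
  } finally {
    // endTextureMode() would go here (the unwind-protect part)
  }
}

doTexture("myTexture", ["op1", "op0", "opR"]);
```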
I'm not trying to argue that macros are unnecessary, I really want to like them! Just most of the time, I find functions are sufficient enough.
Remember JavaScript before "async"? Lots of boilerplate there. You had to wait until some committee decided to incorporate it into the language.
So "modern" languages without macros have such patches every now and then. Still, it's easy to find boilerplate in programs.
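As a reminder, a sketch of that pre-async boilerplate (hypothetical step functions, made synchronous here so the sketch stays short):

```javascript
// Callback-style "async" steps, the way pre-async JavaScript forced you
// to write them. (Synchronous stand-ins, just to show the shape.)
function fetchUser(id, cb) { cb(null, { id: id }); }
function fetchPosts(user, cb) { cb(null, ["post-for-" + user.id]); }

let posts;
fetchUser(1, (err, user) => {
  if (err) throw err;                  // every level handles errors by hand
  fetchPosts(user, (err2, result) => { // and every step nests one deeper
    if (err2) throw err2;
    posts = result;
  });
});
// With async/await (and promise-returning versions of these functions),
// the same flow collapses into two flat statements.
```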
The reason is that a lot of boilerplate is specific to the program's domain, and you're left with cumbersome syntactic patterns and no tools to abstract them.
So yes, you need to learn how to write macros. But, assuming good taste, the simplicity is in their use, not their definition. It's a good tradeoff because macros are used much more often than they are defined.
By the way, the language introduced by a macro often allows arbitrary Lisp code to be mixed with it. The example did not demonstrate this, and that's why you had an easy time thinking up a non-macro alternative (which still has some unfortunate implications, like needing to interpret at runtime).
I've picked a really silly example of a DSL just to illustrate the point in a few lines, but I feel the silliness is obscuring what I wanted to communicate. Next time I'll try to come up with something more useful.
> macros introduce DSL in form of normal s-expression, but they don't actually behave like a function; macros introduce their own mini-language/syntax
That's the point. S-expressions are a structure-notation language; the semantics of code expressed as s-expressions is something else. There's the "default" one (as provided by #'eval), but macros allow you to work on the s-expressions as data structures. Ultimately, the macro expansion is still evaluated normally, but the expansion might be wildly different from the macro invocation.
This is a feature, not a bug. It gives you the power to add new abstractions to the language, as if they were part of that language in the first place. It's not something you need often, but there are things you can't do any other way. For example, since we're talking JavaScript - think of JSX. In JavaScript, it's a mind-bending innovation, though committing you to use a big code generation tool (Babel). In Lisp, you can do half of it with a macro[0].
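To make the JSX comparison concrete, here's roughly the expansion Babel performs, with a toy createElement standing in for React's:

```javascript
// Toy createElement, just to show the shape of the expansion - not React's.
function createElement(tag, props, ...children) {
  return { tag: tag, props: props || {}, children: children };
}

// Babel expands JSX like <div id="x">hi</div> into an ordinary call:
const node = createElement("div", { id: "x" }, "hi");
```

The transformation is purely syntactic, which is exactly what a macro does: it rewrites the tree before evaluation.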
A common use of macros is removing conceptual repetition in code. Imagine you have a concept in your codebase - say, a plugin. Creating a plugin involves defining a class extending a common base class, defining a bunch of methods that are identical in 90% of the cases, and a bunch of free functions. Conceptually, that whole ensemble is "a plugin". Lisp macros let you define that concept in code and reduce each plugin definition to a single short form.
> In the last example you provided, I bet the macro implementation would look like a little interpreter?
Such macros are more like compilers. Interpreters execute the code they read; compilers - like those macros - emit different code instead.
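To illustrate the distinction in JavaScript terms, here's a toy sketch (all names made up) where the op list is "expanded" once into plain JS source and compiled, rather than interpreted on every call:

```javascript
// Toy "macro as compiler": each op maps to a fragment of JS source.
const expansions = {
  o: "out.push('circle');",
  r: "out.push('rectangle');",
};

function compileTexture(opList) {
  // Emit different code instead of executing the ops directly.
  const body =
    "const out = [];" +
    opList.map((op) => expansions[op]).join("") +
    "return out;";
  return new Function(body); // "expansion time" happens here, once
}

const draw = compileTexture(["o", "r", "o"]);
// draw is now an ordinary function with the ops baked in.
```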
> might be able to achieve the same behavior, right?
Yes, except the macro does that at expansion time - i.e. ahead of execution. In practice, this is almost always "compilation time".
> Just most of the time, I find functions are sufficient enough.
Because they are! Even in Lisp, macros are not your default tool for solving problems. Functions are. Macros come out when the best way to do something involves code that writes code for you.
--
[0] - And the other half with a reader macro. Regular macros transform trees before they're evaluated. Reader macros alter the way text is deserialized into trees. Reader macros are very rarely used, because they're a bit hard to keep contained and tend to screw with editors, but if you really want to create a different syntax for your code, they're there for you.
I debated mentioning this explicitly, but decided it was just noise next to the rest of the post. But you should note that, while that macro can be easily replaced with lambdas without much change in ergonomics, there are lots of much more interesting things you can do with macros that do not have equivalent substitutes in a language like JavaScript. (e.g. JSX exists as its own weird pre-processor thing, but you could theoretically just do it with macros instead, and then it could compose with other syntax extensions.)
> we get the benefit of one thing less to learn
But learning programming language theory is the best part :)
Exactly - modern Android flagships are on par with or better than the iPhone when it comes to touch latency. Most people who express the sentiment "OMG the iPhone is so smooth" went from a $200 Moto G to a $1000 iPhone X. Compare within the same class, and you will find that both OSes are comparable.
That link is old. There are similar measurements for modern devices available on many review websites. This link does identify the correct metric that people seem to respond to when they feel a phone is "faster".
But it isn't something that gets completely abstracted away. There is a reason it has been so difficult to make a non-virtual-DOM version of React. It isn't impossible, but it has yet to be fully fleshed out despite attempts by a few projects, including members of the React team.
The link has been posted in another comment too; here are some chunks that may be relevant to you:
> A virtual DOM is nice because it lets us write our code as if we were re-rendering the entire scene. Behind the scenes we want to compute a patch operation that updates the DOM to look how we expect. So while the virtual DOM diff/patch algorithm is probably not the optimal solution, it gives us a very nice way to express our applications. We just declare exactly what we want and React/virtual-dom will work out how to make your scene look like this. We don't have to do manual DOM manipulation or get confused about previous DOM state. We don't have to re-render the entire scene either, which could be much less efficient than patching it.
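A toy sketch of the diff idea described above (plain objects as virtual nodes, patches emitted as data instead of being applied to a real DOM; everything here is made up for illustration, not React's actual algorithm):

```javascript
// Virtual nodes are { tag, children: [...] } objects or plain strings.
// diff compares old and new trees and emits patch operations as data,
// instead of re-rendering the entire scene.
function diff(oldNode, newNode, path = []) {
  if (oldNode === undefined) return [{ op: "create", path, node: newNode }];
  if (newNode === undefined) return [{ op: "remove", path }];
  if (typeof oldNode === "string" || typeof newNode === "string") {
    return oldNode === newNode ? [] : [{ op: "replace", path, node: newNode }];
  }
  if (oldNode.tag !== newNode.tag) return [{ op: "replace", path, node: newNode }];
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], path.concat(i)));
  }
  return patches;
}

const prev = { tag: "div", children: ["hello"] };
const next = { tag: "div", children: ["goodbye", { tag: "span", children: [] }] };
const patches = diff(prev, next);
// Only the changed text and the new child show up as patches;
// the unchanged <div> itself is left alone.
```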