You brought up an important opportunity for optimization. If you know the distribution of your data, it may make more sense to implement it in terms of the odd numbers and leave even numbers as the fallback. It's important to profile with a realistic distribution of data to make sure you're targeting the correct parity of numbers.
Safari supports base64-embedding font files in a <style>’s @font-face {} (iirc it's something like `@font-face { src: url('data:application/x-font-woff;charset=utf-8;base64,...'); }`) that can then be referenced as normal throughout the SVG. I don't recommend this though, nobody wants to deal with 500KB SVGs.
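For reference, the shape of that pattern looks roughly like this (the font name is a placeholder, and the base64 payload is elided):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="60">
  <style>
    @font-face {
      font-family: 'EmbeddedFont'; /* placeholder name */
      src: url('data:application/x-font-woff;charset=utf-8;base64,...') format('woff');
    }
    text { font-family: 'EmbeddedFont', sans-serif; font-size: 24px; }
  </style>
  <text x="10" y="40">Hello</text>
</svg>
```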
The idea was that you can embed only the glyphs used in a text. For example, instead of embedding thousands of existing Chinese characters, embed only 20 of them. Embedding is necessary anyway because otherwise you cannot guarantee that your image will be displayed correctly on the other machine.
Also, allowing CSS inside SVG is not a great idea, because the SVG renderer then needs to include a full CSS parser. For example, will Inkscape work correctly when there's embedded CSS with base64 fonts? Not sure.
> Also, allowing CSS inside SVG is not a great idea, because the SVG renderer then needs to include a full CSS parser. For example, will Inkscape work correctly when there's embedded CSS with base64 fonts? Not sure.
For better or worse, CSS parsing and WOFF support are both mandatory in SVG 2.[0][1] Time will tell whether this makes it a dead spec!
You can also point to font files with @font-face. I use a small custom font that's only 16 KB. Although, when opening the file locally, you first have to disable local file restrictions in Safari's settings before it works...
How well can LLMs reason about avoiding UB? This seems like one of those things where no matter how much code you look at, you can still easily get wrong (as humans frequently do).
Fair point on UB — LLMs absolutely do not reason about it (or anything else). They just reproduce the lowest-common-denominator patterns that happened to survive in the wild.
I’m not claiming the generated C is “safe” or even close. I am sure that in practice it still has plenty of time-bombs, but empirically, for the narrow WASM tasks I tried, the raw C suggestions were dramatically less wrong than the equivalent JavaScript ones — fewer obvious foot-guns, better idioms, etc.
So my original “noticeably better” was really about “fewer glaring mistakes per 100 lines” rather than “actually correct.” I still end up rewriting or heavily massaging almost everything, but it’s a better starting point than the JS ever was.
I'd love to use Orion but it's just too buggy for me. I downloaded the iOS app to try it out and immediately noticed that when typing in the URL bar, three quarters of it is covered by the toolbar above the keyboard.
I've seen similar issues with Japanese keyboards. If you can, please share some details about your setup here or on orionfeedback.org so we can investigate further. Thank you.
I've only ever seen `a` and `d`. Personally I prefer `a`. The only time I've seen `c` is for trait methods like `<Self as Trait<Generic>>::func`. Noisy? I guess. Not sure how else this could really be written.
Fwiw, I didn't go looking for obscure examples to make HN posts. I've had three rounds of sincerely trying to really learn and understand Rust. The first was back when pointer types had sigils, but this exact declaration was my first stumbling block on my second time around.
The first version I got working was `d`, and my first thought was, "you're kidding me - the right hand side is inferring its type from the left?!?" I didn't learn about "turbofish" until some time later.
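For anyone who hasn't run into it, the two spellings look something like this (using a `collect` call as a stand-in for whatever declaration the article actually used):

```rust
fn main() {
    let s = "hello";

    // Type annotation on the left; the right-hand side infers from it.
    let v1: Vec<char> = s.chars().collect();

    // Turbofish: the type is spelled at the call site instead.
    let v2 = s.chars().collect::<Vec<char>>();

    assert_eq!(v1, v2);
}
```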
Rust’s inference is generally a strength. If there's a type-shaped hole to fill, and only one way to fill it, Rust will just do it. So for instance `takes_a_vec(some_iter.collect())` works even though `collect` has a generic return type — being passed to `takes_a_vec` implies it must be a Vec, and so that's what Rust infers.
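A self-contained sketch of that (`takes_a_vec` is a made-up name for illustration):

```rust
fn takes_a_vec(v: Vec<i32>) -> usize {
    v.len()
}

fn main() {
    let some_iter = (1..=5).map(|x| x * 2);
    // `collect` has a generic return type; the parameter type of
    // `takes_a_vec` is the only way to fill it, so Rust infers Vec<i32>.
    let n = takes_a_vec(some_iter.collect());
    assert_eq!(n, 5);
}
```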
> The first version I got working was `d`, and my first thought was, "you're kidding me - the right hand side is inferring its type from the left?!?" I didn't learn about "turbofish" until some time later.
Tbh `d` strikes me as the most normal - right-hand sides inferring their type from the left exist in basically every typed language. Consider for instance the C code
Doing this inference at a distance is more of a feature of the SML-family languages (though I think it now exists even in C with `auto`) - but just going from left to right is... normal.
I see your point, and it's a nice example, but it's not completely parallel to the Rust/Standard ML thing. Here, your RHS is an initializer, not a value.
    // This doesn't fly in C (though C++20's designated
    // initializers do allow it when f's parameter type is known):
    f({ .flag = true, .value = 123, .stuff = 0.456 });

    // Both of these "probably" do work:
    f((some_struct){ .flag = true, ... });  // C compound literal
    f(some_struct{ .flag = true, ... });    // C++20

    // So this should work too (C23 / GNU C):
    auto a = (some_struct){ .flag = true, ... };
Take all that with a grain of salt. I didn't try to compile any of it for this reply.
Anyways, I only touched SML briefly 30-some years ago, and my reaction to this level of type-inference sophistication in Rust went through phases: initial astonishment, quickly embracing it, and eventually being annoyed by it. Just like data flows from expressions calculating values, I like it when the type inference flows in similarly obvious ways.