Currently that feature is unsupported, or I just can't figure out how to do it. With the latest compiler version (0.14), any .typ file I try to compile incurs warnings about skipping the equations (skipping the main reason I'd want to compile a Typst file to HTML...).
As per their GitHub, they haven't included MathJax or KaTeX support yet, as they were more focused on the semantic and structural accuracy of the HTML output with this release.
I've encountered this several times, and even though I found it frustrating, it didn't occur to me that it could/should be fixed. You're always going to have some quirks if you want a syntax without too many parentheses, right?
I've been really pleased with Typst so far - fast rendering, less verbose than (La)TeX in many ways (backslashes hurt now!), and Unicode/emoji support really seals the deal. (Disclaimer: only using it for semi-formal slides and notes, not for papers and important presentations.)
Pivot tables rock! I wouldn't be surprised if they were studied mathematically and proven to be somewhat capable of everything you might want to do in the context of tabular data processing.
Erratum: What I'm saying here only applies to cookies with the attribute SameSite=None, so it's irrelevant here; see the comments below.
(Former CTF hobbyist here) You might be mixing up XSS and CSRF protections. Cookie protections are useful against XSS vulnerabilities because they make it harder for attackers to get a hold of user sessions (often mediated through cookies). They don't really help against CSRF attacks, though. Say you visit attacker.com and it contains an auto-submitting form making a POST request to yourwebsite.com/delete-my-account. In that case, your cookies would be sent along, and if no CSRF protection is in place (origin checks, tokens, ...) your account might end up deleted. I know it doesn't answer the original question, but I hope it's useful information nonetheless!
SameSite=Lax (Chrome's default for cookies that don't set a SameSite attribute) will protect you against POST-based CSRF.
SameSite=Strict will also protect against GET-based CSRF (which shouldn't really exist, since GET is a safe method that shouldn't be allowed to trigger state changes, but in practice some applications do it). It does, however, also mean that users clicking a link to your page might not be logged in once they arrive, unless you implement other measures.
In practice, SameSite=Lax is appropriate and just works for most sites. A notable exception is POST-based SAML SSO flows, which might require a SameSite=None cookie just for the login flow.
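For reference, here's a minimal sketch of what setting such a cookie looks like server-side, using Python's standard http.cookies module (the cookie name and value are made up for illustration):

```python
from http.cookies import SimpleCookie

# Hypothetical session cookie; name and value are just for illustration.
cookie = SimpleCookie()
cookie["session"] = "opaque-session-id"
cookie["session"]["samesite"] = "Lax"  # browser won't attach it to cross-site POSTs
cookie["session"]["httponly"] = True   # JS can't read it (helps against XSS, not CSRF)
cookie["session"]["secure"] = True     # HTTPS only (mandatory if you ever use SameSite=None)

header = cookie.output(header="Set-Cookie:")
print(header)
```

Any web framework will have its own wrapper for this, but the resulting Set-Cookie header is the same.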
Yes, you're definitely right that there are edge cases and I was simplifying a bit. Notably, it's called SameSite, NOT SameOrigin. Depending on your application that might matter a lot.
In practice, SameSite=Lax is already very effective in preventing _most_ CSRF attacks. However, I 100% agree with you that adding a second defense mechanism (such as the Sec header, a custom "Protect-Me-From-Csrf: true" header, or if you have a really sensitive use case, cryptographically secure CSRF tokens) is a very good idea.
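To make that concrete, here's a rough framework-agnostic sketch in Python of layering an origin check and a custom-header check on top of SameSite=Lax; the allowed origin and the header name are assumptions for illustration:

```python
from typing import Mapping

ALLOWED_ORIGIN = "https://yourwebsite.com"  # assumption: your site's canonical origin

def looks_like_csrf(method: str, headers: Mapping[str, str]) -> bool:
    """Two cheap server-side checks layered on top of SameSite=Lax.

    1) Origin check: browsers attach an Origin header to cross-site POSTs.
    2) Custom-header check: a plain cross-site <form> can't set custom headers,
       so requiring one (here, a hypothetical X-Requested-With) blocks it.
    """
    if method in ("GET", "HEAD", "OPTIONS"):
        return False  # safe methods shouldn't trigger state changes anyway
    origin = headers.get("Origin")
    if origin is not None and origin != ALLOWED_ORIGIN:
        return True   # cross-site write request: reject
    if headers.get("X-Requested-With") != "XMLHttpRequest":
        return True   # our own JS always sets this; a plain form can't
    return False

# The attacker.com auto-submitting form from upthread would be caught:
print(looks_like_csrf("POST", {"Origin": "https://attacker.com"}))  # True
```

Real frameworks ship this as middleware, but the logic is about this small.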
The part about AI being very sensitive to small perturbations of its input is actually a very active research topic (and coincidentally the subject of my PhD). Most vision AIs suffer from poor spatial robustness [1]: you can drastically lower their accuracy simply by translating the inputs by well-chosen (adversarial) offsets of a few pixels! I don't know much about text-processing AIs, but I can imagine their semantic robustness is also studied.
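If you want the flavor of it, here's a toy NumPy illustration (emphatically not a real vision model) of how a brittle decision rule can flip under a one-pixel translation:

```python
import numpy as np

def left_half_classifier(img: np.ndarray) -> int:
    """Toy 'classifier': predicts 1 if the image's mass sits in the left half."""
    h, w = img.shape
    return int(img[:, : w // 2].sum() > img[:, w // 2 :].sum())

img = np.zeros((8, 8))
img[4, 3] = 1.0                          # bright spot just left of center -> class 1
shifted = np.roll(img, shift=1, axis=1)  # translate the whole image right by 1 pixel

print(left_half_classifier(img), left_half_classifier(shifted))  # 1 0 -> decision flips
```

Real networks are far less crude than this, but the empirical finding is that small, well-chosen translations can still move inputs across their decision boundaries.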
Really excited about this - we've recently been struggling to make imports lazy without completely messing up the code in DeepInverse https://deepinv.github.io/deepinv/
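For anyone curious, one common pattern for this is PEP 562's module-level __getattr__, which defers the heavy import until first attribute access. A minimal sketch (illustrative only, not DeepInverse's actual code; lazy_pkg and the json_tools name are made up):

```python
import importlib
import sys
import types

# Build a throwaway module to stand in for a package; in a real package
# you'd simply define __getattr__ at the top level of __init__.py.
lazy_pkg = types.ModuleType("lazy_pkg")
_LAZY = {"json_tools": "json"}  # hypothetical attribute -> real module to import

def _module_getattr(name):
    if name in _LAZY:
        module = importlib.import_module(_LAZY[name])  # heavy import happens here
        setattr(lazy_pkg, name, module)                # cache: __getattr__ runs once
        return module
    raise AttributeError(name)

lazy_pkg.__getattr__ = _module_getattr
sys.modules["lazy_pkg"] = lazy_pkg

import lazy_pkg  # noqa: E402  (cheap: nothing heavy imported yet)
print(lazy_pkg.json_tools.dumps({"a": 1}))  # first access triggers the import
```

The nice part is that callers keep writing ordinary attribute access; the laziness is invisible to them.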
I keep hearing this exact same idea and it puzzles me a great deal. Is it a computer science thing? I'm doing a PhD in signal processing / engineering and people seem to care a lot about giving simple and clear explanations so I don't really relate!
In my experience in neuroscience, it even differs widely across programs/universities. Some good professors care about giving good talks, and if you're lucky it becomes contagious in the program. Others think less of you if your talk is clear; some are too naive to realize that obscurity is not a virtue.
The keyword you're looking for is time-frequency analysis, and the main associated tool is the short-time Fourier transform (STFT). This is the theory underlying spectrograms and all those niceties!
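A minimal NumPy sketch of the idea: slide a window over the signal and FFT each frame; the magnitude of the result is exactly what a spectrogram displays. The frame and hop sizes here are arbitrary illustration values:

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Short-time Fourier transform: window the signal, FFT each frame.

    Returns shape (n_frames, frame_len // 2 + 1); np.abs(...) of it
    is the spectrogram.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

# Sanity check: a 1 kHz sine at fs = 8 kHz should concentrate its energy
# at bin 1000 / (8000 / 256) = 32 in every frame.
fs = 8000
t = np.arange(fs) / fs
spec = np.abs(stft(np.sin(2 * np.pi * 1000 * t)))
print(spec.argmax(axis=1))  # bin 32 in each frame
```

In practice you'd reach for a library routine (scipy has one), but it's essentially this loop.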