jeremyscanvic's comments | Hacker News

I would assume you pretty much get that out of the box given Typst compiles to HTML natively?


I was more looking for things I can use for blogs written in Markdown with frameworks like Astro.

But with the development of Typst, maybe the way to go is to use Typst rather than Markdown.


Currently that feature is unsupported, or I just can't figure out how to do it. With the latest compiler version (0.14), any .typ file I try to compile emits warnings about skipping the equations (which skips the main reason I'd want to compile a Typst file to HTML...).

As per their GitHub, they haven't included MathJax or KaTeX support yet, as they were more focused on the semantic and structural accuracy of the HTML output with this release.


Seems to be what those guys are up to: https://typst.app/universe/package/mitex/


I've encountered this several times, and even though I found it frustrating, it didn't occur to me that it was something that could/should be fixed. You're always going to have some quirks if you want a syntax without too many parentheses, right?


I've been really pleased with Typst so far - fast rendering, less verbose than (La)TeX in many ways (backslashes hurt now!), and Unicode/emoji support really seals the deal. (Disclaimer: I'm only using it for semi-formal slides and notes, not for papers and important presentations.)


Pivot tables rock! I wouldn't be surprised if they had been studied mathematically and shown to cover most of what you'd ever want to do when processing tabular data.
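For anyone who mostly knows them from spreadsheets, here's roughly what the same idea looks like in pandas (toy data, purely illustrative):

    import pandas as pd

    # Made-up sales data
    df = pd.DataFrame({
        "region": ["EU", "EU", "US", "US", "US"],
        "product": ["A", "B", "A", "A", "B"],
        "revenue": [100, 150, 200, 50, 300],
    })

    # Rows = region, columns = product, cells = summed revenue
    table = pd.pivot_table(df, values="revenue", index="region",
                           columns="product", aggfunc="sum", fill_value=0)
    print(table)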


Erratum: What I'm saying here only applies to cookies with the attribute SameSite=None, so it's irrelevant here; see the comments below.

(Former CTF hobbyist here) You might be mixing up XSS and CSRF protections. Cookie protections are useful against XSS vulnerabilities because they make it harder for attackers to get hold of user sessions (often mediated through cookies). They don't really help against CSRF attacks though. Say you visit attacker.com and it contains an auto-submitting form making a POST request to yourwebsite.com/delete-my-account. In that case, your cookies would be sent along, and if no CSRF protection is in place (origin checks, tokens, ...) your account might end up deleted. I know it doesn't answer the original question but hope it's useful information nonetheless!
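To make the "origin checks, tokens, ..." part concrete, here's a rough sketch of a synchronizer-token check in Flask (hypothetical routes, just mirroring the delete-my-account example above):

    import secrets
    from flask import Flask, session, request, abort

    app = Flask(__name__)
    app.secret_key = "change-me"  # needed for the session to work

    @app.get("/delete-account-form")
    def delete_account_form():
        # Generate a per-session token and embed it in the legitimate form
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return f'''
            <form method="POST" action="/delete-my-account">
                <input type="hidden" name="csrf_token" value="{token}">
                <button>Delete my account</button>
            </form>
        '''

    @app.post("/delete-my-account")
    def delete_my_account():
        # attacker.com can auto-submit a form here with the victim's cookies,
        # but it cannot read or guess the token tied to the victim's session
        if request.form.get("csrf_token") != session.get("csrf_token"):
            abort(403)
        return "account deleted"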


The SameSite cookie flag is effective against CSRF when you put it on your session cookie, it's one of its main use cases. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/... for more information.

SameSite=Lax (which Chrome applies by default to cookies that don't set a SameSite attribute) will protect you against POST-based CSRF.

SameSite=Strict will also protect against GET-based CSRF (which shouldn't really exist, since GET is supposed to be a safe method and shouldn't be allowed to trigger state changes, but in practice some applications do it). It does, however, also mean that users clicking a link to your page might not be logged in once they arrive, unless you implement other measures.

In practice, SameSite=Lax is appropriate and just works for most sites. A notable exception is POST-based SAML SSO flows, which might require a SameSite=None cookie just for the login flow.
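For what it's worth, in Flask the session cookie attributes can be set via config keys, roughly like this (sketch; exact defaults depend on the version):

    from flask import Flask

    app = Flask(__name__)
    app.config.update(
        SESSION_COOKIE_SAMESITE="Lax",  # cookie not sent on cross-site POSTs
        SESSION_COOKIE_SECURE=True,     # only sent over HTTPS
        SESSION_COOKIE_HTTPONLY=True,   # not readable from JavaScript
    )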


This page has some more information about the drawbacks/weaknesses of SameSite, worth a read: https://developer.mozilla.org/en-US/docs/Web/Security/Attack...

You usually need another method as well.


Yes, you're definitely right that there are edge cases and I was simplifying a bit. Notably, it's called SameSite, NOT SameOrigin. Depending on your application that might matter a lot.

In practice, SameSite=Lax is already very effective at preventing _most_ CSRF attacks. However, I 100% agree with you that adding a second defense mechanism (such as checking the Sec-Fetch-Site header, a custom "Protect-Me-From-Csrf: true" header, or, if you have a really sensitive use case, cryptographically secure CSRF tokens) is a very good idea.
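A minimal sketch of the fetch-metadata check, again assuming a Flask app (how strictly you treat old browsers is up to you):

    from flask import Flask, request, abort

    app = Flask(__name__)

    @app.before_request
    def reject_cross_site_writes():
        # Sec-Fetch-Site is set by the browser itself and cannot be forged by
        # a page's scripts. Very old browsers omit it, so a missing header is
        # allowed here; drop None from the allow-list to be stricter.
        site = request.headers.get("Sec-Fetch-Site")
        if request.method in ("POST", "PUT", "PATCH", "DELETE") and \
                site not in (None, "same-origin", "none"):
            abort(403)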


Thanks for correcting me - I see my web sec knowledge is getting rusty!


The part about AI models being very sensitive to small perturbations of their input is actually a very active research topic (and coincidentally the subject of my PhD). Most vision models suffer from poor spatial robustness [1]: you can drastically lower their accuracy simply by translating the inputs by well-chosen (adversarial) translations of a few pixels! I don't know much about text-processing AIs, but I can imagine their semantic robustness is also studied. (Rough sketch of what such an evaluation looks like further down.)

[1] https://arxiv.org/abs/1712.02779

Edit: typo
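Assuming an already-trained PyTorch classifier (model and loader are placeholders, not any specific code), that evaluation could look something like:

    import torch

    def accuracy_under_shift(model, loader, dx, dy, device="cpu"):
        """Accuracy when every image is cyclically shifted by (dx, dy) pixels."""
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device)
                shifted = torch.roll(images, shifts=(dy, dx), dims=(-2, -1))
                preds = model(shifted).argmax(dim=1)
                correct += (preds == labels).sum().item()
                total += labels.numel()
        return correct / total

    # Crude "adversarial translation": take the worst accuracy over a small grid
    # worst = min(accuracy_under_shift(model, loader, dx, dy)
    #             for dx in range(-3, 4) for dy in range(-3, 4))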


Is it fair to call this a robustness problem when you need access to the model to generate a failure case?

Many non-AI-based systems lack robustness by the same standard (including humans).


Really excited about this - we've recently been struggling with making imports lazy without completely messing up the code in DeepInverse: https://deepinv.github.io/deepinv/
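For context, one standard way to get lazy imports without extra dependencies is PEP 562's module-level __getattr__; here's a rough sketch of what that looks like in a package's __init__.py (submodule names are made up, not DeepInverse's actual layout):

    # __init__.py of some package (illustrative names only)
    import importlib

    _lazy_submodules = {"optim", "physics", "models"}

    def __getattr__(name):
        # Only import the heavy submodule the first time someone touches it
        if name in _lazy_submodules:
            module = importlib.import_module(f".{name}", __name__)
            globals()[name] = module  # cache so __getattr__ isn't hit again
            return module
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

    def __dir__():
        return sorted(list(globals()) + list(_lazy_submodules))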


I keep hearing this exact same idea and it puzzles me a great deal. Is it a computer science thing? I'm doing a PhD in signal processing/engineering, and people seem to care a lot about giving simple and clear explanations, so I don't really relate!


In my experience in neuroscience, it even differs widely across programs/universities. Some good professors care about giving good talks, and if you're lucky it becomes contagious in the program. Others think less of you if your talk is clear, and some are too naive to realize obscurity is not a virtue.


That's professional deformation, because signals are supposed to be clear and easily identifiable among noise.


I think there was a study at some point which showed that the worse the university, the more jargon the researchers use in their papers.


The keyword you're looking for is time-frequency analysis, and the main associated tool is the short-time Fourier transform. This is the theory underlying spectrograms and all those niceties!
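A quick SciPy sketch in case it helps to get started (the input is just a toy chirp):

    import numpy as np
    from scipy import signal

    fs = 8000                       # sampling rate in Hz
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * (200 + 300 * t) * t)   # toy chirp sweeping upward

    # Short-time Fourier transform: split into windowed segments, FFT each one
    f, tau, Zxx = signal.stft(x, fs=fs, window="hann", nperseg=512, noverlap=384)
    spectrogram = np.abs(Zxx) ** 2  # power in each (frequency, time) bin

    # spectrogram[i, j] ~ energy at frequency f[i] around time tau[j]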

