> The checksum is pointless because an entire 512 bit token still fits in an x86 cache line
I suppose it’s there to avoid a round-trip to the DB. Most of us could just host the DB on the same machine instead, but given that sharding is involved, I assume the product is big enough that this is undesirable.
The point of the checksum is just to drop obviously wrong keys. No need to handle revocation or do any DB access if the checksum is incorrect; the key can simply be rejected.
You don't have to use a publicly documented checksum.
If you use a cryptographically secure hashing algorithm, mix in a secret salt, and use a long enough checksum, attackers will find it nearly impossible to synthesize a correct checksum.
I don't follow. The checksum is in "plain text" in every key. It's trivial to find the length of the checksum and the checksum is generated from the payload.
Others have pointed out that the checksum is for offline secret scanning, which makes a lot more sense to me than ddos mitigation.
But it's trivial to make a secret checksum: just take the key, concatenate it with a secret 256-bit key that only the servers know, and hash it with SHA-256. External users might know the length of the checksum and that it was generated with SHA-256, but if they don't know the 256-bit key, it's impossible for them to generate it short of running a brute-force attack against your servers.
But it does make the checksum pretty useless for other use cases, as nobody can verify the checksum without the secret.
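The construction described above can be sketched in a few lines. This is a hedged illustration, not anyone's actual scheme: all names and the checksum length are assumptions, and it uses HMAC-SHA256 rather than raw concatenation, since HMAC is the conventional keyed-hash construction for exactly this purpose:

```python
import hmac
import hashlib
import secrets

# Assumptions for illustration: a 256-bit secret known only to the
# servers, and an 8-byte truncated checksum appended to each key.
SERVER_SECRET = secrets.token_bytes(32)
CHECKSUM_BYTES = 8

def attach_checksum(payload: str) -> str:
    """Append a truncated keyed checksum to the API key payload."""
    mac = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256)
    return f"{payload}.{mac.hexdigest()[:CHECKSUM_BYTES * 2]}"

def quick_reject(key: str) -> bool:
    """Cheaply drop forged keys before any revocation check or DB lookup."""
    payload, _, checksum = key.rpartition(".")
    mac = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256)
    return hmac.compare_digest(checksum, mac.hexdigest()[:CHECKSUM_BYTES * 2])

key = attach_checksum("acct_42_keydata")
assert quick_reject(key)
assert not quick_reject("acct_42_keydata.0000000000000000")
```

Without `SERVER_SECRET`, an attacker who knows the checksum's length and algorithm still can't synthesize a valid one, which is the point made above.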
> I suppose it’s there to avoid round-trip to the DB.
That assumption is false. The article states that the DB is hit either way.
From the article:
> The reason behind having a checksum is that it allows you to verify first whether this API key is even valid before hitting the DB,
This is absurdly redundant. Caching DB calls is cheaper and simpler to implement.
If this were a local validation check, where the API key's signature is verified against a secret to avoid a DB round trip, then I could see the value in it. But that's already well into access-token territory, which would be enough to reject the whole idea.
If I saw a proposal like that in my org I would reject it on the grounds of being technically unsound.
Reading “hex” pointing to a clearly base62-ish string was a bit interesting :-)
Also, could we shard based on a short hash of account_id, and store the same hash in the token? This way we can lose the whole api_key → account_id lookup table in the metashard altogether.
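The sharding idea above can be sketched like this. Everything here is hypothetical (shard count, tag length, field names); the point is just that the same short hash both lives in the token and picks the shard, so no api_key → account_id lookup table is needed:

```python
import hashlib

NUM_SHARDS = 16  # hypothetical shard count

def shard_tag(account_id: str, length: int = 4) -> str:
    """Short, stable hash of the account id, embedded in every token."""
    digest = hashlib.sha256(account_id.encode()).hexdigest()
    return digest[:length]

def shard_for(tag: str) -> int:
    """Route a request to a shard using only the tag carried in the token."""
    return int(tag, 16) % NUM_SHARDS

# The issuer and the request router derive the same shard independently.
tag = shard_tag("acct_42")
assert shard_for(tag) == shard_for(shard_tag("acct_42"))
assert 0 <= shard_for(tag) < NUM_SHARDS
```

One caveat worth noting: because the tag is derived from `account_id`, the shard count (or at least the tag-to-shard mapping) becomes part of the token format, so resharding later needs a migration story.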
While we’re making sure that modals are recorded in history so that you can close them with the back button on mobile (e.g. https://svelte.dev/docs/kit/shallow-routing), MSFT can’t be bothered. But when it comes to abusing the very same history API to grab the user’s attention for a bit longer...
With a reverse proxy, I don't see how this would work. The whole way the reverse proxy works is you use a subdomain name ("jellyfin.yourdomain.org") to access Jellyfin, rather than some other service on your server. The reverse proxy sees the subdomain name that was used in the HTTP request, and routes the traffic based on that. Scanning only the IP address and port won't get attackers to Jellyfin; they need to know the subdomain name as well.
The only tricky part here would be to make sure you’re doing a wildcard certificate, so that your subdomain doesn’t appear in Certificate Transparency logs.
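As a sketch, Host-based routing behind a wildcard certificate might look like this in a Caddyfile. The domain, upstream port, and DNS provider are all assumptions, and the DNS challenge requires a Caddy build that includes the matching DNS plugin:

```
# Wildcard cert via the ACME DNS challenge keeps individual
# subdomains out of Certificate Transparency logs.
*.yourdomain.org {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@jellyfin host jellyfin.yourdomain.org
	handle @jellyfin {
		reverse_proxy localhost:8096
	}

	# Any other subdomain gets dropped without a response.
	handle {
		abort
	}
}
```

With this, scanners hitting the bare IP or an unknown Host header never reach Jellyfin at all.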
If you expose Jellyfin on 443, have HTTPS properly set up (which Caddy handles automatically), your admin password is not pswd1234 (or you straight up disable remote admin logins), and use a cheap .com domain rather than your IP--what is the actual attack surface in that case?
As far as I can remember that is more or less what is usually suggested by Jellyfin's devs, and I have yet to see something that convinces me about its inadequacy.
The absolute worst thing I can see in there is that a third party who somehow managed to get a link to one of your library items (either directly from you or from one of your users--or by spending the next decade bruteforcing it, I guess) could stream said item:
https://github.com/jellyfin/jellyfin/issues/5415#issuecommen...
Everything else looks to me like minor issues that would only give someone who's already logged in as a user some unimportant details about your server.
Unless I am misunderstanding the discussion on GitHub, the attacker would still need to know the exact path where the file is saved, and the name of the file itself. Even then, all they can do is download the file from your device--which they could just torrent themselves for a fraction of the effort.
An RTF editor may be part of a WYSIWYG solution, but it might also be used to edit text that will look entirely different on publication, or that will be reused in various places.
Ohh, I think I get it now – is this about literally using RTF as an intermediate format? I really don’t think I’ve heard about it being used in the context of static sites, but I see how it might make sense as a way of storing, well, rich text.
Astro is great, and is what I prefer on new “static-y” projects (for more dynamic stuff, SvelteKit).
But 11ty really was so much simpler if all you need is to put together some templates and don't want to deal with component stuff. That said, the docs really are lacking in some parts.