Hacker News | notpushkin's comments

I’ve started buying cheap self-adhesive hooks on AliExpress and placing them myself. Not sure if they last long but hopefully owners get the message.

https://pypy.org/

It lags behind CPython in features and currently only supports Python versions up to 3.11. There was a big discussion a month ago: https://news.ycombinator.com/item?id=47293415

But you can help! https://pypy.org/howtohelp.html

https://opencollective.com/pypy


> The checksum is pointless because an entire 512 bit token still fits in an x86 cache line

I suppose it’s there to avoid a round-trip to the DB. Most of us could just host the DB on the same machine instead, but since sharding is involved, I assume the product is big enough that this is undesirable.


You need to support revocation, so I'm not sure it's ever possible to avoid the need for a round trip to verify the token.

The point of the checksum is to just drop obviously wrong keys. No need to handle revocation or do any DB access if checksum is incorrect, the key can just be rejected.

That sounds like it's only helpful for DDoS mitigation, in which case the attacker could trivially synthesize a correct checksum.

You don't have to use a publicly documented checksum.

If you use a cryptographically secure hashing algorithm, mix in a secret salt and use a long enough checksum, attackers would find it nearly impossible to synthesise a correct checksum.


I don't follow. The checksum is in "plain text" in every key. It's trivial to find the length of the checksum and the checksum is generated from the payload.

Others have pointed out that the checksum is for offline secret scanning, which makes a lot more sense to me than DDoS mitigation.


I'm not sure it's a good idea.

But it's trivial to make a secret checksum. Just take the key, concatenate it with a secret 256-bit key that only the servers know, and hash it with SHA-256. External users might know the length of the checksum and that it was generated with SHA-256. But if they don't know the 256-bit key, it's impossible for them to generate it short of running a brute-force attack against your servers.

But it does make the checksum pretty useless for other usecases, as nobody can verify the checksum without the secret.
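The scheme described above is essentially a keyed hash; HMAC is the standard construction for it and sidesteps the subtleties of plain concatenate-and-hash. A minimal Python sketch, with hypothetical names and a generated stand-in for the shared secret:

```python
import hashlib
import hmac
import secrets

# Stand-in for the fixed 256-bit secret that only the servers would know
SERVER_SECRET = secrets.token_bytes(32)

def make_checksum(key: bytes, length: int = 8) -> str:
    # Truncated HMAC-SHA256 over the key; short enough to embed in the token
    return hmac.new(SERVER_SECRET, key, hashlib.sha256).hexdigest()[:length]

def quick_reject(key: bytes, checksum: str) -> bool:
    # Cheap local validity check -- no DB round-trip for obviously bogus keys
    return hmac.compare_digest(make_checksum(key, len(checksum)), checksum)
```

Nobody without the secret can verify (or forge) the checksum, which is exactly the trade-off the parent describes.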


Ah that makes sense. I wouldn't call that a checksum though; that's a signature :)

I don't think it counts as a signature, because it can't be verified without revealing the same secret used to create it.

You're right, the correct term seems to be MAC (Message Authentication Code).

> I suppose it’s there to avoid round-trip to the DB.

That assumption is false. The article states that the DB is hit either way.

From the article:

> The reason behind having a checksum is that it allows you to verify first whether this API key is even valid before hitting the DB,

This is absurdly redundant. Caching DB calls is cheaper and simpler to implement.

If this were a local validation check, where the API key's signature is verified with a secret to avoid a DB round-trip, then I could see the value in it. But that's already well into the territory of an access token, which would be enough to reject the whole idea.

If I saw a proposal like that in my org I would reject it on the grounds of being technically unsound.


> I assume the product is big enough

Experience suggests otherwise


Hey, welcome to HN!

Seeing it labeled “hex” when it points to a clearly base62-ish string was a bit amusing :-)

Also, could we shard based on a short hash of account_id, and store the same hash in the token? This way we can lose the whole api_key → account_id lookup table in the metashard altogether.
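For illustration, sharding by a short hash of account_id carried inside the token might look like this (a hypothetical sketch with made-up names and shard count, not the article's actual scheme):

```python
import hashlib

NUM_SHARDS = 64  # hypothetical shard count

def shard_tag(account_id: str) -> str:
    # Short, stable hash of the account id, embedded in the API key itself
    return hashlib.sha256(account_id.encode()).hexdigest()[:4]

def shard_for(tag: str) -> int:
    # Any server can route a key straight to its shard from the tag alone
    return int(tag, 16) % NUM_SHARDS
```

A key shaped like `sk_<tag>_<random>` then carries its own routing information, so no api_key → account_id table is needed.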


Hello, thanks for reading through my blog :D Coming to your question: yes, that is possible! I mentioned it in my second approach!

But when I mentioned it to my senior, he wanted me to default to the random string approach :)


The security here comes from looking the key up in the DB, not from any crypto shenanigans.

Yeah, it’s the same feature, just two different gestures. (And long tap works with Firefox on Android, btw.)

While we’re making sure that modals are recorded in history so that you can close them with the back button on mobile (e.g. https://svelte.dev/docs/kit/shallow-routing), MSFT can’t be bothered. But when it comes to abusing the very same history API to grab the user’s attention for a bit longer...

> Reverse proxy itself will do barely any defense, what you need in combination is an authgate

What’s your threat model?


Same as anyone's: random bots scanning IPs/ports to find established services and trying exploits from the book.

With a reverse proxy, I don't see how this would work. The whole way the reverse proxy works is you use a subdomain name ("jellyfin.yourdomain.org") to access Jellyfin, rather than some other service on your server. The reverse proxy sees the subdomain name that was used in the HTTP request, and routes the traffic based on that. Scanning only the IP address and port won't get attackers to Jellyfin; they need to know the subdomain name as well.

The only tricky part here would be to make sure you’re doing a wildcard certificate, so that your subdomain doesn’t appear in Certificate Transparency logs.
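As a sketch, a Caddyfile along these lines would serve everything under one wildcard certificate and route only the single hostname (the domain and DNS provider are hypothetical, and the DNS challenge requires the matching Caddy DNS plugin to be installed):

```
# Wildcard cert via DNS challenge keeps individual subdomains out of CT logs
*.yourdomain.org {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	@jellyfin host jellyfin.yourdomain.org
	handle @jellyfin {
		reverse_proxy localhost:8096
	}

	# Drop requests for any other subdomain
	handle {
		abort
	}
}
```

Port 8096 is Jellyfin's default HTTP port; scanners hitting the bare IP or an unknown subdomain never reach it.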

True, but I kinda don't want hiding in plain sight to be my only line of defence.

If you expose Jellyfin on 443, have HTTPS properly set up (which Caddy handles automatically), your admin password is not pswd1234 (or you straight up disable remote admin logins), and use a cheap .com domain rather than your IP--what is the actual attack surface in that case?

As far as I can remember that is more or less what is usually suggested by Jellyfin's devs, and I have yet to see something that convinces me about its inadequacy.


He claims there are known exploits. Though I also want to know if this is really true.


The absolute worst thing I can see in there is that a third party who somehow managed to get a link to one of your library items (either directly from you or from one of your users--or by spending the next decade bruteforcing it, I guess) could stream said item: https://github.com/jellyfin/jellyfin/issues/5415#issuecommen...

Everything else looks to me like unimportant issues that would give someone who's already logged in as a user minor details about your server.


Nearly everyone uses the *arr stack with the Trash guides.

Which means that the paths are pretty damn uniform.


Unless I am misunderstanding the discussion on GitHub, the attacker would still need to know the exact path where the file is saved, and the name of the file itself. Even then, all they can do is download the file from your device--which they could just torrent themselves for a fraction of the effort.

A DDoS is still a valid attack.

Very few consumer connections can manage, say, 100 clients downloading a massive video file simultaneously.


> RTF editor

Is that what they call WYSIWYG? :eyes:


An RTF editor may be part of a WYSIWYG solution, but it might also be used to edit text that will look entirely different on publication, or that will be reused in various places.

Ohh, I think I get it now – is this about literally using RTF as an intermediate format? I really don’t think I’ve heard about it being used in the context of static sites, but I see how it might make sense as a way of storing, well, rich text.

Pandoc is your best friend in such cases: https://pandoc.org/


Astro is great, and is what I prefer on new “static-y” projects (for more dynamic stuff, SvelteKit).

But 11ty really was so much simpler if all you need is to put together some templates, and don’t want to deal with component stuff. That said, the docs really are lacking in some parts.

