> If you are interested in how well we do compared to demucs in particular, we can use the MUSDB18 dataset since that is the domain that demucs is trained to work well on. There our net win rate against demucs is ~17%, meaning we do perform better on the MUSDB18 test set. There are actually stronger competitors on both this domain and our "in-the-wild" instrument stem separation domain that we built for SAM Audio Bench, but we either match or beat all of the ones we tested (AudioShake, LalalAI, MoisesAI, etc.)
So a ~17% net win rate against demucs, better than the ones they tested, but they acknowledge there are better models out there even today. So I'm not sure "competes against SOTA models" is right, but "getting close to competing with SOTA models" might be more accurate.
I don't know if you wrote it as a form of satire, but obviously there is no such thing as "YouTube's front-page". Everyone gets recommended different videos, based on various signals, even when not authenticated.
What privacy? Whoever is watching your traffic can see you accessed their website over HTTPS, and they can guess with high accuracy which article you are reading based on the response size.
I'm guessing you're not storing the CLIP embedding for every single frame, but instead one every second or so? Also, are you using cosine similarity? How are you finding the nearest vector?
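For a modest number of frames I'd expect something like this to be enough (a sketch, assuming per-second CLIP embeddings stored as a numpy matrix; all the names here are mine, not the author's):

```python
import numpy as np

def nearest_frame(query: np.ndarray, index: np.ndarray) -> tuple[int, float]:
    """Return (row, cosine similarity) of the index vector closest to `query`.

    `index` is an (N, D) matrix of CLIP embeddings, e.g. one row per
    sampled second of video; `query` is a single (D,) embedding.
    """
    # Normalizing both sides turns cosine similarity into a dot product,
    # so the whole search is one matrix-vector multiply plus an argmax.
    index_norm = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    sims = index_norm @ query_norm
    best = int(np.argmax(sims))
    return best, float(sims[best])
```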
BitTorrent v2 uses SHA-256, but in any case SHA-1 is still second-preimage resistant. And the BitTorrent piece hashes are included in the .torrent file, so colliding two files you created yourself isn't enough; you would need a second preimage of the already-published hashes.
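For anyone following along, v1 piece verification is roughly this (a simplified sketch, not real client code; v2 replaces the flat SHA-1 list with per-file SHA-256 merkle trees):

```python
import hashlib

def verify_piece(piece_data: bytes, piece_index: int, pieces: bytes) -> bool:
    """Check a downloaded piece against the .torrent's `pieces` field,
    which concatenates one 20-byte SHA-1 digest per piece (v1 layout)."""
    expected = pieces[piece_index * 20:(piece_index + 1) * 20]
    # To poison a swarm an attacker must produce *different* data matching
    # this fixed, pre-published digest -- a second preimage, not merely a
    # collision between two inputs the attacker chose themselves.
    return hashlib.sha1(piece_data).digest() == expected
```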
I'm guessing that if you only calculate based on the digits, the probability is going to be slightly different from the real one, because you only have a finite number of plates you can choose from.
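If I've understood the setup, the gap is easy to compute exactly (a toy sketch; the plate format, pool size, and car count are all assumptions of mine, not from the article):

```python
from fractions import Fraction

L = 26 ** 3   # letter combinations per plate (assumed format: 3 letters + 3 digits)
D = 10 ** 3   # digit combinations
K = 30        # cars observed

# Idealized model: each car's digit block is independent and uniform over D.
no_coll = Fraction(1)
for i in range(K):
    no_coll *= Fraction(D - i, D)
p_iid = 1 - no_coll

# Finite model: K *distinct* plates drawn from the pool of L * D real plates;
# after i distinct digit blocks are used, (D - i) * L usable plates remain.
no_coll = Fraction(1)
for i in range(K):
    no_coll *= Fraction((D - i) * L, L * D - i)
p_finite = 1 - no_coll

print(f"iid digits:        {float(p_iid):.6f}")
print(f"finite plate pool: {float(p_finite):.6f}")  # slightly smaller
```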
This is not hypothetical; people already do it like that in my country (Argentina). You send your money to a person who buys tokens using cryptocurrency, since these websites don't comply with the local regulation. Even kids are addicted to gambling.
Third-party root servers are generally used for looking up TLD nameservers, not for looking up domain names registered to individuals publishing personal blogs^1
Fortunately, one can publish on the www without using ICANN DNS
An individual cannot even mention choosing to publish a personal blog over HTTP without being subjected to a kneejerk barrage of inane blather. This is truly a sad state of affairs
I'm experimenting with non-TLS, per-packet encryption with a mechanism for built-in virtual hosting (no SNI) and collision-proof "domain names" on the home network, as a reminder that TLS is not the only way to do HTTPS
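To give a flavor of the collision-proof names (a deliberately simplified sketch, not my exact scheme): derive the name from the host's public key, so no registry has to hand it out and two hosts can't claim the same name without breaking the hash:

```python
import base64
import hashlib
import os

# Stand-in for a real public key (e.g. the raw bytes of an Ed25519 key);
# the naming scheme only needs the public half.
public_key = os.urandom(32)

# Hash the key and encode a truncated digest as the hostname. Anyone who
# resolves the name can verify it against the key the host presents.
digest = hashlib.sha256(public_key).digest()
name = base64.b32encode(digest[:16]).decode().lower().rstrip("=")
print(f"{name}.home")  # a self-certifying, registry-free "domain name"
```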
It's true we depend on ISPs for internet service but that's not a reason to let an unlimited number of _additional_ third parties intermediate and surveil everything we do over the internet
And this is why it's a good thing that every major browser will make it more and more painful, precisely so that instead of arguments about it, we'll just have people deciding whether they want their sites accessible by others or not.
Unencrypted protocols are being successfully deprecated.
People using the web can choose what software to use. This includes both client software and server software. Arguably the latter ultimately determines whether HTTP is still available on the internet, regardless of whether it is used by any particular client software, e.g., a popular browser
One advertising company, through its popular "free browser" (a Trojan horse to collect data for its own purposes), may attempt to "deprecate" an internet protocol by using its influence
But at least in theory, such advertising companies are not in charge of such protocols, nor of whether the public, including people who write server software or client software, can use them
They're focused on the thing that'll get the most people up and running for the least extra work from them. When you say "push" do you just mean that's the default or are they trying to get you to not use another ACME client like acme.sh or one built in to servers you run anyway or indeed rolling your own?
Like, the default for cars almost everywhere is you buy one made by some car manufacturer like Ford or Toyota or somebody, but usually making your own car is legal, it's just annoyingly difficult and so you don't do that.
As a car mechanic, you could at least tune... until these days, when you can realistically tune only 10-15-year-old models, because newer ones are just locked-down computers on wheels.
It's actually not that bad in most states, some even have exceptions to emissions requirements for certain classes of self-built cars.
Now, getting required insurance coverage, that can be a different story. But even there, many states allow you to post a bond in lieu of an insurance policy meeting state minimums.
I counted by hand, so it might be wrong, but they appear to list and link to 86 different ACME client implementations across more than a dozen languages: https://letsencrypt.org/docs/client-options/
ISRG (Let's Encrypt's parent entity) wrote Certbot, initially under the name "letsencrypt" but it was quickly renamed to be less confusing, and re-homed to the EFF rather than ISRG itself.
So, what you've said is true today, but historically Certbot's origin is tied to Let's Encrypt, which makes sense because initially ACME wasn't a standard protocol; it was designed to become one, but it was still under development and the only practical server implementations were both developed by ISRG / Let's Encrypt. RFC 8555 took years.
Yes, it started that way, but complaining about the current auto-update behavior of the software (not the ACME protocol) is completely unrelated to Let's Encrypt, and is instead an arbitrary design decision by someone at the EFF.
As far as I remember, the certbot / Let's Encrypt client has been a piece of crap since the beginning, especially regarding the auto-download and auto-update (a.k.a. auto-break) behavior.
And acme.sh, by contrast, I couldn't praise enough: it's simple, dependency-free, and reliable!
Honestly, I abandoned acme.sh, as it was largely not simple (it's a giant ball of shell) and that has led to it not being reliable (e.g. silently changing the way you configure acme-dns instance URLs, the vendor building their compatibility around an RCE, etc.)
Onion websites also don't need TLS (they have their own built-in encryption) so that solves the previous commenter's complaint too. Add in decentralized mesh networking and it might actually be possible to eliminate the dependency on an ISP too.
Proton Mail burned CPU time until they found a public key that started the way they wanted it to.
So that address is the public key, serving as an HTTPS equivalent as part of the Tor protocol.
You can ALSO get an HTTPS certificate for an onion URL; a few providers offer it. But it’s not necessary for security - it does provide some additional verification (perhaps).
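For the curious, the brute-force search looks roughly like this (a sketch following the v3 address construction in Tor's rend-spec; real vanity miners iterate keys far more cheaply than regenerating from scratch, and `cryptography` is a third-party dependency):

```python
import base64
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def onion_address(pubkey: bytes) -> str:
    # v3 onion address = base32(pubkey || checksum || version), per rend-spec-v3.
    version = b"\x03"
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

def mine_vanity(prefix: str) -> tuple[Ed25519PrivateKey, str]:
    # Keep generating keypairs until the address starts with the wanted
    # prefix. Expected cost grows as 32**len(prefix): the burned CPU time.
    while True:
        key = Ed25519PrivateKey.generate()
        pub = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        addr = onion_address(pub)
        if addr.startswith(prefix):
            return key, addr

# key, addr = mine_vanity("proton")  # short prefixes are cheap; "protonmail" wasn't
```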
If everyone who wants a human-readable domain did this, it would be environmentally irresponsible. And 'typo' domains would be trivial: protonmailrmez31otcciphtkl vs protonmailrmez3lotcciphtkl.
It's a shame these didn't put in a better built-in human-readable URL system. Maybe a free-form text field, 15-20 characters long, appended to the public key and somehow made part of that key. Maybe the key contains a checksum of those letters to verify the text field. So something like protonmail.rmez3lotcciphtkl+checksum.
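A sketch of one way the checksum could work (the field sizes and encoding are arbitrary choices of mine):

```python
import base64
import hashlib

def labeled_address(label: str, key_b32: str) -> str:
    # Bind the free-form label to the key: alter either one and the
    # checksum no longer verifies. Note it can't stop an attacker from
    # minting a fresh key plus a matching checksum for a typo'd label.
    chk = hashlib.sha256(f"{label}|{key_b32}".encode()).digest()[:3]
    checksum = base64.b32encode(chk).decode().lower().rstrip("=")
    return f"{label}.{key_b32}+{checksum}"

print(labeled_address("protonmail", "rmez3lotcciphtkl"))
# -> protonmail.rmez3lotcciphtkl+<5-char checksum>
```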
But this being said, I think a sort of independent, 'no third parties needed' ethic just isn't realistic. It's the libertarian housecat meme writ large. Once you're communicating with others and taking part in a shared communal system, you lose that independence. Keeping a personal diary is independent. Anything past that is naturally communal and involves some level of sharing, cooperation, and dependency on others.
I think this sort of anti-communal attitude is rooted in a lot of regressive stuff, the 'man is an island' myth and 'great man' nonsense. It then leads to weird stuff like bizarre domain names and services no one likes to use. Outside of very limited use cases, Tor just can't compete.
If you think about it the spirit of the internet is based on collaboration with other parties. If you want no third parties, there's always file: and localhost.
CAs are uniquely assertive about their right to cut off your access.
My hosting provider may accidentally fuck up, but they'll apologise and fix it.
My CA fucks up, they e-mail me at 7pm telling me I've got to fix their fuck-up for them by jumping through a bunch of hoops they have erected, and they'll only give me 16 hours to do it.
Of course, you might argue my hosting provider has a much higher chance of fucking up....
So what does "CA fixes the problem" look like in your head? Because they'll give you a new certificate right away. You have to install it, but you can automate that, and it's hard to imagine any way they could help that would be better than automation. What else do you want them to do? Asking them to not revoke incorrect or compromised certificates isn't good for maintaining security.
Imagine if, hypothetically speaking, the CA had given you a certificate based on a DNS-01 challenge, but when generating and validating the challenge record they'd forgotten to prefix it with an underscore. That could have led to a certificate being issued to the wrong person if your website were a service like dyndns that lets users create custom subdomains (the correctly formed record is sketched below).
Except (a) your website doesn't let users create custom subdomains; (b) as the certificate is now in use, you the certificate holder have demonstrated control over the web server as surely as a HTTP-01 challenge would; (c) you have accounts and contracts and payment information all confirming you are who you say you are; and (d) there is no suggestion whatsoever that the certificate was issued to the wrong person.
And you could have gotten a certificate for free from Let's Encrypt, if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't.
An organisation with common sense policies might not need to revoke such a certificate at all, let alone revoke it with only hours of notice.
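For reference, the correctly formed record under RFC 8555, section 8.4, is easy to construct (a minimal sketch; `account_thumbprint` stands in for the base64url JWK thumbprint of the ACME account key):

```python
import base64
import hashlib

def dns01_record(domain: str, token: str, account_thumbprint: str) -> tuple[str, str]:
    """Return the (name, value) of the TXT record a DNS-01 challenge expects."""
    # Key authorization = challenge token + "." + account key thumbprint.
    key_authorization = f"{token}.{account_thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    # The underscore is load-bearing: "_acme-challenge" is not a legal
    # hostname label, so a user of a custom-subdomain service cannot claim
    # it, whereas a plain "acme-challenge" subdomain is exactly what such
    # a user could register and then answer the challenge with.
    return f"_acme-challenge.{domain}", value
```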
You didn't answer my question. What would the CA fixing it look like? Your hosting example had the company fix problems, not ignore them.
And have you seen how many actual security problems CAs have refused to revoke in the last few years? Holding them to their agreements is important, even if a specific mistake isn't a security problem [for specific clients]. Letting them haggle over the security impact of every mistake is much more hassle than it's worth.
> if you had automatic certificate rotation in place - you paid $500 for a 12-month certificate because you don't
Then in this hypothetical I made a mistake and I should fix it for next time.
And I should be pretty mad at my CA for giving me an invalid certificate. Was there an SLA?
CAs have to follow the baseline rules set by Google and Mozilla regarding incident response timelines. If they gave you more time, the browsers would drop them as a supported CA.
The BRs already have a deliberate carve-out where a CA can notify that their government requires them to break the rules and how they'll do that, and then the browsers, on behalf of relying parties, can take whatever action they deem appropriate.
If you're required to (or choose to) not tell us about it, then, because of active monitoring, when we notice, your CA will likely be distrusted for not telling us. Having the mechanism makes this easier: it's the same way there's an official way to notify the US that you're a spy, so when you don't (because, duh, you're a spy), you're screwed for not following the rules.
The tech centralization under the US government does mean there's a vulnerability on the browser side, but I wouldn't speculate about how long that would last if there's a big problem.
https://www.reddit.com/r/LocalLLaMA/comments/1pp9w31/ama_wit...