Vivado has free and paid tiers. The free tier ("WebPack") supports a myriad of their smaller devices (including the Kria boards). The larger devices (Virtex, etc.), however, are generally only supported by the paid versions of Vivado.
It's inconvenient for hobbyists, sure, but for enterprise uses the cost of Vivado for a team is largely inconsequential (which I suspect is why they get away with this).
The licensing scheme is not terribly robust, either; there are a number of cracked (full) licenses floating around if you know where to look. Given that AMD makes most of their money on hardware sales, and the software is only really useful in conjunction with that hardware, I suspect they don't care very much.
It is clear from the quality that all vendors of EDA software actively hate their paying customers.
Just wondering: why are EDA tools only now starting to get HiDPI support? I'm pretty sure Altera, Xilinx/AMD, etc. haven't bought HiDPI monitors for their own developers!
> avoid the hardest and most essential skill: translating from your language to the other.
The hardest and most essential skill, second only to: not translating from your language to the other :)
(or maybe it should be the other way around; translating is useful but a really hard crutch to kick. Keeping it around will make it hard to keep up while speaking/listening and make reading a slog)
> Keeping it around will make it hard to keep up while speaking/listening and make reading a slog
That's not an argument against translating; it just means you haven't done it enough yet, so you're still too slow.
The first time you translate a basic sentence it might take 10s. 2nd time 5s. And so on. The 100th time it takes 0.05s and you can just say it without thinking. If one just keeps translating, you automatically reach that point.
> especially given how typical allocators behave under the hood.
To say more about it: nearly any modern high performance allocator will maintain a local (private) cache of freed chunks.
This is useful, for example, if you're allocating and deallocating about the same amount of memory/chunk size over and over again since it means you can avoid entering the global part of the allocator (which generally requires locking, etc.).
If you make an allocation while the cache is empty, you have to go to the global allocator to refill your cache (usually with several chunks). Similarly, if you free and find your local cache is full, you will need to return some memory to the global allocator (usually you drain several chunks from your cache at once so that you don't hit this condition constantly).
If you are almost always allocating on one thread and deallocating on another, you end up increasing contention in the allocator, since you will (likely) end up filling/draining from the global allocator far more often than if you kept it all on one CPU. Depending on your specific application, maybe this performance loss is inconsequential compared to the value of not having to actually call free on some critical path, but it's a choice you should think carefully about and profile for.
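To make the shape of that pattern concrete, here's a tiny producer/consumer sketch (Python is used only to show the structure; in a C/C++ service sitting on a malloc-style allocator, this is exactly the layout where the cross-thread refill/drain traffic described above shows up):

```python
# Objects are allocated on one thread and their last reference is dropped on another.
import queue
import threading

q = queue.Queue(maxsize=1024)

def producer():
    for _ in range(100_000):
        q.put(bytearray(4096))   # memory is allocated on the producer thread

def consumer():
    for _ in range(100_000):
        buf = q.get()
        del buf                  # last reference dropped here, so it's freed on the consumer thread

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```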
> The most common provider is cloudflare, but you need to install cloudflared on the machine you’re using to host
This isn't actually (strictly) true. You can use cloudflared on any system which can communicate with your host. This is useful in more realistic deployments, as it means you can install cloudflared in a VM/container and then have it relay your services hosted in other VMs/containers/devices. It isn't helpful here, as "I hosted my website on an iPad (but I now have to have this other real computer plugged in all the time so the iPad works)" is not as zesty :)
That's not necessarily true. Many modern CPUs have uop caches, and it's reasonable to assume implementations will eliminate NOPs before placing lines in the uop cache. A hit in the uop cache negates any slowdown you hoped to achieve, since the NOPs are then neither fetched nor decoded.
That's an interesting question, because the uop cache is filled early in the pipeline, but zero-idioms are only detected around the rename stage.
They might be able to skip a plain `0x90`, but something like `mov rax, rax` would still emit a uop for the cache, before being eliminated later in rename. So at best it would be a fairly limited optimization.
It's also nice because rename is a very predictable choke point, no matter what the rest of the frontend and the backend are busy doing.
This is one of the things which cheeses me the most about LLVM. I can't build LLVM on less than 16GB of RAM without it swapping to high heaven (and often it just gets OOM killed anyways). You'd think that LLVM needing >16GB to compile itself would be a signal to take a look at the memory usage of LLVM but, alas :)
The thing that causes you to run out of memory isn't actually anything in LLVM, it's all in ld. If you're building with debugging info, you end up pulling in all of the debug symbols for deduplication purposes during linking, and that easily takes up a few GB. Now link a dozen small programs in parallel (because make -j) and you've got an OOM issue. But the linker isn't part of LLVM itself (unless you're using lld), so there's not much that LLVM can do about it.
(If you're building with ninja, there's a cmake option to limit the parallelism of the link tasks to avoid this issue).
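(For what it's worth, and assuming a reasonably recent LLVM CMake setup: the knob I believe is being referred to is `LLVM_PARALLEL_LINK_JOBS`, e.g. `cmake -G Ninja -DLLVM_PARALLEL_LINK_JOBS=2 ...`; with the Ninja generator it puts the link steps in their own job pool so only a couple of them run at once.)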
Still worth avoiding having the HN thread be about whether OpenBSD is in general faster than Linux. This is a thing I've seen a bunch of times recently, where someone gives an attention-grabbing headline to a post that's actually about a narrower and more interesting technical topic, but then in the comments everyone ignores the content and argues about the headline.
As I understand it, OpenBSD is similar to Linux 2.2 architecturally in that there is a lock that prevents (most) kernel code from running on more than one (logical) CPU at once.
We do hear that some kernel system calls have moved out from behind the lock over time, but anything requiring core kernel functionality must wait until the lock has been released by whichever other CPU holds it.
The kernel may be faster in this exercise, but is potentially a constrained resource on a highly loaded system.
It was more that there were a couple of particularly frustrating recent examples that happened to come to my attention. Of course this has always been a problem.
Magic links can be used to authorize the session rather than the device. That is, starting the sign in process on your laptop and clicking the link on your phone would authorize your laptop's sign in request rather than your phone's browser. It requires a bit more effort but it's not especially difficult to do.
Wouldn't that be incredibly insecure? An attacker would just need to initiate a login, and if the user happens to click the link, they've just given the attacker access to their account.
The reason why magic links don't usually work across devices/browsers is to be sure that _whoever clicks the link_ is given access, and not necessarily whoever initiated the login process (who could be a bad actor)
> and if the user happens to click the link they've just given the attacker access to their account
Worse: if the user's UA “clicks the link” by making the GET request to generate a preview. The user might not even have opened the message for this to happen.
> Wouldn't that be incredibly insecure?
It can be mitigated somewhat by making the magic link go to a page that invites the user to click something that sends a post request. In theory the preview loophole might come into play here if the UA tries to be really clever, but I doubt this will happen.
Another option is to give the user the choice to transfer the session to the originating UA, or stay where they are, if you detect that a different UA is used to open the magic link, but you'd have to be careful wording this so as not to confuse users.
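For concreteness, here's a minimal sketch of the confirm-via-POST idea from a couple of paragraphs up (Flask-style, in-memory stores, every name made up for illustration). The GET only renders a form; only the POST consumes the token, and it signs in the browser that clicked, so a link-preview bot doing a GET consumes nothing:

```python
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = secrets.token_bytes(32)

pending = {}  # token -> email address the link was sent to (use a real store in practice)

@app.post("/login")
def start_login():
    token = secrets.token_urlsafe(32)
    pending[token] = request.form["email"]
    # send_magic_link(request.form["email"], token)  # email delivery out of scope here
    return "Check your email."

@app.get("/magic/<token>")
def confirm_page(token):
    if token not in pending:
        abort(404)
    # Nothing is consumed or authorized on GET, so a preview fetch is harmless.
    return (f'<form method="post" action="/magic/{token}">'
            '<button>Finish signing in</button></form>')

@app.post("/magic/<token>")
def complete_login(token):
    email = pending.pop(token, None)  # single use: the token is consumed here
    if email is None:
        abort(400)
    session["user"] = email           # the browser that clicked is now signed in
    return "Signed in."
```

The cross-device variant discussed further up would instead record which session requested the token and mark that session as signed in, which is exactly where the "attacker initiates the login" concern comes from.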
That, or a background process that visits links to check for malware before the user even sees the message.
> Isn’t there a way to configure the `a` element so the UA knows that it shouldn’t do that?
If sending just HTML you could include `rel="nofollow"` in the `a` tag to discourage such things, but there is no way of enforcing that and no way of including it at all if you are sending plain-text messages. This has been a problem for single-use links of various types as well. So yes, but not reliably, so effectively no.
And we get back to the original point of the article (sort of). Opening a magic link should authenticate the user who opened the magic link, not the attacker who made the application send the magic link.
This is what makes securing this stuff so hard when you don't have proper review. What seems like a good idea from one perspective opens up another gaping hole somewhere else.
Off the cuff suggestions for improving UX in secure flows just make things worse.
Passwords are (or, rather, SHOULD be) cryptographically hashed rather than encrypted. It's possible to compute a hash over data which is longer than the hash's input block size by feeding the previous hash state and the next input block back in to progressively build up a hash of the entire data.
bcrypt, one of the more popular password hashing algorithms out there, allows the password to be up to 72 characters in length. Any characters beyond that 72-character limit are ignored and the password is silently truncated (!!!). It's actually a good method of testing whether a site uses bcrypt or not: if you set a password longer than 72 characters but can sign in using just the first 72 characters of it, they're in all likelihood using bcrypt.
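You can see the truncation directly with the pyca `bcrypt` package (assuming it keeps the classic truncate-silently behaviour of the underlying implementation rather than rejecting long inputs):

```python
import bcrypt

long_pw = b"x" * 100                          # a 100-byte password
hashed = bcrypt.hashpw(long_pw, bcrypt.gensalt())

print(bcrypt.checkpw(long_pw, hashed))        # True, as expected
print(bcrypt.checkpw(long_pw[:72], hashed))   # also True: bytes past 72 were ignored
print(bcrypt.checkpw(long_pw[:71], hashed))   # False: the 72nd byte does still matter
```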
It's not broken. It's just potentially less helpful when it comes to protecting poor guessable passwords. bcrypt isn't the problem, weak password policies/habits are. Like bcrypt, argon2 is just a bandaid, though a tiny bit thicker. It won't save you from absurdly short passwords or silly "correct horse battery staple" advice, and it's no better than bcrypt at protecting proper unguessable passwords.
Also, only developers who have no idea what they're doing will feed plain-text passwords to their hasher. You should be peppering and pre-digesting the passwords, and at that point bcrypt's 72-character input limit doesn't matter.
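One common way to do that pre-digest-plus-pepper step looks roughly like the following (a sketch only: HMAC-then-bcrypt is one of several reasonable constructions, and how you keep and deliver the pepper is the hard part and is out of scope here):

```python
import base64
import hashlib
import hmac

import bcrypt

PEPPER = b"loaded-from-config-or-a-KMS"   # kept outside the database that stores the hashes

def predigest(password: bytes) -> bytes:
    mac = hmac.new(PEPPER, password, hashlib.sha256).digest()
    # base64 so the value contains no NUL bytes (bcrypt stops reading at the first NUL)
    return base64.b64encode(mac)          # 44 bytes, comfortably under bcrypt's 72-byte limit

def hash_password(password: bytes) -> bytes:
    return bcrypt.hashpw(predigest(password), bcrypt.gensalt())

def verify_password(password: bytes, stored: bytes) -> bool:
    return bcrypt.checkpw(predigest(password), stored)
```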
Bcrypt alone is unfit for purpose. Argon2 does not need its input to be predigested.
It's easy for somebody who knows this to fix bcrypt, but silently truncating the input was an unforced error. The fact that it looks like and was often sold as the right tool for the job but isn't has led to real-world vulnerabilities.
It's a classic example of crypto people not anticipating how things actually get used.
Peppering is for protecting self-contained password hashes in case they leak. It's a secondary salt meant to be situated 1) external to the hash, and 2) external to the storage component the hashes reside in (i.e. not in the database you store accounts and hashes in). The method has nothing to do with trying to fix anything with bcrypt. You should be peppering your input even if you use Argon2.
Right, but peppering was not part of my comment. You can't always pepper, and there are different ways to do it. It's (mostly) orthogonal to the matter.
You do not have to do any transformations on the input when using Argon2, while you must transform the input before using bcrypt. This was, again, an unnecessary and dangerous (careless) design choice.
I don't understand your responses here. Clearly you are not familiar with what problem peppering solves, or why it's a recommended practice, no matter what self-contained password hashing you use. bcrypt, scrypt, Argon2; they are all subject to the same recommendation because they all store their salt together with the digest. You can always use a pepper, you should always use a pepper, and there's only one appropriate way to do it.
And no, you cannot always pepper. To use a pepper effectively, you have to have a secure out-of-band channel to share the pepper. For a lot of webapps, this is as simple as setting some configuration variable. However, for certain kinds of distributed systems, the pepper would have to be shared in either the same way as the salt, or completely publicly, defeating its purpose. Largely these are architectural/design issues too (and in many cases, bcrypt is also the wrong choice, because a KDF is the wrong choice). I already alluded to the Okta bcrypt exploit, though I admit I did not fully dig into the details.
The HMAC-SHA256 construction I showed above, and similar techniques, accomplishes both transforming the input and peppering the hash. However, the others don't transform the input at all or, in one case, transform it in a way even worse for bcrypt's use.
Using memorable passphrases online is always a bad option because they're easily broken with a dictionary attack, unless you bump the number of words to the point where it becomes hard to remember the phrase. Use long strings of random characters instead, and confine the use of passphrases to unlocking your password manager.
To wit, each word drawn from a 10,000-word dictionary adds about 13 bits of entropy. At 4 words, you have (a little over) 52 bits of entropy, which is roughly equivalent to a 9-character alphanumeric (lower and upper) password. The going recommendation is 14 such characters, which would mean you'd need about 7 words.
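Back-of-the-envelope, for anyone who wants to check the arithmetic (the 10,000-word list is just the assumption from the paragraph above):

```python
from math import log2

bits_per_word = log2(10_000)         # ≈ 13.3 bits per word
bits_per_char = log2(26 + 26 + 10)   # ≈ 5.95 bits per [a-zA-Z0-9] character

print(4 * bits_per_word)                    # ≈ 53 bits for a 4-word phrase
print(4 * bits_per_word / bits_per_char)    # ≈ 8.9 -> roughly a 9-character password
print(14 * bits_per_char / bits_per_word)   # ≈ 6.3 -> about 7 words to match 14 characters
```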
The average person will create a passphrase from their personal dictionary of most-used words, amounting to a fraction of that. An attacker will start in the same way. Another problem with passphrases is that you'll have a hard time remembering more than a couple of them, and which phrase goes to what website.