The very first example on the page has a pretty good quote describing their usefulness:
> An example of a galactic algorithm is the fastest known way to multiply two numbers, which is based on a 1729-dimensional Fourier transform. It needs O(n log n) bit operations, but as the constants hidden by the big O notation are large, it is never used in practice. However, it also shows why galactic algorithms may still be useful. The authors state: "we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits."
One interesting thing about working with numbers this large is that it becomes harder to assume memory access is constant time. It's already not a great assumption today (cache hit vs. miss, external memory), but at these data sizes memory access latency grows by at least the square or cube root of N, depending on how the memory is physically laid out. An algorithm with largely sequential access can outperform one that requires data-dependent random access by some function of those factors, which can make some algorithms look much better or worse than the classical analysis suggests. Of course, it still doesn't matter whether your algorithm finishes in a couple of trillion years or a trillion trillion years.
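To make the access-pattern point concrete, here's a minimal sketch (not a rigorous benchmark; `permuted_indices`, the LCG constants, and the buffer size are all invented for illustration). Both loops do the same O(n) "constant-time" accesses, but on a buffer larger than cache the random walk is typically several times slower:

```rust
use std::time::Instant;

// Fisher-Yates shuffle driven by a simple LCG, so the example has no
// external dependencies. The seed and multiplier constants are arbitrary.
fn permuted_indices(n: usize, seed: u64) -> Vec<usize> {
    let mut idx: Vec<usize> = (0..n).collect();
    let mut state = seed;
    for i in (1..n).rev() {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        idx.swap(i, (state % (i as u64 + 1)) as usize);
    }
    idx
}

// Sequential pass: the prefetcher and cache lines work in our favor.
fn sum_sequential(data: &[u64]) -> u64 {
    data.iter().copied().fold(0, u64::wrapping_add)
}

// Same total work, but every access is a data-dependent random jump.
fn sum_random(data: &[u64], idx: &[usize]) -> u64 {
    idx.iter().map(|&i| data[i]).fold(0, u64::wrapping_add)
}

fn main() {
    let n = 1 << 23; // 8M u64s = 64 MiB, larger than typical caches
    let data: Vec<u64> = (0..n as u64).collect();
    let idx = permuted_indices(n, 0x9E3779B97F4A7C15);

    let t = Instant::now();
    let a = sum_sequential(&data);
    let seq = t.elapsed();

    let t = Instant::now();
    let b = sum_random(&data, &idx);
    let rnd = t.elapsed();

    assert_eq!(a, b); // identical work, very different wall-clock time
    println!("sequential: {:?}  random: {:?}", seq, rnd);
}
```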
The borrow checker exists to force you to learn, rather than to let you skip learning.
To make an analogy, I think it would be weird if I complained that I had to "memorize the rules" of the type checker rather than learning how to use types as intended.
If the borrow checker only erred on code with bugs, you could call it learning. Or if it were possible to express every correct program in Rust. But that isn't possible in general, so we accept a weaker condition: a subset of all valid code, the part that can be proven correct. The usability of the language tracks how big that subset is, and the OP is expressing frustration at the size of Rust's.
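As a concrete example of that subset being smaller than "all correct programs", here's a sketch of a well-known pattern the current borrow checker rejects even though it's memory-safe (the commented version), together with an accepted rewrite. The function name and types are invented for illustration:

```rust
use std::collections::HashMap;

// The commented version is rejected today even though it never uses a
// dangling reference: the borrow from `map.get` is conservatively treated
// as live on the fall-through path too (a known borrow-checker limitation).
//
// fn get_or_insert(map: &mut HashMap<u32, String>, key: u32) -> &String {
//     if let Some(v) = map.get(&key) {
//         return v; // error: `map` still considered borrowed below
//     }
//     map.insert(key, String::from("default"));
//     &map[&key]
// }

// Accepted workaround: restructure through the entry API.
fn get_or_insert(map: &mut HashMap<u32, String>, key: u32) -> &String {
    map.entry(key).or_insert_with(|| String::from("default"))
}

fn main() {
    let mut m = HashMap::new();
    assert_eq!(get_or_insert(&mut m, 1), "default");
    m.insert(2, String::from("two"));
    assert_eq!(get_or_insert(&mut m, 2), "two");
}
```

Both versions describe the same correct program; only one falls inside the provable subset.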
Rust isn't alone in this, languages with type hints are currently going through the same thing where the type-checker can't express certain types of valid programs and have to be expanded.
Fair enough, but the problem in this analogy is that this learning isn't always useful or productive in any way. This is more like doing arithmetic in a sort of maths notation where every result must be in base 12 and everything else must be in base 16. Sure, you can memorise the rules and the conversions but you aren't doing much useful with your life at that point.
Obviously, the borrow checker has uses in preventing a certain class of bugs, but it also destroys development velocity. Sometimes it's a good tradeoff (safety-critical systems, embedded, backends, etc.) and sometimes it's a square peg in a round hole (startups and gamedev, where fast iteration is essential).
I think it destroys productivity if and only if you don’t roll with it and do things the Rusty way. If you write code with its idioms, it can be a huge productivity boost. Specifically, I can concentrate on fixing logic errors in my code instead of resource bookkeeping. When I refactor something, I know I didn’t accidentally forget to move alloc/free to the appropriate places for the new code: if my changes broke something, it’ll tell me.
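The "refactoring can't forget the cleanup" point can be sketched like this (the `Connection` type and the log vector are invented for illustration): cleanup is tied to scope via `Drop`, so wherever a refactor moves the owning value, the release moves with it.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A guard whose cleanup runs automatically when the owner goes out of
// scope; there is no matching close()/free() call to forget.
struct Connection {
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Connection {
    fn open(log: Rc<RefCell<Vec<&'static str>>>) -> Self {
        log.borrow_mut().push("open");
        Connection { log }
    }
}

impl Drop for Connection {
    fn drop(&mut self) {
        // Runs wherever the refactor moved the owner's scope to.
        self.log.borrow_mut().push("close");
    }
}

fn do_work(log: Rc<RefCell<Vec<&'static str>>>) {
    let _conn = Connection::open(Rc::clone(&log));
    log.borrow_mut().push("work");
} // _conn dropped here, closing the connection

fn main() {
    let log = Rc::new(RefCell::new(Vec::new()));
    do_work(Rc::clone(&log));
    assert_eq!(*log.borrow(), vec!["open", "work", "close"]);
}
```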
Rust shouldn’t “destroy development velocity” once you’ve grasped the core concepts. There is some overhead to being explicit about how things are shared and kept, but that overhead diminishes with time as you internalize the rules.
Not if you're iterating and have to make fundamental changes. Just like certain advanced type systems, encoding too much at compile time means you have to change a lot of code in unnecessary mechanical ways when the design constraints change, or when you discover them.
This is not a bad thing by the way, it's an entirely plausible design choice, and one that Rust made very clearly: rejecting not-entirely-correct programs is more important than running the parts that do work. Languages that want to optimize for prototyping will make the opposite choice, and that's fine too.
Besides, if you still want to skip learning there are escape hatches like Rc<RefCell<T>>, but these hint pretty strongly (e.g. clones everywhere) that something might be wrong somewhere.
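For reference, here's a minimal sketch of what that escape hatch looks like: shared, mutable state where the aliasing rules are checked at runtime instead of compile time.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    let counter = Rc::new(RefCell::new(0));
    let alias = Rc::clone(&counter); // clones the handle, not the data

    *counter.borrow_mut() += 1;
    *alias.borrow_mut() += 1;

    // Both handles see the same value. An overlapping borrow_mut()
    // would panic at runtime rather than fail to compile, which is
    // exactly the check you're opting out of statically.
    assert_eq!(*counter.borrow(), 2);
}
```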
Obtaining a custom storage module seems to involve reverse-engineering the PCB, printing it and sourcing the NAND modules, and then BGA soldering that all together:
https://youtu.be/HDFCurB3-0Q
No, but it is possible, so if it does break you can repair or replace it. Whether that's affordable is another matter of course, but that's the flaw in the "right to repair" laws popping up now: unless I'm mistaken, they don't mandate reasonably priced replacement parts.
> Whether that's affordable is another matter of course,
They are the same course, imo. That it's been done in this way is typical behaviour we see from them, but expected/unsurprising behaviour does not necessarily make it a justification.
As I understand it, you also have to trick the OS into recognizing it in software. I don't know what's involved, but just getting the new NAND modules soldered isn't the whole of the job.
> Cats are also aided by their large and sensitive vibrissae, which are positioned on such locations of their head that the cat can detect nearby obstacles in closer encounters. Vibrissal sensation can compensate for the somewhat weaker vision in cats from closer distances or in poorly illuminated environments. Therefore, it is possible that cats approached the narrow openings in our experiment without differential hesitation, and they could use their vibrissae to assess the suitability of the apertures before penetrating them.
The next message in the thread gives specifics about the PS3 crack referenced in the parent:
> Disclaimer: I have personal experience with him when he took the PS3 ECDSA fail research I and the other fail0verflow folks presented at CCC, and a couple weeks later released the PS3 metldr keys obtained using our method, with zero credit or reference to us. Then Sony sued us all because they assumed we were working with him, even though we'd been careful not to actually release any crypto keys precisely to avoid giving the lawyers excuses to sue us.
> So yes, I was named as a defendant on a lawsuit with him, thanks to his antics. That wasn't a fun few months.
So, it sounds like the individual is complaining that he didn't mention them, but also complaining about Sony suing them, and somehow manages to blame him for Sony's actions? Like, Sony gave them credit they didn't want, but they want to complain about not getting the credit they didn't want?
It's as if you gave an academic lecture on a casino's vulnerability to being robbed, and then someone actually went and robbed the casino. You can be mad at the asshole who robbed the casino for two reasons. A: he's put you in hot water that you would have avoided without the robbery. B: he took credit for discovering the vulnerability (or at least didn't correct people who assumed he must be a real hotshot at casino security design).
The worst part of this critique is that it's not even true, and marcan knows better. He clearly dislikes me so much that he is willing to lie.
The symmetric half of the metldr key was obtained with a novel exploit I found. Then the asymmetric half was derived from that with the fail0verflow method.
But who really still cares about any of this, it was 13 years ago.
AltStore is a bad joke: you can only install two apps (Apple allows three sideloaded apps, and AltStore itself takes one of those slots), and you have to connect your phone to a macOS computer and run the app every week.
Just let me click on an IPA to install it, it's really not that deep!
That’s a limitation created by Apple, not AltServer. Granted, you are using one of your three slots for AltStore, but you can skip AltStore entirely if you don’t want auto-refresh, by just using AltServer to install your IPAs.
But Altstore does run on Windows, Mac and Linux (I use a helper script to run it on my home server)
These are all workarounds for Apple-imposed limitations. It’s by no means perfect, but until Apple changes its mind (or is forced to), your alternatives are jailbreaking (and I don’t think there are any public jailbreaks for the latest hardware/OS), switching to Android (which I’m considering), or paying Apple $100 a year for your own dev cert with fewer restrictions.
For dark-net markets, server-side encryption is already seen as a failsafe rather than a feature, and smart users ALWAYS encrypt client side (PGP). That said, for activities like these, smart users are not necessarily the majority...
Could you link the change? At least a few days ago virt-manager still seemed to have scaling issues with guest displays, on nixos-unstable. I had viewer scaling on though as a workaround, so maybe I just didn’t notice.
It's not perfect. On older versions, it'd sometimes add a black border the size of your scaling factor. It still reports your window size times your scaling factor as internal resolution with guest additions though.