> Users want responsiveness instead of page reloads between every action.
Then implement that. There are much more lightweight ways to achieve this than going full react. Especially with modern JS this can even be implemented from scratch.
Take HN as an example. Inline replying, voting etc. are undeniably convenient, but there's no reason to avoid a full page load when paging through results or opening the comment thread of a post. The inline parts take a couple of lines of JavaScript, no bloated js-framework-of-the-month, and the page stays completely usable without JS. A lot of stateful webapps out there aren't any more complex than this, structurally. Bonus: you get a fully working, predictable back button in your browser for free.
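For illustration, here's roughly what that couple of lines could look like for inline replies. The `a.reply` selector, the `X-Requested-With` hint, and the server returning an HTML fragment are my assumptions for the sketch, not how HN actually does it:

```typescript
// Progressive enhancement: intercept clicks on reply links and swap in an
// inline form fetched from the server; without JS the link still works as a
// normal navigation.
document.addEventListener('click', async (ev) => {
  if (!(ev.target instanceof Element)) return;
  const link = ev.target.closest<HTMLAnchorElement>('a.reply');
  if (!link) return;
  ev.preventDefault(); // only intercepted when JS is available

  // Hypothetical hint so the server knows it can return just the form fragment.
  const res = await fetch(link.href, { headers: { 'X-Requested-With': 'fetch' } });
  if (!res.ok) {
    location.href = link.href; // fall back to a full page load
    return;
  }
  link.insertAdjacentHTML('afterend', await res.text()); // inject the reply form inline
});
```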
I would just like to chime in to say that users expect links in webpages and not in apps. I fully expect the webpage of hacker news to behave the way you describe, because it's a webpage.
I recently came across the best webapp I've ever used - LucidCharts. That was an app, and, as such, I didn't have the expectation that everything I did generated a new link, or that I could even link to certain things. BUT, they also struck the balance of providing web behaviors where I expected them.
Even without using authoritative DNS, if we only have a blocklist of IP addresses and some application-level firewall solution, we can examine outgoing HTTP headers in a client-side proxy and filter accordingly.
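As a sketch of what I mean (the blocked hostnames are made up, and this handles plain HTTP only, no CONNECT/TLS tunnelling), a tiny Node/TypeScript forward proxy can inspect every outgoing request before relaying it:

```typescript
import http from 'node:http';

// Hypothetical blocklist; a real setup would load this from config.
const blockedHosts = new Set(['tracker.example.com', 'ads.example.net']);

// Minimal plain-HTTP forward proxy: point the browser at localhost:8080 and
// every outgoing request's target and headers can be examined before relaying.
http.createServer((clientReq, clientRes) => {
  const url = new URL(clientReq.url ?? '', 'http://localhost');
  if (blockedHosts.has(url.hostname)) {
    clientRes.writeHead(403);
    clientRes.end('blocked by local policy\n');
    return;
  }
  const upstream = http.request(
    {
      hostname: url.hostname,
      port: url.port || 80,
      path: url.pathname + url.search,
      method: clientReq.method,
      headers: clientReq.headers, // outgoing headers are visible here for filtering
    },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(clientRes);
    }
  );
  clientReq.pipe(upstream);
}).listen(8080);
```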
I also do not use a popular browser that runs Javascript to send to and retrieve from the internet. That is the root cause of most users' problems. This is the most effective solution, bar none. The third parties users want to avoid almost always depend on Javascript to accomplish their goals.
Connecting a powerful interpreter with potentially full control over the user's computer to the open internet. Then believing this can be safe.
The user is granting use of this interpreter to third parties. In this thread we can see how users struggle to know which third parties can be trusted. All for the sake of keeping that interpreter open to "good" third parties to access at will over the internet. (Why is a good question.)
Early web browsers called on other, separate programs to do specific jobs outside of rendering HTML. Taking a cue from that history, I use simpler, limited programs with no built-in interpreter to do two specific jobs: sending and retrieving.
Third parties can return code in response to requests for content, but I am under no obligation to run the code, let alone run it from a popular browser with a powerful interpreter that is connected to the internet.
Cannot speak for others, but this approach has worked well for me as the www worsens.
This is one of the things that get all weird the more I think about it.
Trying to define an objective term like "simple" is of no use if you can't map it to something in human perception. Let's say we have some all-knowing oracle that can tell us which algorithm for problem X is the simplest one. If nobody would agree, what is the point? Or, if 51% of programmers would say A is simplest while the other 49% consider algorithm B more simple, should we just go "yay majority" and call it a day? In reality we have no way of knowing which one is objectively most simple, because at least regarding this domain, we as humans are unable to make an objective judgment.
Is the simplest solution always the one with the least lines of code? Least number of classes? Singletons vs static classes? Recursive approach vs. iterative approach? C style linked lists (next-pointer in data type) vs. "proper" list class? In general: Could we formalize this, ie. come up with an algorithm how to determine how simple code is?
Oftentimes code that is easier to understand might be harder to work with, while more elaborate code with indirections and abstractions is harder to grasp (simply because it's more LoC/classes, just more reading), but once you've gotten familiar with the code base, much easier to work with. So when you say objectively simple, do you mean "simple to understand so you can tell what it does", or "simple to understand wrt maintaining, extending and fixing"? And which kind of simple should I go for when writing code?
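Just to make the "could we formalize this" question above concrete, here's a deliberately naive scoring function; the weights are arbitrary, which is exactly the problem:

```typescript
// A toy "simplicity score": fewer non-empty lines and fewer branch keywords
// score better. Lower is "simpler" under this (arbitrary) metric.
function simplicityScore(source: string): number {
  const lines = source.split('\n').filter((l) => l.trim().length > 0).length;
  const branches = (source.match(/\b(if|else|for|while|case|catch)\b/g) ?? []).length;
  return lines + 2 * branches;
}

// Two equivalent implementations can rank either way depending on the weights,
// which is why "objectively simplest" is hard to pin down.
const iterative = `
function sum(xs) {
  let total = 0;
  for (const x of xs) total += x;
  return total;
}`;
const recursive = `
function sum(xs) {
  if (xs.length === 0) return 0;
  return xs[0] + sum(xs.slice(1));
}`;
console.log(simplicityScore(iterative), simplicityScore(recursive));
```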
I read a post once from another (claimed?) ex-employee who said that around 2009 or so, Opera wasn't doing so great and they laid off a dev or two, and then after they recovered half a year later, they never re-hired to fill the gap. The poster attributed the technical falling-behind to that layoff. Do you share this impression?
As for me, I still use Opera 12 almost on a daily basis and the main issue I have is not broken websites but inaccessible websites because of HTTPS and Opera not supporting enough recent ciphers. The second most frustrating thing is that the JS engine shows its age performance-wise; pages that make heavy use of it for all kinds of dynamic shenanigans get rather sluggish. So my uneducated guess from these observations is that it should have been possible to keep up if they'd wanted to. It's probably that management simply thought using an engine that someone else maintains for them makes it possible to cut down on devs even more. But that part might just be my make-believe world...
It seems to me to be overly simplistic to take a specific event and say "that's the decisive moment where it all went wrong".
Technically Presto had some unique features; for example the interruptible script engine allowed the browser to feel performant and responsive without having to heavily invest in parallelism via multiple threads or processes. But it also had some architectural differences to other browsers, and never had the market clout to ensure that the Opera-unique features were reflected in platform features and so had to be implemented by the competition, or even to ensure that features that were hard/impossible to implement in Presto did not become required for web-compatibility. For example Presto was unable to implement beforeUnload without a significant rewrite of the core document loading pipeline, but that omission from Presto wasn't enough to prevent sites depending on it when it worked in Gecko/WebKit/Trident. Similarly a lot of effort would have been required to port Presto to multiple processes (similar to the multi-year "e10s" effort for Gecko).
Presto was also highly optimised for memory consumption and so was ideal for running in resource-constrained environments like early smartphones and games consoles. But I think the launch of the iPhone and Mobile Safari changed consumer expectations for the web experience on mobile and Opera didn't manage to respond in an effective way. I have no idea what the optimal response would have been, but if Presto could have achieved double-digit marketshare on high-end mobiles (as opposed to low-end devices running Mini), we might have avoided many of the compat issues that currently affect the mobile web.
Organisationally I think there were other issues; I already spoke about the focus on the particular use case of making a highly integrated, highly configurable, desktop browser product, which doesn't look much like the more successful mass-market browsers today. I think later there were other problems, but I was far away from the executive decision making, so maybe I'm not best placed to comment on what the actual company goal was.
Out of curiosity I recently installed Debian Stretch on my Pentium Pro 200 (dual CPU). The text-based installer warned that my 128MB of RAM wouldn't be enough for it to finish, but luckily it still succeeded. The system boots within a minute or so and is fairly usable. Running X on s3fb with i3 as the WM, the selection of usable GUI programs is admittedly rather limited, though. Still, I guess we really came a long way from the 486 to the PPro. :-)
Isn't the problem rather that we have instructions to control the cache at all? As far as my research goes, things like CLFLUSH were introduced at the same time as SSE2, which was what, the Pentium 4 era? We were apparently doing fine without them before that, even with 4+ CPU systems.
I'm currently trying to get the spectre.c PoC to work without using CLFLUSH or similar, but it doesn't look too promising yet. Then again, I only started an hour ago and lack the ability to think in those creative and twisted ways that lead to the discovery of this whole issue in the first place.
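The general idea I'm playing with (sketched here in TypeScript/Node rather than the C PoC, with buffer sizes and the timing method as pure guesses) is to evict the line by sweeping a buffer larger than the cache instead of flushing it, then timing the re-access:

```typescript
// Evict-instead-of-flush sketch: sweep a buffer larger than the last-level
// cache so the target line is (hopefully) evicted, then time the next access.
// JS timer and JIT overhead may well drown out the single-access difference,
// which is part of why this is hard.
const target = new Uint8Array(64);
const evictionBuffer = new Uint8Array(64 * 1024 * 1024); // assume LLC is well under 64 MiB

function timeAccess(buf: Uint8Array): bigint {
  const start = process.hrtime.bigint();
  if (buf[0] === 255) console.log('unreachable for our data'); // keep the read from being optimised away
  return process.hrtime.bigint() - start;
}

target[0] = 1;                  // touch the target so it is cached
const hot = timeAccess(target); // expected fast: cache hit

let sink = 0;
for (let i = 0; i < evictionBuffer.length; i += 64) sink += evictionBuffer[i]; // one read per cache line

const cold = timeAccess(target); // slower than `hot` if the eviction worked
console.log({ hot: hot.toString(), cold: cold.toString(), sink });
```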
Are you able to easily detect whether your account is currently blocked and add some warning to the front page? Would make it easier than just spamming [email protected] until the mails don't bounce anymore...
I have one domain at hand that I don't use for mail at all, so people counter >= 1. I could set any MX/SPF records you want as long as I don't have to change them every two months or so... ;-)
The only thing really worse there is that the number of people who could audit the code is much smaller. That guy wrote a whole SSL library in assembler, so he probably didn't just finish some assembler tutorial and decide this was a good first little project.
Assembler is usually just considered bad/dangerous by people who have no clue about it and consider it something magical. It's not. At least not significantly more dangerous than C, which is still the language the most fundamental components of everyday computing are based on.
No, I think portability is definitely the worst thing. It's a communication tool. It's perfectly reasonable to want to run this on your Pi or your phone or in your browser or your IoT device of the week. And you can't because of an implementation choice.
I'm certainly not afraid of machine code, I actually get paid to write it. But this just isn't a good choice technically. Though it's impressive and like I said was surely a lot of fun and worth showing off.
Ah yes, I'm probably so focused on just x86 desktop/server in my everyday work that this didn't even occur to me, so I assumed you meant to refer to maintainability/security.
This is some bloke's project and he decided to code it in the tool he was either most comfortable with or wanted to get more practice with. If you want to take inspiration from his work and write an analogous program in ARM assembler for the amusement of running it on whichever device most amuses you, you're free to read his code, learn from it, and then go off to re-implement it in a steam-powered balanced-ternary analytical engine, if you so please.
Please. We are in a world where Heartbleed happened, as well as an untold number of buffer overflow / memory management vulnerabilities through the years. Companies have spent millions developing languages like Rust and Go to replace C & C++ in more security-sensitive applications. Assembly is definitely not just "considered bad/dangerous by people who have no clue about it and consider it something magical".