
I think it's the 50% part that we're meant to pay attention to. For me that's a pretty drastic, "why would you ever put this on the client side now that you know this" kind of thing.


The caveat is that this is in reference to a _landing page_ only. If you try to write an advanced application with vanilla.js and you aren't some JavaScript guru, then you are going to really hate yourself later on -- I guarantee it.

Also, that 50% is only measuring the time to interactive on what I imagine is the first page visit. After everything becomes cached, I would think the benefit is dramatically lower.


[flagged]


That site has been around a while, and is definitely a joke ;) No matter what you check off to include in the library, the generated file is 0 bytes.


Default choices on that page show the following file size:

  Final size: 0 bytes uncompressed, 25 bytes gzipped.


The gzip format has a 10-byte header, an 8-byte footer, optional extra header fields, and the DEFLATE-compressed content.

So, yeah, +25 bytes seems reasonable.
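
You can check that floor yourself in Node (quick sketch; my guess is the extra ~5 bytes the site reports come from an optional header field like a stored filename):

  // Node.js: measure the pure gzip container overhead on empty input.
  const zlib = require('zlib');

  const out = zlib.gzipSync(Buffer.alloc(0));
  console.log(out.length); // ~20 bytes: 10-byte header + 2-byte empty
                           // DEFLATE stream + 8-byte CRC32/ISIZE footer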


Yeah, but it bloats up considerably when you add in the other features.


Dude, it's a joke. Lighten up.


Because it provides a robust, consistent, isomorphic framework, and many people have decided that certain benefits (for us: common tooling, maintenance, a consistent implementation across the org, write-once-run-twice isomorphism) outweigh the costs (time to interactive, download size, etc.). Everyone has to make their own choices, but this news was not surprising and did not change our opinion.
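
For the curious, a minimal sketch of what "write once, run twice" looks like with the classic React APIs (component and data made up; plain createElement so it runs without a build step):

  // App.js -- one component shared by server and client.
  const React = require('react');
  function App(props) {
    return React.createElement('h1', null, 'Hello, ' + props.user);
  }

  // On the server: render the initial HTML string.
  const { renderToString } = require('react-dom/server');
  const html = renderToString(React.createElement(App, { user: 'Ada' }));

  // On the client: attach handlers to that same server-sent markup.
  const { hydrate } = require('react-dom');
  hydrate(React.createElement(App, { user: 'Ada' }),
          document.getElementById('root'));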


Playing devil's advocate for a moment:

So, the developer experience trumps the user experience, in an industry where fractions of a second of load time can cost a company customers and conversions?


Not at all. The objective for a landing page is very different from all the other pages.

Landing Page: Make it load instantly so that a new user clicking on it doesn't get frustrated and close the window before it's ready to go. This is typically a static page. Note the tiny and heavily optimized google.com.

Other Pages: It's sometimes necessary to load larger JS libs gradually, but generally once they are loaded the browser cache makes their size a non-issue.

In summary: It's worthwhile to pay the cost of libraries, but that cost should not be paid by visitors to the landing page, and if the cost is high it might be necessary to load them in phases to preserve a good UX.
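
A zero-framework way to do the phased loading is a dynamic import() on first interaction; something like this (module and function names are made up):

  // Ship a tiny landing page, then pull the heavy bundle in lazily
  // on first interaction. './heavy-widget.js' and mountWidget are
  // hypothetical names for illustration.
  document.getElementById('signup').addEventListener('click', async () => {
    const { mountWidget } = await import('./heavy-widget.js');
    mountWidget(document.getElementById('signup-area'));
  }, { once: true });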


Life is full of trade-offs; gaining 1% more customers for a 10% increase in your engineering costs is not always a good deal. Being customer-centric is good, but not at all costs.


At 1 second, it's closer to a 5% loss. 5 seconds, 20%. 10 seconds is a whopping 50% loss.

And yes, I've seen a lot of landing pages which are not viewable, let alone interactable, for 5+ seconds, an effect exacerbated by mobile.


Better DX = more time to work on UX


This is a very salient point.

Ultimately, many businesses will sacrifice polish and dev testing for features and deadlines on the skull-encrusted altar of short sprint cycles and rapid iteration.


Bad UX and buggy implementation can also cost a company customers and conversion. Sometimes the trade off is worthwhile.


You can't lose customers you never converted in the first place.

Does React really eliminate the potential for buggy implementations? Also, I'm fairly certain that UX has more to do with design than with your choice of framework.


You also have to consider what side benefits libraries provide. E.g., we're likely choosing React (the first time the company is using React) because we're choosing React Native for our mobile apps. This is because we're a very tiny shop with a very limited number of developers. We simply don't have the manpower to hire for or learn both Android and iOS development.

Now, we reviewed a handful of iOS/Android abstractions, and React Native seemed competent -- so we're using it for our first pass. Likely, we'll use React for the web frontend too, because we're using React Native. More familiarity can be a good thing.

As I discussed internally when we were choosing frameworks, if I were just choosing a web frontend, it would never be React. It's not that I think React is bad; it's just that there are a dozen similar frameworks that are faster. It popularized a good new idea (the virtual DOM), but it's just not best of breed, imo. Yet having the same concepts and a very similar codebase between the mobile apps and the frontend is a pretty big deal for us.


It may not eliminate them, but it's much better than my memory of developing interactive interfaces with lower-level libraries like jQuery, which forced me to keep track of everything having to do with the DOM in my own logic.
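
A toy counter shows the difference (simplified, and assumes jQuery / React are already loaded):

  // jQuery-style: you track the state *and* every DOM node it touches.
  let count = 0;
  $('#inc').on('click', function () {
    count += 1;
    $('#inc').text('Clicked ' + count + ' times'); // easy to miss a spot
  });

  // React-style: describe the UI for a given state; the library
  // reconciles the actual DOM for you.
  function Counter() {
    const [count, setCount] = React.useState(0);
    return React.createElement('button',
      { onClick: () => setCount(count + 1) },
      'Clicked ' + count + ' times');
  }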


If their home page is just a bunch of images and hyperlinks, then I imagine removing React made a lot of sense.


'Premature optimization': build it first, optimize later. React helps a lot with the first part.


There is such a thing as premature pessimization: making choices based on intuition about productivity benefits or personal taste that force you down the road to poor performance.

I think it's smart to include performance requirements in your specifications before starting a project (or when taking over maintenance of one, too). Set an upper limit on your TTI or server response times. Turn it into a budget. That's not optimization; that's just engineering.
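
Tooling can even enforce the budget for you. For instance, webpack's built-in performance hints can fail the build when a bundle blows its size budget (thresholds below are arbitrary; a TTI budget needs something like Lighthouse on top):

  // webpack.config.js -- fail the build when bundles exceed the budget.
  module.exports = {
    // ...entry, output, loaders elided...
    performance: {
      hints: 'error',            // 'warning' to nag instead of fail
      maxEntrypointSize: 250000, // bytes of JS/CSS on the critical path
      maxAssetSize: 250000,      // bytes for any single emitted asset
    },
  };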


If you can't gain traction in a market due to lost conversions, you'll never have a chance to optimize it.

Not to mention that some infrastructure choices make it really hard to optimize later, if not impossible (sans re-writes).

Also worth noting: the phrase "Premature optimization is the root of all evil" had a very different connotation when first uttered:

http://ubiquity.acm.org/article.cfm?id=1513451


"'Premature optimization is the root of all evil' is the root of evil"

https://medium.com/@okaleniuk/premature-optimization-is-the-...

http://ubiquity.acm.org/article.cfm?id=1513451

http://www.joshbarczak.com/blog/?p=580

http://scottdorman.github.io/2009/08/28/premature-optimizati...

It's basically an excuse not to think about the performance or implementation of your entire product until the end. Then, when you use a profiler and chop away "10%" here and "20%" there, you still end up with a slow-as-piss program where every function takes less than 1% of total time. Then what do you do?

Oh yeah, what you should have done in the first place. _Think_ about how your program actually functions so you can remove all of the insane systemic architectural decisions you made that each bleed a fixed amount of time on every single function call. That's the difference between an engineer and a script kid. If you're not reasoning about _the entire platform_ (all the way down to the cache lines), you're not engineering. You can abstract all you want, but those abstractions don't magically prevent the underlying hardware and architectural decisions from affecting the product. Abstractions are tools (read: approximations!) for high-level reasoning. They're not magic "you don't have to think [about the low-level implementation]" spells.

Actual experts keep trying to warn you guys about abusing that quote, but nobody apparently ever listens. Stop quoting it like a bible verse, because just like bible verses, everyone forgets the entire context and just uses it like a bloody bumper-sticker slogan.

In the words of Saul Williams: "You have wandered too far from the original source, and are emitting a lesser signal."

The full Donald Knuth quote:

>The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.

That quote sounds much less like a law handed down by God and more like a guideline drawn from experience, no?

Optimizing the wrong things (things that get thrown away as you change your program while working toward your goal) ends up wasting time. But just because that was applicable _in the '70s_ doesn't mean modern programmers have the same "optimize, optimize, optimize" mentality. Modern programmers have the opposite. They're so afraid of having to learn how cache lines work that they use the quote to avoid doing any optimization (or understanding architectural layout tradeoffs). I can't count how many programmers I've met who hear "cache" and their eyes glaze over like it's some magical, incalculable thing.
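
It doesn't have to be magic, even in JS. A tiny sketch of the layout difference (actual speedups vary by engine; the point is contiguous memory, not these exact numbers):

  // Array of objects: every point is its own heap allocation, so a
  // scan over .x hops around memory.
  const points = Array.from({ length: 1e6 },
    () => ({ x: Math.random(), y: Math.random() }));
  let sum1 = 0;
  for (const p of points) sum1 += p.x;

  // Struct-of-arrays with a typed array: all x values sit contiguously,
  // so the same scan streams through cache lines instead.
  const xs = Float64Array.from({ length: 1e6 }, () => Math.random());
  let sum2 = 0;
  for (let i = 0; i < xs.length; i++) sum2 += xs[i];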

Watch a lecture from Mike Acton: on consoles, developers don't have the luxury of shipping poorly written, unoptimized code. They _do_ have to understand the entire platform to ship a competitive title, because any optimization they do, on a fixed platform, means they can deliver a better experience with the extra CPU cycles. An experience that elevates them above their competitors.

https://www.youtube.com/watch?v=rX0ItVEVjHc


Calm down. This is for a single, static landing page.


> Calm down.

A rather unnecessary statement, don't you think?

> This is for a single, static landing page.

Which at one point wasn't a static landing page. Netflix has something of a captive market at this point, but could a newer startup scrambling to gain users afford to make the same choices?


I would hope that a startup scrambling to gain users would be able to explain in a simple static HTML site why their service is worth bothering with.

If I'm not particularly interested, I'm not going to wait around for mountains of JS to download and execute, and the typical content-free startup landing page full of hero images, stock photos, and vague bullet points to load up.


> Calm down.

Please don't break the HN guidelines by being uncivil. Your comment would be much better without that bit.

https://news.ycombinator.com/newsguidelines.html


> and did not change our opinion.

If this comment section is any indication, it's clearly not going to change a React programmer's opinion, because they completely lose their shit if you mention a flaw in React.


This isn't a flaw in React; this is using the wrong tool for the job. React is great, but it doesn't solve ALL problems.


I believe my case was pretty clearly articulated, as are some of the other comments: a thumbnail sketch of the tradeoff comparison we have done and continue to do. I'm sorry you believe the worst in this community.


What would be nice to know is the magnitude of that 50%. If we're talking 50ms down to 25ms, it seems like less of a big deal than, say, 500ms down to 250ms.


No, this is the wrong conclusion to draw.



