Looking back at XHTML I am reminded again and again that the best technical solution often loses out.
Instead we have HTML5, which contains nothing that couldn't have been easily expressed in XHTML but has none of the strictness that would have enabled much simpler and better browser implementations.
The arguments against XHTML amounted to "but wahh it should still work even if the syntax is wrong". Thing is, none of the major proponents of XHTML were advocating for exterminating the existing doctypes and their associated quirks modes. People could have continued to use those, but XHTML would be there, ready for the advent of non-shit programming to hit the web.
People are starting to come around to the ideas of strictness in programming languages again, with TypeScript and co rapidly gaining in popularity. It's just such a massive shame the entire Web community forced this massive wheel-reinventing cycle on us for no damn reason, completely ignoring the lessons learnt literally everywhere else.
It's the cycle of innovation. Like a cat retching up a hair ball. It repeats over and over, until finally something is coughed up. Except in technology it takes years instead of seconds.
Just like MongoDB gaining mindshare - because webscale - only for people to realise those relationships and structured types were actually quite good at keeping your data clean.
HN folks whine about browser monoculture but it was this decision (and thousands more like it, but this one is pretty big) that meant you cannot have a simple browser implementation. Any 2nd year CS student could write an X(HT)ML parser. Writing an HTML5 parser requires FAANG resources.
> Writing an HTML5 parser requires FAANG resources.
No it doesn't. The HTML5 parsing algorithm is well-defined, and parsing it according to the spec is not an order of magnitude more difficult than properly handling XML according to its spec. What makes browsers difficult does not come from the part that involves ingesting the initial response payload and building it into a tree in a spec-compliant way—what makes things difficult is all the other stuff involved with creating and maintaining a browser.
Invalid XHTML: Throw an error, yell at the user, who should just be the author, since broken XHTML shouldn't have been allowed to make it to the end user.
"Invalid" HTML5: Please refer to section X.Y for the heuristics to apply in this scenario.
"Invalid" HTML4: Uhh, just do whatever? But probably you need to do whatever the other browsers do.
Implementing the other behaviours is more work than not implementing them, and may be more difficult if your internal data structures don't match those of the browser in which a given behaviour was originally introduced, whether as an expedient choice or even as emergent behaviour.
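To make the contrast concrete, here's a rough sketch (browser console assumed, since DOMParser exposes both parsers; the exact shape of the XML error document varies a little between engines):

    // The same malformed markup, fed to the XML parser and the HTML parser.
    const broken = "<p>one<p>two"; // two <p> elements, neither of them closed

    // XML mode: a hard failure, surfaced as a <parsererror> document.
    const asXml = new DOMParser().parseFromString(broken, "application/xml");
    console.log(asXml.getElementsByTagName("parsererror").length > 0); // true

    // HTML mode: the spec's recovery rules imply the missing end tags.
    const asHtml = new DOMParser().parseFromString(broken, "text/html");
    console.log(asHtml.body.innerHTML); // "<p>one</p><p>two</p>"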
HTML5 is expressible as valid SGML with some exceptions such as ARIA and data-* attributes. And SGML, which is a super-set of XML, has rules for both optional elements - e.g. HEAD - and for elements with optional closing tags - e.g. P.
Part of the XML motivation was for something simpler than SGML.
SGML has no opinions on rendering, since it's just a generic markup language. Nor does it have opinions on "what to do with an hr that's a direct descendant of the table despite not being permitted", to pick the first example from the spec (it should be treated as a preceding sibling).
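That particular case is easy to check from a browser console; an illustrative snippet (assuming DOMParser is available):

    // The hr is not permitted inside the table, so the HTML parser
    // "foster-parents" it: it ends up as a preceding sibling of the table.
    const doc = new DOMParser().parseFromString("<table><hr></table>", "text/html");
    console.log(doc.body.innerHTML); // "<hr><table></table>"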
From what I know and have read, disregard for the strictness that comes with XML and XHTML makes HTML5 parsers handle tons of edge cases (https://wiki.whatwg.org/wiki/HTML_vs._XHTML)
For example, HTML has quite a few special cases around self-closing and implicitly closed elements, and the browsers and websites out there did not fully agree on how that should work. So not only do you have to implement and verify these closing rules, you have to gracefully handle every page that gets them wrong, either because past browsers got them wrong or because these special cases are so pointless that no one sane actually memorizes them.
You know, as someone stuck using XHTML within the Java ecosystem for web apps, i'd say that it's reasonably easy and nice to work with. However, i'd also like to suggest that custom tags, at least when implemented in the ways that are popular in those particular technologies (JSF/PrimeFaces) are bad both from a usability and maintainability perspective - it took many years for React to come around and make components easy to create and utilize. XHTML with the semantics of React would be way better.
That said, i'm also stuck in the middle of migrating older versions of frameworks (Spring) to newer ones (Spring Boot) and it's proving to be extremely cumbersome and feels like it'll be impossible to get right without spending multiple months on the migration. Most of the older technologies are considered "dead" nowadays for a reason, though at the same time i agree that they can hold a certain wisdom.
"Actually pretty decent ideas, but completely botched on implementation elsewhere" is a story that we've heard many times in the Java world... I remember the OSGI folks elation as Eclipse turned to OSGI from their custom plug in system, turn to dismay as they saw how it was actually used.
Whereas TCL was a beautiful and perfect implementation of a very silly idea.
Remember in 1994 just before Java, when Sun hired TCL's developer John Ousterhout, then unilaterally announced that TCL would be the official scripting language of web browsers?
I remember working a lot with XML a decade ago. XML was large and clumsy and had problems like undefinable ordering of elements for instance. And I haven't actually encountered a single parser library that did not have some strange issue which made it behave outside the spec. Not to mention editors that happily show "<tag\>" while in reality the file reads "<tag></tag>". Good luck debugging the parser with that one. My bad experience with the XML standard is probably the reason why I never bothered with XHTML. Sure, cool idea, but XML, so meh.
>Not to mention editors that happily show "<tag\>" while in reality the file reads "<tag></tag>"
I believe you mean tag/ as every parser I know of would choke on tag\.
If it is tag/, the XML spec says there is no difference between the two forms, thus most tools and libraries will serialize it as tag/ (I say most because maybe somebody silly somewhere did it differently)
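You can see that behaviour in the XML machinery browsers ship, for example (a quick browser-console sketch):

    // An empty element round-trips to the self-closing form.
    const doc = new DOMParser().parseFromString("<tag></tag>", "application/xml");
    console.log(new XMLSerializer().serializeToString(doc)); // "<tag/>"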
The difference between <tag></tag> and <tag/> is important if you're trying to write "polyglot" xml/html that can be parsed by either an xml parser or an html parser. For example <br></br> is not valid HTML, but a plain <br> is not valid XML. However <br/> works as either. On the other hand, an empty span has to be written <span></span>.
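The span case is the one that trips people up, because the HTML parser silently ignores the "/" on non-void elements. A small sketch of what actually happens (browser environment assumed):

    // In HTML, <span/> is just an open tag; the "/" is ignored,
    // so the text that follows ends up inside the span.
    const htmlDoc = new DOMParser().parseFromString("<p><span/>hello</p>", "text/html");
    console.log(htmlDoc.querySelector("p")!.innerHTML); // "<span>hello</span>"

    // In XML, the same input is an empty span followed by a text sibling.
    const xmlDoc = new DOMParser().parseFromString("<p><span/>hello</p>", "application/xml");
    console.log(xmlDoc.documentElement.childNodes.length); // 2: <span/> and "hello"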
XHTML is supported, you can use it, major browsers did not deprecate it. And forcing it down developers' throats - I'm not sure that would be a wise decision. In the end developers would just stay with HTML 4.
I'm not so sure they would stay in HTML4. Currently HTTPS is being "forced" by having features that are only available over HTTPS (such as, but not limited to, HTTP/2 and HTTP/3) and this has been fairly helpful. XHTML adoption could have been encouraged using similar tactics, and most developers would have gone along.
Custom elements, CSS Namespaces, client-side templating: all things that we had decades ago and that were just dumped. Abandoning XML and XHTML is and continues to be the web's biggest mistake. People can pretend there were good reasons to stick with HTML, but having "On Error Resume Next" baked into your language was a known bad practice even then...
I am optimistic for XML's future though, as the rise of stuff like TypeScript and Rust shows that people do appreciate strict languages nowadays, and WASM means that modern versions of XSLT can be used in browsers even if the browser vendors cannot be bothered to implement them.
I loved xHTML, but it failed for a very simple reason: people designing it were working on a DSL to structure a document. But the vast majority of users were actually trying to make things look pretty.
Most websites are currently blobs of divs. Why? Because the idea of a web page being mainly a document completely missed the part where artists fiddling with messy code would produce things that customers would like better.
It's the same reason we currently see giant images on home pages and 10 MB of static assets to load. Because in the end, the best technical decision doesn't win. The one that gives the result the customers end up preferring wins.
If you have 3 hours to get a design done and have to choose between making it look great or having clean, semantic and maintainable markup, you will have to drop one. And the market will select according to the result.
Another reason for div blobs is that HTML is a language for describing documents, not GUI applications.
So one ends up with a soup of divs, made pretty with CSS, and then replicates the behaviour of a beautiful dropdown fully in JavaScript, ignoring the browser's built-in infrastructure.
Naturally, because this breaks down in every browser, it is then extended with hacks for every corner case.
> If you have 3 hours to get a design done and have to choose between making it look great or having clean, semantic and maintainable markup, you will have to drop one. And the market will select according to the result.
Depends on whether accessibility is a priority. Depending on where you live, it may be mandated by law for larger organizations.
Nah, what will happen is that accessibility will be dropped 9 times out of 10. And one day, many years from that decision, they will receive a notification from the state, stating the site is not accessible. And then the site will be made that way. Maybe.
The situation where a website is made accessible from the start is such a rare occurrence that it cannot be used to explain why xhtml failed.
> The current CSS effort provides a style mechanism well suited to the relatively low-level demands of HTML but incapable of supporting the greatly expanded range of rendering techniques made possible by extensible structured markup. The counterpart to XML is a stylesheet programming language that is... Published in April, 1996, DSSSL is the stylesheet language of the future for XML documents.
I started building a custom JSON representation of HTML templates with embedded functions for importing fragments and generating markup etc., and soon realized I was re-inventing XSLT [0], but kept going anyway. I haven't really seen another template language that can translate one DOM tree to another. It seems like XSLT is the eventual implementation of the DSSSL mentioned in TFA; I'd be curious if anyone can speak to why it never caught on.
Having used XSLT professionally in the early 2000s I think it was that most developers found it hard to reason about due to the recursive nature of the language.
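For those who never touched it, the style in question is: match the parts you care about and let the built-in rules recurse over everything else. A made-up minimal example, assuming the XSLTProcessor browsers still ship (XSLT 1.0); the stylesheet and element names are purely illustrative:

    // A stylesheet with a single rule: wrap <em> in <strong>.
    // Everything else falls through to the built-in templates,
    // which copy text nodes and recurse into child elements.
    const stylesheet = new DOMParser().parseFromString(`
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:template match="em">
          <strong><xsl:apply-templates/></strong>
        </xsl:template>
      </xsl:stylesheet>`, "application/xml");

    const proc = new XSLTProcessor();
    proc.importStylesheet(stylesheet);

    const source = new DOMParser().parseFromString(
      "<p>plain <em>emphasised</em> text</p>", "application/xml");
    const result = proc.transformToFragment(source, document);
    console.log(new XMLSerializer().serializeToString(result));
    // "plain <strong>emphasised</strong> text"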
For me it was understanding the differences between creating output by digging down from a root or by transforming a document by matching parts of it and letting the default rules handle the rest. Which is the basis of my stylesheet that turns an HTML file into its own formatted and highlighted source code:
For Scheme there is SXML, which has SXSLT, SSAX and SXPath. Translating between XML and SXML is trivial, and SXML is just an S-expression representation of XML, meaning it is in the canonical list form.
Working with lists in Scheme is pretty great. I stopped using any other XML tools for my own projects.
Going back to the late 90s, I think these are the only interesting bets taken by the industry:
- Ericsson's Erlang/OTP
- Sun's "The Network is the Computer" and "Utility Computing", IoT - "Jini" (Internet-connected toasters;), and non-PC devices - smartphones, set-top boxes, smart cards - JavaCard is still popular.
- Bell Labs/Lucent's Plan 9 and Inferno OS (neither gained adoption)
- Semantic Web (which never took off).
IMO, XML and Enterprise Java were regressions, but so are REST, JSON and dynamic scripting languages. The most recent example is GoLang.
EDIT: XML itself is a good family of specs (especially XSD, XPath and XSLT); the problem was with abusing XML by forcing it onto everything: from configuration to logs to over-the-wire formats like XMPP. I remember IBM even sold XML Accelerator appliances ;)
A regression from what? (Especially XML, as I'm not even sure what E-Java means.)
I think (as you also mention yourself) that XML is quite good at some things (it's schema based, can mix schemas, has namespaces, etc). Just (as you mention as well) it was overused at some point.
Given we agree so much: what is it that XML was a regression from?
JSON is a regression from XML wrt schemas/strictness; but then it was very low overhead (which was needed, as it gained momentum in places where the client (the JS browser app) and the server (the server side of said browser app) are both maintained by the same organization).
XML abuse was a regression from using ASN.1 or custom binary protocols, which were a much better fit for the hardware of the day.
IMO Java was a dumbed-down C++ for enterprises and an attempt by Sun to escape the Wintel monopoly. We could've had much better mainstream programming languages now than Java or C#. Better PLs mean a higher bar for software quality.
Sun was already doing that with OpenStep, hence why Java has interfaces (protocols) and a dynamic model quite similar to Smalltalk (which got to profit from StrongTalk and SELF research), and why J2EE started its life as an Objective-C framework.
It was the lack of mutual understanding with NeXT that made it fall apart, and eventually Oak was turned into Java.
XML is language-oriented, it's like a typical programming notation but without a parsing step (or, rather, with a standard parsing step). It's a very unusual thing in this aspect and, I guess, this is the source of all the misunderstanding.
E.g. it's very successful in things like layout/UI description, which is naturally language-like. And not that successful in data exchange applications, which is not language-like. (To exchange data with a language-like medium is like integrating programs by constructing their command lines; possible, but way less convenient than by using a library.)
If we are to compare it with something it would be Loglan or Lojban, not JSON. The difference is that it's a working and widely used Loglan. Something like this will have to be central to navigate a vast sea of documents, which is what Web was supposed to be. Web has changed quite a bit, but those documents didn't go anywhere and the problem is as acute as ever. More acute, I would say; in 1997 there wasn't that much content there.
It's so easy to look back at XML with ridicule, sitting on the hill of JSON's pragmatic success. But take a moment to consider what we had before XML. Yes, XML was awesome. Back then you'd consider yourself lucky working with data available in a format that had implementations across multiple operating systems.
XML suffered from the verbosity of having both open and closing tags and the weird uncertainty around attributes vs. child tags, but the real problems came from consultant-driven programming wanting to put XML everywhere and use it for everything. Ant & Maven. SAML. XML for configuration and DI frameworks. *ML everything. Did you know there was a thing called VoxML[1]? Basically XML for phone menus (remember those? Press 1 for English, Press 2 for ... Press 9 to hear these options again).
XHTML was my preference, even had some of the semantics I wish weren’t lost (<h/>). It’s not a hill I want to die on, but I do very much wish there was a successful push for its strictness.
It’s h1-n but its hierarchy was determined by semantic parent structure rather than its own declaration. IIRC you could use similar semantics to the existing h-tags within a given hierarchy with an attribute. And I’m pretty sure this lived through some of the early HTML5 process, but ultimately didn’t make it through.
You can do that in HTML5 via the <section> and <article> tags. An h1 tag within one of these elements is equivalent to a subheading, and nesting <section> elements to create a deeper structure is also allowed.
Seeing this here, I can't help but feel that there was some good stuff in the vision for the future of xml. I'm not saying we should bring back xml, but maybe a few of the ideas should be salvaged.
I'm pretty comfortable saying we should bring back XML. Not for RPC-ish protocols; JSON is a much better fit. But certainly every single place that YAML is used.
XML Schema, unfortunately, won out over RelaxNG (https://relaxng.org/) which has a much more solid mathematical basis and (in my opinion) is much more direct and simple than XML Schema.
Well we certainly didn't build an xml semantic web rendered to xhtml using xslt stylesheets. And xmlhttprequest is mostly used to fetch json. And I don't think xml is used that much for new protocols or file formats anymore either.
I’m working with it. It’s not really that advantageous for this project. The US Code is output in XML and I’m having to write a lot of code to find certain attributes to render on a page. Seems like it will break very easily on XML updates.
The idea of XML being superior to everything was a plague nearly everyone at the time caught. Evangelists would be the chosen ones, or so they thought. But they turned out to be a bunch of con men chasing easy fame. Having once been vaccinated with dynamic programming languages and coding-by-convention, who will ever go back there?
Everything XML wanted to be, JSON did better. Xml validation and namespaces were kind of half way there solutions that just made things more complex than they needed to be.
- JSON allows arbitrary integers to be represented as numeric values. That works fine on its own, but it causes serious problems if you trust the name of the format, "JavaScript Object Notation". You can't do that in JavaScript objects.
- JSON has no comments. Or version designators.
The big problems in XML are named closing tags and the weird distinction between children and attributes. JSON did those things better, by not having them at all. But XML is a long, long way from "like JSON, but worse in every way".
(Another big problem in XML-considered-as-an-ecosystem is the prevalence of people who want to deal with it by using regular expressions. As far as the technology goes, this is a non-issue -- it's a problem with the user, not the technology. But I do have to admit that the JSON equivalent appears to be loading the data into a parser that doesn't work, as opposed to loading the data into a non-parser that doesn't work.)
>JSON allows arbitrary integers to be represented as numeric values.
JSON doesn't mandate the use of binary numbers, parsers can keep number literals as strings and let the user choose whatever binary storage.
>The big problems in XML are named closing tags and the weird distinction between children and attributes. JSON did those things better, by not having them at all.
JSON has this design freedom too: should a collection of values be an object or an array?
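A trivial illustration of that freedom (field names made up): the same data can reasonably be shipped either way, and every consumer has to know which shape was chosen:

    // A list of language preferences as an array of records...
    const asArray = [{ lang: "en", q: 1.0 }, { lang: "fr", q: 0.8 }];
    // ...or as an object keyed by language code.
    const asObject = { en: 1.0, fr: 0.8 };
    console.log(JSON.stringify(asArray));  // [{"lang":"en","q":1},{"lang":"fr","q":0.8}]
    console.log(JSON.stringify(asObject)); // {"en":1,"fr":0.8}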
> JSON doesn't mandate the use of binary numbers, parsers can keep number literals as strings and let the user choose whatever binary storage.
So it is a coin toss whether a perfectly valid JSON file can be processed (read/written - say pretty print) by a perfectly valid JSON library without its contents getting trashed? Quality software engineering right there.
The most obvious issue is when numbers in JSON are not parsed correctly by JSON.parse in browsers and you have to use a custom parser or keep numbers as strings, see this SO question https://stackoverflow.com/q/18755125/333777. I've encountered this several times when big numbers were serialized in Java: the Swagger spec said it was just a number, with nothing about a possible size limit, and you eventually faced the issue in one of the bug reports when stuff broke.
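If you haven't run into it, the failure mode looks like this (the field name is made up):

    const payload = '{"id": 9007199254740993}'; // 2^53 + 1, perfectly valid JSON
    console.log(JSON.parse(payload).id);        // 9007199254740992, the last digit is silently lost
    // Typical workarounds: have the server serialize the field as a string,
    // or use a parser that maps big integers to BigInt/strings instead of Number.
    console.log(JSON.parse('{"id": "9007199254740993"}').id); // "9007199254740993", intact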
Mostly they have to do with people believing you can parse JSON `number` elements as if they were JavaScript `Number` types. Like I said, this particular problem is not internal to the definition of JSON.
Don't confuse JSON with JavaScript. JSON is a string format. It allows arbitrary integers to be represented as numeric values. There is no such thing as implementation-defined behavior, because JSON has no behavior of any kind.
JSON is more than a string format; it's a data interchange format, and the relevant RFC (RFC8259, https://datatracker.ietf.org/doc/html/rfc8259) says that JSON allows implementations to set limits on the range and precision of numbers accepted. It also mentions explicitly that:
* good interoperability can be achieved by implementations that expect no more precision or range than IEEE754 double precision
* for such implementations, only numbers that are integers and are in the range [-(2^53)+1, (2^53)-1] are guaranteed to represent the same number on all of them.
RFC8259 also says that it allows implementations to reject strings that contain the character 'm', or to reject objects containing array values.
> An implementation may set limits on the maximum depth of nesting. An implementation may set limits on the range and precision of numbers. An implementation may set limits on the length and character contents of strings.
But those are not good ideas, and neither is rejecting numbers that are explicitly allowed by the grammar, but happen to be bigger than 9007199254740992.
There also appears to be a contradiction between these directives:
> A JSON parser MUST accept all texts that conform to the JSON grammar.
> An implementation may set limits on the size of texts that it accepts. An implementation may set limits on the maximum depth of nesting. An implementation may set limits on the range and precision of numbers. An implementation may set limits on the length and character contents of strings.
Helpfully, the RFC itself specifies which one should win:
> The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
The discussion of limiting number values glosses over a pretty big logical hole:
> This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.
Of course, integer values of more than 54 bits are also generally available and widely used. A JSON number such as 36028797018963968 does not suggest that it expects receiving software to have any greater capability for numeric magnitude or precision than is widely available. The actual reason for the mention of integer value restrictions is not discussed, but it is mentioned earlier in the RFC:
> JSON's design goals were for it to be minimal, portable, textual, and a subset of JavaScript.
These goals were not achieved, but they are still corrupting the discussion of numeric values. This RFC is a dog's breakfast. Writing "arbitrary noncompliance will be allowed" makes a mockery of the idea of a standard, and according to its own terms, the document does not even allow the restrictions it claims to allow. This document doesn't do anything except attempt to legitimate any and all existing or future "JSON parsers". There is no JSON data which, according to the lowercased restriction allowances, can be guaranteed to be accepted by a "compliant" JSON parser.
It doesn't reject those numbers, but it doesn't preserve the value across encoding/decoding. And this is not limited to JSON parsers for Javascript, it's quite hard to find a C library that doesn't have the 2^53 limitation.
Regarding limitations on text size, you can always return an out of memory error. As long as it's not a _parsing_ error, it's technically fine, you are still accepting all texts that conform to the JSON grammar but telling the client that there's not enough memory to store the parsed output.
Which has proven to have zero real-world use case.
> - JSON allows arbitrary integers to be represented as numeric values. That works fine on its own, but it causes serious problems if you trust the name of the format, "JavaScript Object Notation". You can't do that in JavaScript objects.
That's not a problem with JSON, that's a problem with standardized JavaScript. I believe JSON was named before there was an official standard that required JavaScript implementations to be terrible.
A lack of comments is never good, especially when you want to provide a workable example of what an entity looks like, while annotating its contents, but at the same time allowing it to be parseable. Furthermore, i wouldn't scoff if i ever saw comments about non-trivial fields in web APIs actually being returned, to better explain how to use them. Of course, at the same time i believe that something like that would be better suited as a part of WSDL, WADL or XSD schemas, but JSON and the technologies around it have essentially done away with strict schemas, which makes using them about as reassuring as dynamic languages - i.e. unreliable.
Far too few people do properly versioned APIs and far too few people do OpenAPI specs that are generated from code automatically and are publicly available for the above to be a moot point. Whereas with XML based services, i could open the WSDL/XSD file for a 15 year old API and know what's going on within 10 minutes. It seems like the industry has lost that bit of wisdom somewhere along the way of chasing agility - the same way that knowing how to generate code and operate with metamodels and models has also been tossed aside.
You can't expect a number from an API and get a boolean back, without your system breaking. There should be contracts between any two parts of a system, or any interlinked systems.
It's inexcusable to have breaking changes without doing something to give users the ability to react to them before breakage: be it changing the signatures and deprecating the old methods for libraries (which would be picked up by CI, which would refuse to build the code until these are addressed), or changing a WSDL/WADL/OpenAPI service description, which would then propagate into failing integration tests before new versions were deployed.
You'd get a JSON schema of v14 and then another of v15 which would introduce breaking changes, but all of the downstream systems would see the changes and essentially figure out that they cannot use this new API (ideally, in a scheduled and automated process).
So essentially, it would be like this:
- you have an old API version that is used
- for example: your-app.com/api/v14/pictures/cats/bambino?size_x=640&size_y=480&page=5
- the new API version would get released, which would change paging semantics
- your-app.com/api/v15/pictures/cats/bambino?size_x=640&size_y=480&count=10&offset=40
- the CI system would pick up changes from /api/v14.json and /api/v15.json service descriptions
- it would detect breaking changes and developers would be alerted (either automatically as a part of integration tests, or when trying to update integrations)
- they'd update the API integration to address these issues, before the old would eventually be sunsetted
Of course, sadly most companies out there aren't interested in versioning their APIs or even providing any sort of service description, because all of that costs money and time. So under the motto of "Move fast and break things", the focus ends up being on the second part.
> I believe JSON was named before there was an official standard that required JavaScript implementations to be terrible.
Nope. The ECMAScript standard released in 1999 already specifies:
> In ECMAScript, the set of values represents the double-precision 64-bit format IEEE 754 values including the special “Not-a-Number” (NaN) values, positive infinity, and negative infinity.
Care to explain more? I thought they were over-engineered compared to bare-bones JSON. E.g. JSON Schema is still catching up to XML Schema and the ecosystem it had back then (XSLT, XPath), etc.
XML is of course heavyweight, but we should compare YAML against XML, not JSON; JSON doesn't have e.g. comments, CDATA, etc.
DTD validation and XML Schema were overly complicated solutions that didn't cover a lot of validation cases and should have just been implemented via libraries in the target language instead of making developers maintain yet another half-baked DSL. In the end, "everything is JavaScript" was the better way.
Take the (rhetorical) question in context. The claim was that "Everything XML wanted to be, JSON did better." Of course it can be _done_. The point to defend is that doing it like this in JSON is _better_.
It's self-evident that XML is better for the mixed content problem (which makes sense, given that mixed content is something that it was consciously designed for). Anyone arguing otherwise is, ironically, approaching the issue by narrowly considering only the things that JSON is good for and then evaluating XML against that—e.g. without it ever occurring to them that they should consider mixed content, as in the example given—or they're deluding themselves/being dishonest.
Of course we agree that the JSON representation is verbose. The question is: what is harder to use in a program? A verbose JSON representation that is just dictionaries and lists, or a concise XML one that is represented and manipulated using the DOM API?
> The question is: what is harder to use in a program?
In your imagination, maybe, but otherwise, no, it's not. The claim being prosecuted is that "Everything XML wanted to be, JSON did better." (It would be bad enough to lose sight of this once, but I explicitly repeated it for your benefit in the comment that you are directly replying to. This iteration makes #3.)
Douglas Crockford explicitly refused to allow comments in JSON because he saw people using them to faithfully encode XML metadata in them. So actually you could blame XML for that too.
Now YAML is mostly replacing JSON in places where you want human readability, and JSON is only used for data exchange.
XML is the finest example of cargo cult engineering.
"The Web was a huge success, there must be a reason. Oh I know, it must have been HTML. We need more of that, and bigger, extensible! Behold, XML!"
I remember shaking my head in wonder during the height of the XML hype. What were all those supposedly very smart people thinking?
To me, HTML was already a huge mistake; the Web succeeded in spite of it, not because of it. What could you do in HTML/XML that you couldn't do in s-expressions, only simpler, more readable, and more writable? How many man-years have we collectively wasted having to read/write/generate/process/validate/... the mess that is HTML/XML, as opposed to a simpler, more sensible format such as s-expressions?