Just musing along with you here but I think it's really hard for anything like that to happen. What seems at least halfway likely is that Valve won't be the same post-Gabe. But there will be other companies that end up with a similar ethos, and we can support those companies as best we can.
I'm a huge fan of the OSS model of keeping your core business fully unrelated to OSS but allowing and encouraging the use and contribution to OSS by people on your payroll because it really is a rising tide effect. There are just too many stories of a cool project becoming a company only to eventually reverse-robinhood the project into a closed source for-profit product.
Well, we don't know exactly how involved Gabe Newell is with the actual running of the company nowadays, or how they go about their governance.
From what I see it seems like the culture of the company is shared between the leadership roles so it might be possible for the company to continue doing as it has been doing after Gabe.
I think the people at Valve are smart, understand their business and the company very well, and are taking this issue seriously too.
Good governance exists, it's just that for most companies there's not really an interest in having that because it gets in the way of personal interests of people that are already entrenched in power.
Objectively, I'm not sure we can reliably say any longer, given how much pressure has been put on formerly objective reporting agencies to conform to this administration's narrative.
I think it would be pretty hard to argue against that point of view, at least thus far. If DOS/Windows hadn't become the dominant OS, something else would have, and a whole generation of engineers cut their teeth on their parents' Windows PCs.
There are some pretty zany alternative realities in the Multiverses I’ve visited. In one, Xerox PARC never went under and developed computing as a much more accessible commodity. In another, Bell Labs invented a whole category of analog computers that supplanted our universe’s digital computing era. There’s one where IBM goes directly to supercomputers in the 80s. While Microsoft undoubtedly did deliver for many of us, I am hesitant to say that was the only path. Hell, Steve Jobs existed in the background for a long while there!
I wish things had gone differently too, but a couple of nitpicks:
1.) It's already a miracle Xerox PARC escaped their parent company's management for as long as they did.
2.) IBM had been playing catch-up on the supercomputer front since the CDC 6600 in 1964. Arguably, they did finally catch up in the mid-to-late 80's with the 3090.
Yeah, I'm absolutely not saying it was the only path. It's just the path that happened. If not MS, maybe it would have been Unix and something else. Either way, most everyone today uses UX based on Xerox PARC's, which has been generously borrowed by, at this point, pretty much everyone.
If Microsoft hadn't tried to actively kill all its competition then there's a good chance that we'd have a much better internet. Microsoft is bigger than just an operating system, they're a whole corporation.
Instead they actively tried to murder open standards [1] that they viewed as competitive and normalized the antitrust nightmare that we have now.
I think by nearly any measure, Microsoft is not a net good. They didn't invent the operating system, there were lots of operating systems that came out in the 80's and 90's, many of which were better than Windows, that didn't have the horrible anticompetitive baggage attached to them.
Alternatively: had MS Embraced and Extended harder instead of trying to extinguish ASAP we’d have a much better internet owned to a much higher degree by MS.
A few decades back Microsoft were first to the prize with asynchronous JavaScript, Silverlight really was flash done better and still missed, a proper extension of their VB6/MFC client & dev experience out to the web would have gobbled up a generation of SaaS offerings, and they had a first in class data analysis framework with integrated REPL that nailed the central demands of distributed/cloud-first systems and systems configuration (F#). That on top of near perfect control of the document and consumer desktop ecosystems and some nutty visualization & storage capabilities.
Plug a few of their demos from 2002 - 2007 together and you’ve got a stack and customer experience we’re still hurting for.
Silverlight is only “Flash Done Better” if we had the dystopia of Windows being the only desktop operating system. Silverlight never worked on Linux, and IIRC it didn’t work terribly well on macOS (though I could be misremembering).
In fact all of your points are only true if we accept that Windows would be the only operating system.
Microsoft half-asses most things. If they had taken over the internet, the entirety of it would likely be even more half-assed than it already is.
What’s funny is that we were some bad timing away from IBM giving the DOS money to Gary Kildall and we’d all be working with CP/M derivatives!
Gary was on a flight when IBM called up Digital Research looking for an OS for the IBM PC. Gary’s wife, Dorothy, wouldn’t sign an NDA without it going through Gary, and supposedly they never got negotiations back on track.
I'm not sure I understand this logic. You're saying that the gap would have been filled even if their product didn't exist, which means that the net benefit isn't that the product exists. How are you concluding that whatever we might have gotten instead would have been worse?
And how does it follow that microsoft is the good guy in a future where we did it with some other operating system? You could argue that their system was so terrible that its displacement of other options harmed us all with the same level of evidence.
I'm not convinced of your first point. Just because something seems difficult to avoid given the current context does not mean it was the only path available.
Your second point is a little disingenuous. Yes, Microsoft and Windows have been wildly successful from a cultural adoption standpoint. But that's not the point I was trying to argue.
My first comment is simply pointing out that there's always a #1 in anything you can rank. Windows happened to be what won. And I learned how to use a computer on Windows. Do I use it now? No. But I learned on it as did most people whose parents wanted a computer.
The comment you were replying to was about Microsoft.
Even if Windows weren't a dogshit product, which it is, Microsoft is a lot more than just an operating system. In the 90's they actively tried to sabotage any competition in the web space, and held web standards back by refusing to make Internet Explorer actually work.
I'd just like to point out that you've said OTel has fundamental problems, and then you pointed out a couple examples of one-time-fixable transient problems.
These are issues you'd experience with anything that spans your stack as a custom telemetry library would.
There is very much an alternative. Looking at the execution of your code should never alter its fundamental performance the way otel is built to do. This was a solved problem at least a decade and a half ago, but the cool kids decided to reinvent the wheel, poorly.
dtrace was meant for an entirely different use, and it's not a replacement for otel
Otel was made to basically track the request execution (and anything that request triggers) across multiple apps at once, not to instrument an app to find slow points
To OP’s credit though the latter is exactly what every single piece of otel documentation pushes you to do. Using only the manual spans api is an exercise in api docs spelunking and ignoring “suggested best practices” and “only do this if everything else has failed for you”.
It's more than a couple. The fundamental issue is not the bugs themselves (those are expected) but that, from my perspective, otel is at odds with the observability business, because these vendors have little incentive to contribute back to the telemetry agents when anyone can reap the rewards of that work. So instead they focus more on their platforms, and the agents/libraries get neglected.
It's a great idea, in principle, but unless it gets strong backing from big tech, I think it'll fail. I'd love to be proven wrong.
> otel is at odds with the observability business, because these vendors have little incentive to contribute back to the telemetry agents when anyone can reap the rewards of that work.
That's kind of how open source works, though. Of course the backend vendors won't care about anything that doesn't affect the backend somehow. But the people, i.e. users, who do want to be able to easily switch away from bad vendors, have incentives to keep things properly decoupled and working.
The license is the key enabler for all of this. The vendors can't be all that sneaky in the code they contribute without much higher risk of being caught. Sure, they will focus on the funnel that brings more data to them, but that leaves others more time to work on the other parts.
This argument doesn't work with such commoditized software. It's more like comparing an oil change for $100 plus an hour of research and a short drive against a convenient oil change right next door for $2,500.
Nobody is forced to go to the expensive one. If they are still in business then enough people apparently consider it a reasonable deal. You might not, but others do. Whether I'm being downvoted or not.
> It's current year, why are JWTs only supported in the closed source/enterprise versions of Varnish, NGINX, and Traefik?
I've found auth at the proxy to be a major antipattern. It adds a semblance of your backend being secure without adding the real user authentication and authorization it should have directly.
A VPN is the better tool if you want to keep certain projects hidden from the general public, and your application should handle the JWT (hopefully in current year we're talking OIDC or some other open standard on top of JWT) itself in order to properly enforce access controls.
With JWTs I don't do anything at the proxy beyond "This is a protected route. Is there a JWT? Is it valid? No to either? 403." This is one of the primary use cases for JWTs and it takes a majority of the load off of my application servers.
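That check is small enough to sketch with the stdlib alone (illustrative only: the secret is a stand-in for real IdP keys, only the HS256 signature is verified, and no exp/aud claims are checked):

```python
# Stdlib-only sketch of the proxy-side gate described above:
# "Is there a token? Does its HS256 signature verify? No to either? 403."
import base64
import hashlib
import hmac
import json
from typing import Optional

SECRET = b"demo-secret"  # illustrative; real setups verify against IdP keys

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(payload: dict) -> str:
    # Build a signed header.payload.signature token
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def gate(token: Optional[str]) -> int:
    """Return 403 for a missing or invalid token, 200 otherwise."""
    if not token or token.count(".") != 2:
        return 403
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest())
    return 200 if hmac.compare_digest(sig, expected) else 403

print(gate(make_token({"sub": "alice"})), gate("not.a.jwt"), gate(None))  # 200 403 403
```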
The route is open to the public for authenticated and authorized users. You wouldn't use a VPN here.
That's really just added work, IMO, and it likely leaves room for security misconfiguration between backend and proxy. You should still validate everything on the application server to inspect identity and possibly attributes like roles, so with invalid tokens you do the work once, just in the proxy instead of the backend, and with valid tokens you do the signature validation work twice.
Have you used JWTs in production? Better to bounce a bad JWT with a server written in C/C++/Rust/Go at the edge than to pass it back and have it tie up a Python or Node process.
Even in Python the time to validate a small JWT is negligible. At the edge it's nearly imperceptible.
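A rough stdlib measurement supports that: this times only the HMAC-SHA256 verify at the core of an HS256 check (full JWT handling adds some base64/JSON overhead, and RS256 is slower, but all of it is tiny next to request-handling cost):

```python
# Rough timing of one HMAC-SHA256 verification, the core of an HS256 JWT check.
import hashlib
import hmac
import timeit

SECRET = b"demo-secret"                 # illustrative key
signing_input = b"header.payload"       # stand-in for the first two JWT segments
sig = hmac.new(SECRET, signing_input, hashlib.sha256).digest()

def verify() -> bool:
    candidate = hmac.new(SECRET, signing_input, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, sig)

runs = 100_000
per_call = timeit.timeit(verify, number=runs) / runs
print(f"~{per_call * 1e6:.1f} microseconds per verification")
```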