Y'all both are making great tools, but consider giving a read-only view of the page when JavaScript is disabled. After disabling JS this gives me a blank white screen: https://athensresearch.github.io/athens/ Same thing when I tried it with Roam.
Nim's defer just wraps the current scope in a try/finally, with the deferred code running in the finally. It is probably better just to use try/finally directly because it's more explicit about what is in the try block. It's not worth it to obscure that just to avoid a new level of indentation...
To me, the benefit of defer over try/finally is code locality: You can easily see whether some action has an associated cleanup, because the 'defer' statement is just before/after the action. With try/finally, the cleanup (which has to be in sync with the action) might be outside your editor window.
Came here to note this exact difference with respect to finally
Consider this code
fp = os.open(x) // imagine file open
defer fp.close()
fp.read()
With defer, one would think I can simply wrap this in a for-loop if I want to open and read a bunch of files. Go doesn’t promise this, but that's not clear until the linter complains. In languages where it “ends at scope”, this is still wrong. If we wrote it as try/finally, the dev would know the finally is outside the loop, or that they need to wrap it in another sub-scope inside the loop {}
The first one is obviously wrong, which I mean in the sense that anyone who knows what defer is will know that, not in the sense that you've posted an obviously bad post (it is a fair question!). defer is a statement, not a declaration, and does not take effect until it is executed like any other statement. It follows standard structured programming rules; line 2 does not execute until line 1 is done. (A rule so simple and obvious we often don't think about it, especially since structured programming has basically won and everything we use nowadays is structured, but it is still a rule that we use in programming.)
for /* whatever */ {
fp = os.open(x) // imagine file open
defer fp.close()
fp.read()
}
This is wrong. Go's linter might tell you it is wrong, but
a) the defers are piling up, and you might run into a too-many-open-files error
b) I can never remember if `fp` is saved in defer closure by reference or value. That is, at the end, even if inefficient, are all pending defers closing the same pointer?
Now in a language where "it ends in scope", the scope hasn't ended until the loop exists. Now in the RAII world, the `fp` being overwritten would have saved the day by automatically closing it, but we are not talking about RAII world.
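On (b), Go's rule can be demonstrated directly: arguments to a deferred *call* are evaluated immediately, at the defer statement, while a deferred *closure* reads the variable's value when it actually runs. So each `defer fp.Close()` in the loop holds its own handle; the problem is only that none of them run until the function returns. A small sketch:

```go
package main

import "fmt"

// deferCapture demonstrates Go's rule: arguments of a deferred *call* are
// evaluated immediately at the defer statement, while a deferred *closure*
// reads the variable's final value when it runs.
func deferCapture() (byValue, byClosure string) {
	fp := "file-1"

	defer func(v string) { byValue = v }(fp) // fp evaluated now: "file-1"
	defer func() { byClosure = fp }()        // closure sees fp at return time

	fp = "file-2"
	return
}

func main() {
	v, c := deferCapture()
	fmt.Println("deferred call saw:", v) // file-1
	fmt.Println("closure saw:", c)       // file-2
}
```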
> Now in a language where "it ends in scope", the scope hasn't ended until the loop exists.

(I assume you mean exits there?)
No, I don’t think that’s correct, in any mainstream language. Each iteration of the loop body is a separate scope. If you declare a new variable inside the loop body, it lives until the end of that iteration then falls out of scope. Next time around the loop, you declare a new, separate variable.
There is a slight grey area around the loop header -- exactly when does the scope start and end? Older C compilers used to disagree about this, but the rules were firmed up in C++ (and I assume in recent versions of C too) and now loop headers use the tightest scope they can.
So I would expect “defer” to run at the end of each iteration, exactly the same as the C++-style RAII case, and that is in fact how it works in every modern language except Go.
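In Go you can recover that per-iteration behavior by hand, by wrapping the loop body in a function literal. A sketch (the log stands in for real open/read/close calls, so nothing here touches the filesystem):

```go
package main

import "fmt"

// processAll wraps each iteration's body in a function literal, which gives
// Go the per-iteration defer scope that "end of scope" languages provide
// automatically.
func processAll(names []string) []string {
	var log []string
	for _, name := range names {
		func() {
			log = append(log, "open "+name)
			defer func() { log = append(log, "close "+name) }()
			log = append(log, "read "+name)
		}() // the defer fires here, at the end of each iteration
	}
	return log
}

func main() {
	for _, entry := range processAll([]string{"a", "b"}) {
		fmt.Println(entry)
	}
}
```

Each "file" is closed before the next one is opened, so nothing piles up.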
Another way to think about it that might be helpful: most languages try to implement defer in a completely static way, where just looking at the syntax, you can figure out exactly where and when defer handlers are going to run. You can allocate all the storage you need at the start of the function, and nothing tricky is required at runtime. If defer handlers are queued up and run as a batch later on, that’s dynamic behavior that needs some extra runtime support, and that’s why most languages don’t do it.
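A small Go program makes that dynamic behavior concrete: each *executed* defer statement pushes another handler onto a runtime-managed stack, and none of them run until the enclosing function returns, in LIFO order.

```go
package main

import "fmt"

// queuedDefers shows the dynamic behavior Go supports: three loop
// iterations queue three handlers, all of which run only when the
// enclosing function returns, last-in first-out.
func queuedDefers(n int) []int {
	var order []int
	func() {
		for i := 0; i < n; i++ {
			defer func(v int) { order = append(order, v) }(i) // v captured by value
		}
	}()
	return order
}

func main() {
	fmt.Println(queuedDefers(3)) // [2 1 0]
}
```

A purely static implementation can't do this, because the number of pending handlers isn't known from the syntax alone.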
That looks neat but why would you need a react-style reconciler to render a webgl scene? It's immediate mode...every frame is rendered according to the latest state available. What is even being reconciled?
Three.js has an imperative/stateful API for constructing and updating objects, not dissimilar to the DOM. So if your state lives in a separate place, then just like the DOM, you'd have to imperatively patch the view state to keep it in sync. Adding a layer that does this syncing automatically makes a lot of sense to me.
Doing it through React seems a little bit odd... but I haven't looked closely enough to understand why/whether this coupling is actually necessary
That makes sense, but it seems like a case of building an abstraction to solve a problem caused by another abstraction. If a scene graph creates a new chore for me that necessitates yet another dependency, I think it'd be simpler to not fuss with these layers at all. That's a choice I don't have with the DOM.
> I think it'd be simpler to not fuss with these layers at all
You should check out some example code for react-three-fiber (e.g. https://codesandbox.io/embed/r3f-bones-3i7iu). If you have any experience with writing raw webgl, it will become abundantly clear that something is gained by going through three.js before returning to React's more declarative style.
Most of the savings in the example are coming from three.js itself (not r3f), which is maybe the key point: three.js offers much more than a scenegraph, which is why it's worthwhile (check out the official examples to get an idea: https://threejs.org/examples/). Webgl as an API doesn't even really talk about 3d objects for the most part; the language it provides is primarily about moving data around in buffers. The difference in possible time savings in three.js vs webgl is on the level of writing in C vs an assembly language.
And once you're building an application with a non-trivial amount of state mutation (on which the view depends)—you're faced with the same dilemma as traditional web dev and the DOM, hence the desirability of react-three-fiber.
That said, I think it would be super interesting to see a three.js alternative that was 'natively' reactive/declarative, because I'll readily admit the tower of abstractions involved in writing a react-three-fiber app has its downsides. (Then again, I consider three.js to be a rare gem, know of nothing comparable in terms of simplicity/quality for building 3d apps, and would be [pleasantly] surprised to see anything like the above anywhere in the near future.)
Because of react's popularity, it's attractive to make a project that applies react's style of development to threejs.
Similarly, around 2013 threeQuery (threejs + jquery) was becoming somewhat popular too (it had a jquery "chain"-like API syntax). It's good to see people experiment and attempt to improve developer efficiency. Who knows what new tricks will be discovered and what kind of benefits and new approaches will be created! However, I mostly agree with you.
I find threejs to be one of the most enjoyable libraries to work with (and its codebase is simple and beautiful too). I also highly recommend that everybody who wants to get into 3d graphics dedicate some effort to learning the actual fundamentals (eg webgl, opengl, metal, matrix math, quaternions, etc).
This way you gain domain-specific knowledge that is applicable across platforms and across time. Abstractions are not future-proof; they are recycled and change according to the latest trends in development. Domain-specific knowledge stays with you forever! If there is an intermediate ground on which people can meet (eg start with react-three or three.js and then dive deeper) that's a win too. Recently I have been advocating that web devs can start learning 3d graphics concepts by just playing with... CSS to familiarize themselves with some of the concepts and then move on from there. This way one can avoid all the overhead around the GL state machines or various libs and focus on the basic concepts first.
You don't really have the choice with graphics either. The GPU keeps mesh and texture data in memory between frames, and I would imagine something similar happens for lighting, etc at some layer in the pipeline. Reconstructing the entire scene every time would not be feasible.
You can still hold on to references to textures and meshes you uploaded to the GPU without using a full-blown scene graph. Some state is necessary, no doubt, but this seems more like unnecessary state that could be replaced by something more direct. But I don't know, I'm not familiar with three.js; the click handlers seem useful.
Regardless: the reason for having a base API be imperative is usually because it's maximally flexible and performant. By your own admission, any reactive API for 3D rendering will at least have to hold on to references to GPU objects in-between iterations, which means it could never be a truly stateless API; it would always be an abstraction over something stateful.
So given that constraint, I think it's better for the base API not to hide that statefulness behind an abstraction, and to leave the abstracting to a higher-level API
it allows you to create self-contained components. that alone will eradicate so much boilerplate and complexity. it has a real pointer event system. it takes care of managing the scene reactively, it disposes of objects it does not need any longer.
you use this for the same reason you would use react for the dom. r3f is not a wrapper that duplicates the threejs export catalogue, it is a renderer/reconciler.
Describing WebGL as immediate mode is a little misleading. The rendering is immediate but the API is definitively not. There is a ton of state that needs to be allocated up front, mutated rather than re-created, and eventually torn down. Buffers, textures, shaders, and uniforms are all retained state. There's also the whole OpenGL state machine.
OpenGL used to have a true immediate mode where you called a function for each triangle vertex, and you didn't need shaders or buffers. That mode is not present in WebGL.
I believe React developers think in terms of Components now as we used to think of Object/modules in the past. I personally find it easier to encapsulate logic into a Component because it seems more tangible than a plain JS file/module. You can also nest React components to compose logic, for example: composing multiple shaders.
> That looks neat but why would you need a react-style reconciler to render a webgl scene? It's immediate mode...every frame is rendered according to the latest state available. What is even being reconciled?
You don't, it might be a case of VRML cargo-culting or something.
While the DOM has some issues which make React handy, Three.js and its "display list" have none of these problems, but I guess shoving React somewhere will translate into better conversions for the authors of that article...
Crawford says nobody truly followed in his footsteps, but I think Jason Rohrer qualifies. The two even shot a documentary together, and the scene where Crawford showed off his Storytron project to Jason was pretty revealing. Jason called it baroque, and Crawford responded that he'd consider his life a failure if the project failed.
Crawford definitely is not doing enough introspection. I hope the man resets and makes an inspiring project without the self-romanticizing or self-pitying.
Crawford's travails remind me of the story of the pottery class (https://excellentjourney.net/2015/03/04/art-fear-the-ceramic...), in which half a studio is graded on the sheer quantity of pots they produce, while the others are graded on making one perfect pot. The "quantity" group ends up making higher quality pots because they've practiced and learned from their failures.
Yeah it's a bit bifurcated. Internally there's a lot of coupling -- HTML embedded directly in the C code and whatnot. This could be resolved if tools could be built on top of fossil, but the CLI code uses `exit` everywhere which makes it impossible to use as a library. I think they made an incomplete attempt at a JSON API, but what you really need is a proper linkable library like libgit2.
You really should look to other ecosystems and see what lessons they've learned. In java, packages are normally "namespaced" by the author's reverse domain name, like `org.lwjgl/lwjgl`.
Since clojure uses maven as well, the same applies, but clojure tools like leiningen decided to create a shortcut: if the group and artifact name are the same, like `iglu/iglu`, they can be collapsed into one name: `iglu`
Well, that just encouraged everyone to choose collapsible names. In retrospect, this didn't buy us much. Who cares about saving a few characters of typing? Most now seem to agree it wasn't a good idea.
When the "collapsed" name falls out of maintenance, the forks will all seem somehow less "official", even if they are much higher quality. Forks are inevitable; why would you want to discourage them?
I finally decided to start using the reverse of my personal domain for my future libraries. The java folks were right all along.
Agree, Go went a similar route as Java and I think that's good as well.
The new tools.deps in Clojure actually is moving to disallow collapsed names for similar reasons and will force iglu/iglu.
Here's a rationale from them:
> The groupId exists to disambiguate library names so our ecosystem is not just a race to grab unqualified names. The naming convention in Maven for groupId is to follow Java’s package naming rules and start with a reversed domain name or a trademark, something you control. Maven itself does not enforce this but the Maven Central repository does for all new projects.
> In cases where you have a lib with no domain name or trademark, you can use a third party source of identity (like github) in combination with an id you control on that site, so your lib id would be github-yourname/yourlib. Using a dashed name is preferred over a dotted name as that could imply a library owned by github.
Can I get a link to the quoted document? The quote raises more questions than it answers. Who determines that someone applying for a qualified name is the owner of that trademark (presumably this is a full-time employee; who pays their salary?), and what is the process? Trademarks are not a universal namespace--even within a single legal jurisdiction you can have the same name legally owned by different people due to different contexts--so who decides who wins?
That said, yeah, this is best intentions, unfortunately. I'm guessing if you own a real trademark, you could actually sue people using your trademark as their group-id.
Otherwise in general they recommend using a registered web domain name. Someone else could take over your domain name as theirs, but I think the registry owner, like maven-central, if you contacted them and could show you own that domain, might be able to take action against the impersonator. Same for a github user.
Actually, thinking about this, I feel it would be great if the repository owner like maven-central required a form of proof of ownership of the domain or the github id. That could add a lot of trust to the whole process.
The quote was a guideline, not a requirement. Cognitect (who makes the clojure CLI tool) doesn't even control clojars, the main clojure maven repo, so they wouldn't be able to enforce that even if they wanted to.
>Forks are inevitable; why would you want to discourage them?
The epitome of this mindset I think is "hostile fork" -- the entire notion is nonsensical. The whole point of being FOSS/OSS is the freedom to fork -- by all damn rights, you should fork as you please, and be pleased to fork!
The actual problem is not forking.. it's community fragmentation, and more importantly loss of a "source of truth". Of course, maintaining that source of truth is otherwise known as centralization, with all the problems that brings, but there's nothing inherently wrong with forking.. that's just the natural specialization and evolutionary processes at work.
The solution is to make it easier to find those top-tier libraries, and this is orthogonal to forking; mainly handled by blog posts and "official" library listings/recommendations, and things like This Week in Rust.... namespacing or not doesn't really get you anything there.
> "hostile fork" -- the entire notion is nonsensical.
It is not. It is based in experience. See the xMule/aMule fork. A hostile fork is when the fork project starts bad-mouthing the original project and its maintainers.
The notion that forking is by itself hostile is non-sense.
If I'm remembering right, something similar kind of happened with uBlock and uBlock Origin, but it was the original maintainer who came back and forked after the new maintainer became hostile, or something like that.
This was discussed in detail in Homesteading the Noosphere by Eric S. Raymond.
I think it's in there somewhere that he compares the right to fork with the right to bear arms: Good to have, but the situation must have gotten really shitty if a fork is a good solution.
I have less experience with CPAN and RubyGems but npm's namespacing system has two very serious problems:
1. It was introduced very late, meaning the community had already formed patterns of contribution around a flawed flat system. This is a problem of the flat system, not of the namespaced one.
2. It is still to this day entirely optional (for understandable backward compat. reasons). This gives namespaceless packages a misplaced position of authority over namespaced ones, which erodes the value of namespacing.
These are tough problems to get around if you start with a flat structure, but they really just outline the urgency of switching to namespaces for a relatively young project.
I agree with a lot of this perspective. It's also directly relevant to our situation, because we are basically in exactly that place now, and dealing with these problems is something that proponents of adding namespaces need to navigate.
This is the only good argument I've heard yet for not adding namespaces. And maybe it's a defeating argument; maybe Crates is doomed to not have namespaces due to the cost of putting them in after the fact.
I'm not sure you followed the above 2 points, or perhaps read them through tinted glasses.
I wasn't arguing that npm's namespacing system is worse than their initial system, nor that their switch to namespacing was a mistake.
The current npm namespaced system, with flaws, is head-and-shoulders better than the previous flat system.
You're saying you did "look and learn". If by that you mean you looked at the end product (npm's is seriously flawed) without looking at the journey to that product (npm's is still a huge improvement over what they started with), then you're not going to learn much from that kind of "looking".
I highlighted Composer/Packagist in a sibling comment as a system you should look and learn from (w.r.t. namespaces).
Choosing to only look at flawed systems that started flat seems like you're just being selective to support your own thesis.
PHP and its ecosystem have a lot of problems, but I think Composer/Packagist is a surprisingly exemplary example of how to go about structuring package management.
Add a "legacy" namespace and move all existing packages there. Allow for a transition period where tooling will add "legacy" to instances where no namespace is given. Add a mechanism for legacy packages to indicate their new namespace so that transitioning could be mostly automatic for package users.
Not effortless, but not necessarily very costly either.
This is not the first time typosquatting attacks of this kind have been uncovered.
Popular repository platforms such as the Python Package Index (PyPI) and GitHub-owned Node.js package manager npm have emerged as effective attack vectors to distribute malware.
"Orthogonal" suggests no connection but what I see above is a list of package managers that don't have namespacing.
They didn't make the claim that no namespaces had anything to do with this, that's an inference you're making from the specific list, when it could be for any number of reasons. For example, these are some of the largest package management ecosystems in the world, so they're more likely to be attacked than smaller ones. (You can of course come back and say that there are other massive ecosystems too, but that's kind of my point: there's more to a discussion than a random article listing a few ecosystems.)
I stated my reasoning in my comment: you can typosquat a namespace just as easily as you can any identifier. I don't see any inherent difference between the two.
Maybe. Regardless of what my parent meant, a lot of people in these discussions imply that we never looked at prior art because we did not make the choices around the tradeoff that they wanted us to make. And we did look at many, many approaches. We just decided to not go in those directions.
It is not a comprehensive list of things considered; it is a list of successful ecosystems we decided to pick something closer to, rather than others we decided against.
And even grouping those together in terms of downsides is not really doing justice to the individual problems that each of those systems deal with.
Java was my introduction to namespacing, so I only suspected but didn't know for a long time that Java overdid namespacing.
Companies change names, they merge. Sometimes they go out of business but stick around as a foundation stewarding their old projects, and you might be going to example.org for years for documentation on a com.example module.
And the namespaces weren't enforced (who is going to stop me from publishing a com.example.foo module?), so it expected much and delivered little.
No namespaces is bad. Five level namespaces are better, but still bad for different reasons. Two might be good. Some might prefer three. But zero is right out.
I agree, the Java namespace system isn’t that good. In fact I hate it. First, because it uses reverse DNS while URLs are commonly written in the opposite order. Second, because the package sub-namespace is enforced with the file system structure, which makes for crazy long names.
On the other hand I really like how C# and dotnet in general handle the matter. Package namespaces are separated from logical (in-code) namespaces. Package namespaces are usually two or three dotted terms, making ownership clear while not bloating the names.
> First, because it uses reverse DNS while URLs are commonly written in the opposite order.
Well, that's more a bad thing about the DNS though.
"toplevel.domainname.subdomain/path" is really how it should be structured. Sun improved on this and made the hierarchy proper.
> Forks are inevitable; why would you want to discourage them?
The conclusion that we don't want to discourage forks may be valid, but this doesn't seem to be good reasoning. Lots of things are inevitable that we want to discourage or delay.
> Well, that just encouraged everyone to choose collapsible names. In retrospect, this didn't buy us much. Who cares about saving a few characters of typing?
This appears to be using evidence to prove the opposite conclusion; if everyone voluntarily chose to use shorter names, then it means that everyone cares about having shorter names. If there is a more substantial argument for why people have decided that the collapsing was mistake, I'd like to read it.
I didn't choose the shorter names because I "care[d] about having shorter names"; I did so defensively, because I figured if I chose `net.sekao/iglu`, someone else would choose `iglu/iglu`, which would imply that theirs was the original or official version.
Another point I didn't mention is that maven was designed from the start to be decentralized; many companies run their own private maven repos but also pull artifacts from maven central. Having group names reduces the chances of collisions between their private servers and a public maven server.
That's a better rationale, although I don't think that really solves your stated problem; as an uninformed user I am still more likely to think that iglu/iglu is the more authoritative source there. Given this, any project that wants to authoritatively own its identifier should probably also register its own top-level namespace... which unfortunately brings us back around to where we started.
It would at least be far less of an issue. I don't see anyone being confused that https://github.com/facebook/react is the official repo, and not https://github.com/react/react. It's the fact that a collapsed name is a shortcut that imbues it with this special stature. And I believe maven central doesn't even allow one-segment group names for new libraries, though clojars obviously does.
I think the instrumenting and generator stuff gets disproportionate attention. For me by far the biggest win from spec has been with parsing. This completely changes how you'd write a library that takes a data structure and parses it into something meaningful (for example, what hiccup does for html or what honeysql does for sql).
In the past, this required a lot of very ugly parsing code and manual error-checking. With spec, you write specs and call s/conform. If it fails, you get a nice error, especially if you pair it with expound. If it succeeds, you get a destructured value that is really easy to pull data out of. I've done this in a half dozen different libraries, and I'm pretty sure I wouldn't have even written them without spec.
I started playing with spec because of the idea of automated test generation, but the reality of it is that I use it as a super-charged validation library.
I think this emphasis actually does the library a disservice in that I see new users ask questions along the lines of "Should I use s/valid? to manually check inputs to my API?" The answer to that, in my usage, is "Yes! Of course!", but many people seem to think that they are using spec wrong if they use it for something other than instrumentation and generation.
I remember doing just that - writing some ugly parsing code, thinking that I should be a good team member and add some specs for what I was doing, and when I tried calling conform... Oh, it did the parsing for me!
How would you pattern match with Clojure without using core.match? Do you mean using another library?
Edit: oh I understand what you mean, you were thinking of skipping the s/conform part, and use core.match directly. Personally, I consider spec an excellent library to describe the shape of data, and core.match allows for better describing the actions you want to take based upon that.
For example, with spec you can define a bunch of alternatives using s/or, and then use core.match to then easily traverse the results.
It’s more a matter of separation of concerns to me. I don’t use core.match for validation or describing shapes of data.
You could call it on every input change, though I'd say spec conforming is not the most performant parsing in the world, so I'm not sure it would be fast enough to run on each input.
OCaml's strong static typing with type inference and pattern matching (with exhaustiveness checking) suits this kind of work quite well; there's plenty of literature around for it.
It's things like this that make me not worry about sticking with opengl. It's supported everywhere, fast enough for my uses, and is exactly the level of abstraction I want to be at. I completed the vulkan triangle tutorial and I cannot imagine needing all those knobs for the games and other things I make. I'm pretty confident that by the time opengl is no longer natively supported, software layers like this will be stable and fast enough.
I'm far from an expert in the field but what annoyed me deeply with OpenGL was the implicit global state and, in particular, the fact that it's very difficult and error-prone to make applications that interact with OpenGL from multiple threads.
In a way I find OpenGL too complicated sometimes, some of its abstractions don't really make a whole lot of sense on modern hardware. Having a lower level API can make some code simpler to write, at least in theory. When I use OpenGL I often find myself thinking "I sure hope the driver will manage to understand what I'm trying to do and not end up doing something silly".
Note that my exposure to OpenGL comes mainly from console emulators though, and that's obviously very biased since emulators need to mimic very quirky behaviors of the original hardware, and that often results in very un-idiomatic GL code.
> in particular, the fact that it's very difficult and error-prone to make applications that interact with OpenGL from multiple threads.
These days I rarely ever bother with threaded OpenGL, and I stick to one context per thread, using OS surface sharing primitives (DXGI, IOSurface, etc.) to communicate.
> the implicit global state and, in particular, the fact that it's very difficult and error-prone to make applications that interact with OpenGL from multiple threads
These things are obviously related.
From my work in OO langs, I believe it is possible to wrap OO code in functional code to some degree without rewriting, assuming that global references can be intercepted and resolved to local ones somehow, and that the state reference that encompasses all the globals is an immutable data struct
That sounds much worse. That's still global state, just encapsulated global state. Any time you say glBindWhatever(), you're assigning global state. And the state obj should be an opaque handle so you don't end up with stale pointers. Should be glEnable(state, GL_BLEND); glDepthTest(state, GL_LEQUAL);.
I recently wrote a tiny game[1] and rendering with WebGPU was reasonably easy as it was mostly clear why/where things were going wrong and there was little state to manage, while WebGL was enough of a hassle that I didn't even manage to get instancing working.
Something like `glBindState()` would encapsulate the global state, but it wouldn't eliminate it. A better approach would be to pass in the state to the draw command like WebGPU/Metal/Vulkan do, which would also allow the driver to amortize the cost of state validation.
The problems of global state persist in WebGL, because (1) if you need to save and restore state inside a single context, WebGL provides no help; (2) the driver is unable to amortize validation as the state is not tied to the individual render pass/pipeline descriptors.
Interesting. When you say "save and restore state", do you mean actually serializing it and loading it later? In what situation would you want to do this? Not doubting that there is one, but I haven't heard of this before.
Typically you use this feature to associate state with an object in your render graph. For example, all of the render state used to render the sky, including shader program, vertex buffers, textures, blend state, etc. can be attached to the sky node in your scene graph. Then you can (basically) have a generic "draw an object in your scene graph" function that just performs the appropriate drawcall with the right state. The performance advantage of this is that the driver can do all the validation once, when you construct the sky object, instead of every frame.
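As a rough sketch of that pattern (all types and names here are hypothetical, not any real engine's API): the render state is bundled into a value that is built and validated once, attached to a scene graph node, and the per-frame draw path just submits it.

```go
package main

import "fmt"

// RenderState is a hypothetical bundle of everything needed to draw an
// object: shader, blend mode, textures, etc. It is validated once, when
// it is constructed, not every frame.
type RenderState struct {
	Shader  string
	Blend   bool
	Texture string
}

// Node is a scene graph node with its pre-built state attached.
type Node struct {
	Name  string
	State RenderState
}

// draw is the generic "draw an object in your scene graph" function: it
// performs no per-frame validation, it only submits the node's state.
func draw(n Node) string {
	return fmt.Sprintf("draw %s [shader=%s blend=%v]", n.Name, n.State.Shader, n.State.Blend)
}

func main() {
	sky := Node{Name: "sky", State: RenderState{Shader: "skybox", Blend: false, Texture: "clouds"}}
	// The per-frame loop is now trivial; the expensive validation
	// happened once, when the sky node was constructed.
	for frame := 0; frame < 2; frame++ {
		fmt.Println(draw(sky))
	}
}
```

This is essentially what WebGPU/Metal/Vulkan pipeline objects formalize at the API level.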
> Pity that they decided to just bother with C99 though.
Isn't this the trend nowadays? I've seen recently that simple languages are trending again, languages like C, Go, Zig, Erlang, Lua. I think we hit the ceiling with mammoths like C++ and Scala.
C99 ensures it can be used with basically every other language and can be compiled for almost every platform. Bindings on top of that for other languages would be nice but can be provided by the community too. If it was C++, Rust, Nim, etc you'd have to write it in a way that allows exposing a C API anyway so the only advantage would be for people creating an implementation of the API getting to use a better language internally.
How do atomics help a high-level graphics API? The implementation can use them, but threading probably shouldn't be part of the interface.
Generics are cool, but auxiliary to any core part of the API, since they're not really usable outside of a macro that wraps real functions.
Can't you (i.e. the programmer) still use them though? You can write C11 code which freely calls the C99 API. By using C99 the API is compatible with a wider range of compilers and callable from a broader range of languages.
(Also where would the API itself have benefited in any significant way from the use of atomics or generics? Recall that compiler support for atomics is optional in C11.)
They are beginning to exist, but it is very unlikely that these will have the stability that OpenGL has. They will likely be more convenient, but if you are building something today that doesn't have very high performance needs, sticking to the OpenGL API is probably a good way to ensure that you won't have to migrate to something else for a very, very long time.
> if you are building something today that doesn't have very high performance needs sticking to the OpenGL API is probably a good way to ensure that you won't have to migrate to something else for a very, very long time.
That is, in fact, precisely the wrong advice.
Vulkan runs on Windows, Linux, and OS X (via MoltenVk). Nothing else does.
OS X runs OpenGL 3.Ancient and has now dropped OpenGL. OpenGL drivers for Linux tend to be laughably worse than the Vulkan drivers.
All of the major gaming companies have basically said "We have no OpenGL jobs. We have a ton of unfilled Vulkan jobs."
If you aren't using DirectWhatever, Vulkan is going to be the only useful 3D API very shortly.
OS X runs OpenGL 4.1 and is still available even if deprecated. If you are considering MoltenVK as a viable option then perhaps Zink over MoltenVK would work in the future.
Note that most of the newer OpenGL features are largely about sending stuff faster to the GPU, not enabling new GPU features - you can do a lot of stuff with OpenGL 4.1. IMO if you are struggling for CPU performance with OpenGL then you might be better moving to Vulkan. But if this isn't your bottleneck then there isn't a reason to not stick with OpenGL.
If OpenGL drivers on Linux are "laughably worse" (though in practice I haven't noticed much of a difference) then the solution is to improve those drivers. That would benefit all the thousands of existing applications too.
> OS X runs OpenGL 4.1 and is still available even if deprecated.
You are correct. I misspoke. I meant to say 4.Ancient since 4.1 is 10 years old now.
> Note that most of the newer OpenGL features are largely about sending stuff faster to the GPU, not enabling new GPU features
For shaders, I certainly find that not true. There are a lot of shader features that got added over 10 years.
And important features like compute shaders and shader storage buffer objects came later than 4.1.
OpenGL 4.1 just ... isn't good in this day and age.
> If OpenGL drivers on Linux are "laughably worse" (though in practice i haven't much of a difference) then the solution is to improve those drivers.
I totally disagree. There simply aren't enough people in the Linux ecosystem to maintain those drivers and Vulkan drivers. I'd rather those developers all work on the newer and better API.
Vulkan on Windows only runs outside the sandboxed app model and doesn't support RDP sessions, because the ICD driver model isn't supported in such scenarios.
ICD OpenGL drivers are also not supported on Windows ARM variants.
Microsoft is driving the effort of OpenGL and related technology on top of DirectX instead.
Also so far there is no Vulkan on PlayStation, and while Switch does support Vulkan/OpenGL 4.5, most titles are either using middleware like Unity (ca 50% of Switch titles) or NVN.
I think the person you responded to was comparing OpenGL to Vulkan middleware wrappers from the perspective of API usage. From that angle, whether or not OpenGL drivers are provided at all is completely irrelevant. The API itself will presumably remain available, implemented under the hood via Zink (or similar) on top of Vulkan (or DirectX, or Metal, etc). Such an arrangement is likely to be more stable than middleware that can change overnight at the whims of its developers.
As verbose as Vulkan may be, it's a well-designed vanilla C API. Some of the future helper libraries may also be in vanilla C, in which case it's not difficult to version and ensure relative stability.
The OpenGL API is standardized and has multiple implementations so it isn't susceptible to some layer/library developer waking up one morning and deciding to break the API to make it "cleaner" or "easier" or whatever.
Hell, even though Khronos tried to do exactly that with the core/compatibility schism, pretty much all existing implementations decided breaking people's code is a stupid move and mostly ignored it (Apple being an exception but Apple hates developers[0] and their OpenGL implementation was always awful - even then, Apple's implementation is still an implementation of a stable API).
In practice it means that your currently working code will remain working in the future and there are good chances that you'll be able to port it in other places with either official or unofficial implementations.
[0] OK, OK, i know, Apple doesn't "hate" developers, they just do not care about them at all and they'd happily break things like a hippo dancing in a glassware shop if they believe that would make their own developers feel better.
Multiple implementations suck because implementors rarely aim for full compliance, and now you have to have as many code paths as there are implementations to account for the differences between them. Best to have a single implementation that everybody agrees on -- like the winner of the graphics API wars, Direct3D.
I don't know what you are talking about. Direct3D is Windows only. Metal is iOS and Mac only. Vulkan is in theory supported on all platforms but UWP doesn't allow it and Mac requires a compatibility wrapper.
If anything the graphics API war is still going on.
> Best to have a single implementation that everybody agrees on -- like the winner of the graphics API wars, Direct3D.
Ah yes, D3D, well known for its broad cross platform support! In all seriousness, would a Linux D3D driver even be legal? I have to assume that major legal or technical barriers exist, otherwise why wouldn't a GPU vendor have developed one at some point?
Mesa actually does support D3D9 natively for AMD (and, sort of, Intel) GPUs via the Gallium Nine project, and there is a branch of Wine that uses it.
But these days that's mostly superseded by DXVK (which implements D3D9 through 11 over Vulkan, kind of like Zink in the OP) and VKD3D (D3D12 over Vulkan).
I don't think a compatibility layer implemented in software is the same as a driver implemented by the vendor that interacts directly with the hardware.
If you use it, then you'll get no help from the platform on getting it to run. Random parts may begin to break, and the intention is likely an eventual removal.
You can do that if you want. You can also just sleep the render thread until inputs are received, if you are sure that only user input can cause changes in the UI. See glfwWaitEvents for example: