I like to tell people that all the AI stuff happening right now is capitalism actually working as intended for once. People competing on features and price, where we aren't yet in a monopoly/duopoly situation. Will it eventually go rotten? Probably — but it's nice that right now, for the first time in a while, it feels like companies are actually competing for my dollar.
Aaahh, the beautiful free market, where energy prices keep increasing and, if it all fails, they'll be saved by the government they bribed beforehand. Don't forget the tax subsidies, AKA your money. Pure, honest capitalism...
Wanted to chime in here with some thoughts/clarifications as I'm the Engineering Manager of the VM team at Unity, aka the team that is leading the charge on .NET Modernization and the CoreCLR transition (and also owns IL2CPP). Also, speaking here as myself and obviously not on behalf of the company.
First thing is that CoreCLR is _very_ much an active development effort and we're committed to the roadmap we presented at Unite, with at least a CoreCLR-backed Player (aka, what you "Build" when you build your game with Unity) being delivered as a technical preview around the 6.7 release timing. This would basically mean being able to select "CoreCLR" as a scripting backend option (similar to IL2CPP) and see some of the upside and benefit the author mentions here.
That said, Unity isn't a pure C# environment. As lots of people know, there is a robust native layer underlying a lot of the managed (C#) code, operating on a pseudo-ECS design (not literally DOTS/Entities, but an engine architecture thing). This means that a lot of the load-bearing code Unity-the-engine runs every frame is notably _not_ C# code, but instead native code that is, in a lot of cases, already very fast. So for the tight loops of certain engine systems, moving to modern .NET isn't going to carry an implicit performance increase. Said differently, CoreCLR isn't a magic performance bullet for Unity. What we like to say, though, is that "CoreCLR will make C# code faster", so if your game (or a general scripting architecture like the author brings up, with lots of "loose" C#) is very C# dependent, you _will_ see a lot of benefit.
One thing we're starting to investigate is how much performance there is to gain in Unity-the-engine by migrating legacy native C++ code to C# backed by CoreCLR. C# code can be a lot more maintainable, and I'd be lying if I said we really need _every_ managed->native->managed jump we do in the engine, especially with the performance benefit CoreCLR gives us. There are additional wins as well, like getting intrinsic-backed (or JIT'd) SIMD code for any C# we write with APIs like Span<T>, covering plenty of places in native code where we aren't directly checking for processor architecture at runtime or where the compiler misses an optimization. This is especially relevant as we also move the editor to CoreCLR, where we obviously want things to be as fast as possible, and it represents some of the attendant benefits of really focusing on .NET modernization.
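To make the SIMD point concrete (a generic sketch, not Unity engine code): with Span<T> and System.Numerics, the JIT picks the right vector instructions for whatever processor the player is actually running on, which is exactly the kind of per-architecture work we'd otherwise have to hand-roll in native code.

```csharp
using System;
using System.Numerics;

static class SimdSum
{
    // Sums a span of floats using Vector<float> so the JIT can emit SIMD
    // instructions for the current processor (SSE/AVX on x64, NEON on ARM64).
    // Assumes .NET 6+ for Vector.Sum.
    public static float Sum(ReadOnlySpan<float> values)
    {
        var acc = Vector<float>.Zero;
        int i = 0;
        int lanes = Vector<float>.Count;

        for (; i <= values.Length - lanes; i += lanes)
            acc += new Vector<float>(values.Slice(i, lanes));

        float sum = Vector.Sum(acc);

        // Scalar tail for the leftover elements.
        for (; i < values.Length; i++)
            sum += values[i];

        return sum;
    }
}
```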
Regardless, CoreCLR is very much the future of Unity and we're all very excited about it and the progress we're making. The player in 6.7 is the first step and there are lots of other parts (like modern C# lang versions) that will be major boons to development writ large. I (personally, as a game developer) see a lot of awesome things possible downstream of the shift and am (again, selfishly) very excited about what's possible.
This is the first time they've done this in a long time fwiw. So the answer is "they usually never worry about this because it never happens".
That said, they will also throw compiler warnings in the console during a build if you are using an all-lowercase word with some small number of characters. I don't remember the exact trigger or warning, but it says something like "such words may be reserved for future use by the compiler" to disincentivize use.
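If it's the warning I'm thinking of, it fires on type declarations whose names are all-lowercase ASCII (since future keywords like `record` or `required` are lowercase words). A minimal repro, with `data` being just an arbitrary example name:

```csharp
// Declaring a type whose name is all-lowercase ASCII draws a compiler warning
// along the lines of "such names may become reserved for the language".
class data
{
    public int Value;
}
```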
This is the most boomer response. The idea of a long horizon to build your wealth against implies some faith in the future, which a lot of younger people no longer (rightfully) believe in. For a lot of the working public, it seems like they will not get to benefit from the future promised to them, and so they have no stake in working within the system to keep believing in it.
Basically, you can now write scripts in C# without the ceremony of a solution or project file — write some code in a .cs file and `dotnet run myFile.cs` will execute it directly.
You can also add a shebang line to make it directly executable!
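A minimal example of what that looks like (the shebang path depends on where dotnet lives on your machine, and `hello.cs` is just a placeholder name):

```csharp
#!/usr/bin/dotnet run
// hello.cs: no solution, no .csproj.
// Run with `dotnet run hello.cs`, or `chmod +x hello.cs && ./hello.cs`
// on Linux/macOS if the shebang path matches your install.
Console.WriteLine("Hello from a single .cs file!");
```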
Hoping this inspires more people to give C# a go — it's incredible these days. Come in, the water is fine.
That's how I learned C in the 80s. Just compile the C file into an EXE. It's a good way to get started.
That said, I'm certain you've always been able to simply compile a .cs to an .exe? When I ran guerilla C# programming classes in jail, I couldn't get anything from the outside, so I was stuck with the .NET v2 csc.exe, which is squirreled away in a subfolder of Windows on a default install of Vista.
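For what it's worth, that old single-file flow looked roughly like this (the path below is the usual .NET Framework 2.0 location; adjust for your install, and `Hello.cs` is just a stand-in name):

```csharp
// Hello.cs: compiles straight to Hello.exe with the Framework-era compiler:
//   C:\Windows\Microsoft.NET\Framework\v2.0.50727\csc.exe Hello.cs
using System;

class Hello
{
    static void Main()
    {
        Console.WriteLine("Hello from csc.exe!");
    }
}
```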
What .NET 10 adds, though, is the ability to even scrap Main() and just write code like it was BASIC.
You've needed a project file in the past to compile .cs files, and this gets rid of that need. There are things in more esoteric corners of Roslyn, like .csx files, that have allowed similar behavior in the past, but this puts plain .cs files front and center as a scripting solution.
Scrapping Main() has been a thing for a while in dotnet — so-called "Top-level programs" have been in since C# 9/.NET 5, aka about 5 years ago.
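Concretely, with top-level statements the whole file can just be the statements, and the compiler synthesizes the entry point (this is the C# 9+ equivalent of the Main-based file above):

```csharp
// Program.cs: C# 9+ top-level statements; no class, no Main.
using System;

Console.WriteLine("Hello from a top-level program!");
```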
I don't think C# really has bloat — there is generally very little overlap between things they add, and each release they don't add a lot. This release's big thing was better extension method syntax and the ability to use "field" in properties. Each release is about that big, and I feel like the language is largely very easy to internalize and work in.
New features are often more likely to be syntactic sugar than some big new thing.
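For reference, the two headline features mentioned above look roughly like this (my paraphrase of the new syntax, with `Person`, `EnumerableExtensions`, and `IsEmpty` as made-up example names):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Person
{
    // `field` refers to the compiler-generated backing field, so you can add
    // validation without declaring the backing field yourself.
    public string Name
    {
        get => field;
        set => field = value ?? throw new ArgumentNullException(nameof(value));
    }
}

public static class EnumerableExtensions
{
    // New extension-member syntax: an extension block over a receiver,
    // which can declare extension properties as well as methods.
    extension<T>(IEnumerable<T> source)
    {
        public bool IsEmpty => !source.Any();

        public IEnumerable<T> WhereNotNull() => source.Where(x => x is not null);
    }
}
```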
Incredibly disappointing release, especially for a company with so much talent and capital.
Looking at the worlds generated here https://marble.worldlabs.ai/ it looks a lot more like they are just doing image generation for multiview stereo 360° panoramas and then reprojecting that into space. The generations exhibit all the same image artifacts that come from this type of scanning/reconstruction work, all the same data-shadow artifacts, etc.
This is more of a glorified image generator, a far cry from a "world model".
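For context, this is roughly what "reprojecting a panorama into space" means: each pixel of an equirectangular 360° image maps to a direction on a sphere, and a per-pixel depth estimate pushes it out to a 3D point. A generic sketch of that technique, not anything World Labs has documented:

```csharp
using System;
using System.Numerics;

static class PanoramaReprojection
{
    // Lifts one pixel of an equirectangular 360° panorama into a 3D point,
    // given an estimated depth (distance from the camera) for that pixel.
    public static Vector3 PixelToPoint(int u, int v, int width, int height, float depth)
    {
        // Longitude sweeps the full circle; latitude runs from +90° (top) to -90° (bottom).
        float lon = (u + 0.5f) / width * 2f * MathF.PI - MathF.PI;
        float lat = MathF.PI / 2f - (v + 0.5f) / height * MathF.PI;

        // Unit direction on the sphere, then scaled by the depth estimate.
        var dir = new Vector3(
            MathF.Cos(lat) * MathF.Sin(lon),
            MathF.Sin(lat),
            MathF.Cos(lat) * MathF.Cos(lon));

        return dir * depth;
    }
}
```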
To be fair, multiview-consistent diffusion is extremely hard - getting it right is an accomplishment in its own right, and it's still very useful. "World model" is probably a misnomer though (what even is a world model?). Their recent work on frame-gen models is probably a bit closer to an actual world model in the traditional sense (https://www.worldlabs.ai/blog/rtfm).
They have $230m in funding and some of the best CS/AI researchers in the world. People like Skybox labs have already released stuff that is effectively the same as this with far less capital and resources. This is THE premier world model company, and the fact that their first release is a far cry from the promise here feels like a bit of a bellwether.
I agree RTFM is more in the "right" direction here, and what is presented here is a bit of a derivative of that. Which makes this release feel so much more crass, as it seems like a ploy to get platform buy-in from users more than a release of a "world model".
Yeah, I'm likewise a bit underwhelmed by the results.
If you go in with the expectation that it's doing Gaussian splatting from a single image and a prompt, it's phenomenal. If you deviate too far from the original viewpoint it breaks down, but it looks decent long enough to be very usable. But if you go in with the expectation that it's generating "worlds", it's not very good. This only passes as a world in a 20-second tech demo where the user isn't given camera controls.
My best guess is that they are forced (by investors, lack of investors, fear of the AI bubble, or whatever) to release something, and this was something they could polish up to production quality and host with reasonable GPU resources.
I assume this is the case, with a drive to create platform economics on their sharing platform so that there is lock-in by the time anything better releases. This is more of a platform launch than a notable model launch, imo.