
I recently had my Framework Desktop delivered. I didn't plan on using it for gaming, but I figured I should at least try. My experience thus far:

    * I installed Fedora 43 and it (totally unsurprisingly) worked great.
    * I installed Steam from Fedora's software app, and that worked great as well.
    * I installed Cyberpunk 2077 from Steam, and it just... worked.
Big thanks to Valve for making this as smooth as it was. I was able to go from no operating system to Cyberpunk running with zero terminals open or configs tweaked.

I later got a hankering to play Deus Ex: Mankind Divided. This time, the game would not work and Steam wasn't really forthcoming with showing logs. I figured out how to see the logs, and then did what you do these days - I showed the logs to an AI. The problem, slightly ironically, with MD is that it has a Linux build and Steam was trying to run that thing by default. The Linux build (totally unsurprisingly) had all kinds of version issues with libraries. The resolution there was just to tell Steam to run the Windows build instead and that worked great.


> I later got a hankering to play Deus Ex: Mankind Divided. This time, the game would not work and Steam wasn't really forthcoming with showing logs. I figured out how to see the logs, and then did what you do these days - I showed the logs to an AI. The problem, slightly ironically, with MD is that it has a Linux build and Steam was trying to run that thing by default. The Linux build (totally unsurprisingly) had all kinds of version issues with libraries. The resolution there was just to tell Steam to run the Windows build instead and that worked great.

I've heard it said in jest, but the most stable API in Linux is Win32. Running something via Wine means Wine is doing the plumbing to take a Windows app and pass it through to the right libraries.

I also wonder if it's long-term sustainable. Microsoft can do hostile things or develop their API in ways Valve/Proton neither need nor want, forcing them to spend dev time keeping up.


MS _can_ do that, but only with new APIs (or by breaking backwards compatibility). Wine only needs to keep up once folks actually _use_ the new stuff… which generally requires that it be useful.


Or MS does deals with developers causing them to use the new APIs. I still haven't forgotten when they killed off the Linux version of Unreal Tournament 3. Don't for a second forget they are assholes.


Plus, if it does happen, folks need to learn a bunch of new hostile stuff. Given how Linux is taking off, why not just move to treating Linux as the first-class platform?


> why not just move to treating Linux as the first-class platform

This is where the argument goes back to Win32 being the most stable API in Linux land. There isn't such a thing as the Linux API, so that would have to be invented first. Try running an application that was built for Ubuntu 16.04 LTS on Ubuntu 24.04 LTS. Good luck with that. Don't get me wrong, I primarily use and love Linux, but reality is quite complicated.


Stability has a critical mass. When something is relied on by a small group of agile nerds, we tend not to worry about how fast we move or what is broken in the process. Once we have large organisations relying on a thing, we get LTS versions of OSes, etc.

The exact same is true here. If a large enough volume of folks start using these projects and contributing to them in a meaningful way, then we end up with less noisy updates: things continue to receive input from a large portion of the population, and updates begin to more closely resemble some sort of moving average rather than a high-variance process around that moving average. If not less noisy updates, then at least some fork that may be many commits behind, but that at least comes with a version change and plenty of warning when it does update things in a breaking way.


Sounds similar to macOS.


macOS apps are primarily bound to versioned toolchains and SDKs, not to the OS version. If you are not using newer features, your app will run just fine. Any compatibility breaks are published.


> Try running an application that was built for Ubuntu 16.04 LTS on Ubuntu 24.04 LTS. Good luck with that.

Yea, this is a really bad state of affairs for software distribution, and I feel like Linux has always been like this. The culture of always having source for everything perhaps contributes to the mess: "Oh the user can just recompile" attitude.


I'd love to see a world where game devs program to a subset of Win32 that's known to run great on Linux and Windows. Then MSFT can be as hostile as they like, but no one will use the new stuff if it means abandoning the (in my fantasy) 10% of Linux gamers.


That's basically already happening with Unity and Unreal's domination of the game engines. They seem to dominate 80% of new titles and 60% of sales on Steam [1], so WINE/Valve can just focus on them. Most incompatible titles I come across are rolling their own engine.

[1] PDF: https://app.sensortower.com/vgi/assets/reports/The_Big_Game_...


Same with Godot. I'm writing a desktop app, and I get cross-platform support out of the box. I don't even have to recompile or write platform-specific code, and it doesn't even need Win32 APIs.


One aspect I wonder about is the move of graphics APIs from DX11 (or OpenGL) to DX12/Vulkan. While there have been benefits, and it's where the majority of vendor effort goes, they are (were?) notoriously harder to use. What strikes me about gaming is how broad it is, and how many developers could make a competent engine at a lower tech level that still fits their needs well because their requirements are more modest.

I also wonder about developer availability. If you're capable of handling the more advanced APIs, and probably modern hardware and its features, it seems likely you're going to aim for a big studio producing that kind of big experience, or even a job as an engineer at the big engine makers themselves. If you're using the less demanding tech, it will be more approachable for a wider range of developers and manageable in-house.


I believe it's already happening to a minor degree. There is value in getting that "steam deck certified" badge on your store, so devs will tweak their game to get it, if it isn't a big lift.


I can see that number increasing soon with the Steam Deck and Steam Machine (and clones/home builds). Even the VR headset, although niche, is Linux.

The support in this space from Valve has been amazing, I can almost forgive them for not releasing Half Life 3. Almost.


There are strong indications that Half Life 3 (or at least a Half Life game) is coming soon. Of course, Valve might decide to scrap the project, but I wouldn't be surprised to see an announcement for 2026.


> Microsoft can do hostile things or develop their API in ways Valve/Proton neither need nor want, forcing them to spend dev time keeping up.

If they decide to do this in the gaming market, they don't need to mess up their API. They can just release a Windows native anti-cheat-anti-piracy feature.


> They can just release a Windows native anti-cheat-anti-piracy feature.

Unless it's a competitive game and it's a significant improvement on current anticheat systems I don't see why game developers would implement it. It's only going to reduce access to an already increasing non-windows player base, only to appease Microsoft?

Also, to be hard to circumvent, wouldn't a Windows-native version have to be extremely invasive and a security risk? To be mostly effective it would need to sit right down at the ring 0 level... just to spite people playing games outside of Windows?


Existing anticheat software on Windows already runs in ring 0, and one of the reasons that competitive games often won't work on Linux is precisely that Wine can't emulate that. Some anticheat vendors offer a Linux version, but those generally run in userspace and are therefore easier for cheaters to circumvent, which is why game developers will often choose not to allow players running the Linux version to connect to official matchmaking. In other words, for the target market of developers of competitive games, nothing would really get any worse if there were an official Microsoft solution.

On the other hand, using an official Microsoft anticheat that's bundled in Windows might not be seen as "installing a rootkit" by more privacy-conscious gamers, therefore improving PR for companies who choose to do it.

In other words, Microsoft would steamroll this market if they chose to enter it.


Also Microsoft closing the kernel to non-MS/non-driver Ring 0 software is inevitable after Crowdstrike, but they can't do that until they have a solution for how anti-cheat (and other system integrity checkers) is going to work. So something like this is inevitable, and I'm very sure there is a team at Microsoft working on it right now.


> just to spite people playing games outside of Windows?

These things are always sold as general security improvements even when they have an intentional anti-competitive angle. I don't know if MS sees that much value in the PC gaming market these days but if they see value in locking it all down and think they have a shot to pull it off, they'll at least try it.

In theory a built-in anti-cheat framework could have a chance at being more effective and less intrusive than the countless pieces of crap each individual game might shove down your throat. Who knows how it would look in practice.


>I've heard it said in jest, but the most stable API in Linux is Win32.

Sometimes, on the API stability front, people wonder whether things would be better if FreeBSD had won the first free OS war in the 90s. But I think there's a compromise being overlooked: maybe Linux devs could implement a stable API layer as a compatibility layer for FreeBSD applications.


Steam has versioned Steam Linux Runtimes in an attempt to address that. Currently it's only leveraged by Proton, but maybe it will help in the future.

It's like a Flatpak: stabilized libraries based on Debian.


Hear me out:

Containers. Or even just go full VM.

AFAIK we have all the pieces to make those approaches work _just fine_ - GPU virtualization, ways to dynamically share memory etc.

It's a bit nuts, sure, and a bit wasteful - but it'd let you have a predictable binary environment for basically forever, as well as a fairly well defined "interface" layer between the actual hardware and the machine context. You could even accommodate shenanigans such as Aurora 4X's demand to have a specific decimal separator.

We could even achieve a degree of middle ground with the kernel anti-cheat secure boot crowd - running a minimal (and thus easy to independently audit) VM host at boot. I'd still kinda hate it, but less than having actual rootkits in the "main" kernel. It would still need some sort of "confirmation of non-tampering" from the compositor, but it _should_ be possible, especially if the companies wanting that sort of stuff were willing to foot the bill (haha). And, on top of that, a VM would make it less likely for vulnerabilities of the anti-cheat to spread into the OS I care about (à la the Dark Souls exploit).

So kinda like Flatpak, I guess, but more.


Check out the Steam Linux Runtime. You can develop games to run natively on Linux in a container already.

Running the anti-cheat in a VM completely defeats the point. That's actually what cheaters would prefer because they can manipulate the VM from the host without the anti-cheat detecting it.


There is no "real" GPU virtualization available for regular consumer, as both AMD and NVIDIA are gatekeeping it for their server oriented gpus. This is the same story with Intel gatekeeping ECC ram for decades.

Even if you run games in a container, you still need to expose the DRM char/block device if you want Vulkan/OpenGL to actually work.

https://en.wikipedia.org/wiki/GPU_virtualization#mediated


I try to get as many (mostly older, 2D) Windows games as possible to run in QEMU (VirtualBox in the past). Not many work, but those that do just keep working and I expect they will just always work ("always" relative to my lifetime).

WINE and Proton seem to always require hand-holding and leak dependencies on things installed on the host OS, as well as dependencies on actual hardware. I've used them for decades and they are great, but I can never just relax and know that because a game runs now it will always run, like is possible with a full VM (or with DOSBox, for older games).


GPU virtualization for consumers is available only as full passthrough, no sharing. You have to detach the GPU from the host.


MS has supported GPU virtualization for years in Hyper-V with their GPU-PV implementation. Normally it gets used automatically by Windows to do GPU acceleration in Windows Sandbox and WSL2, but it can also be used with VMs via PowerShell.


12th gen and later Intel iGPUs can do SR-IOV.


>I also wonder if it's long-term sustainable. Microsoft can do hostile things or develop their API in ways Valve/Proton neither need nor want, forcing them to spend dev time keeping up.

Not while they continue to have the Xbox division and aspire to be the world's biggest publisher.


Yeah that's been my experience with native Linux builds too. Most of them were created before Proton etc got good, and haven't necessarily been maintained, whereas running the Windows version through Proton generally just works.

Unfortunately it seems supporting Linux natively is a pretty quickly moving target, especially when GPUs etc. are changing all the time. A lot of compatibility-munging work goes on behind the scenes on the Windows side from MS and driver developers (plus MS prioritizing backwards compat for software pretty heavily), and the same sort of effort now has a single target in Proton.

It's less elegant perhaps than actual native Linux builds, but probably much more likely to work consistently.


Sometimes you see developers posting on /r/linux_gaming and generally the consensus from the community is mostly "just make sure proton works" which is pretty telling.

It's sort of a philosophical bummer as an old head to see that native compatibility, or maybe more accurately, native mindshare, being discarded even by a relatively evangelical crowd but,

- as a Linux Gamer, I totally get it - proton versions just work, linux versions probably did work at some point, on some machines.

- as a Developer, I totally get it - target windows cause that's 97% of your installs, target proton cause that's the rest of your market and you can probably target proton "for free". Focus on making a great game not glibc issues.

I mostly worry about what happens when Gabe retires and Valve pivots to the long squeeze. Don't think proton fits in that world view, but I also don't know how much work Proton needs in the future vs the initial hill climb and proof-of-success. I guess we'll get DX13 at some point, but maybe I'll just retire from new games and just keep playing Factorio until I die (which, incidentally does have a fantastic native version, but Wube is an extreme outlier.)


1. I think targeting compatibility is 99% as good as targeting native.

2. You’re disregarding the shifting software landscape. Steam OS and Linux are trending towards higher PC gaming market share. macOS has proven you don’t need much market share to force widespread (but not universal) compatibility.

3. I don’t see the value in a purist attitude around Linux gaming. The whole point of video games is entertainment. I’m much less concerned with whether my video game is directly calling open source libraries than whether my {serious software} is directly calling open source libraries.


On point 3, I guess my views are different because my {serious software} is usually work, and if it stops working that's kind of a B2B problem and part of doing enterprise. It's just business as they say.

Gaming is much more meaningful to me as a form of story and experience, and it is important to me that games keep working and stay as open and fair as possible. In the same way it is important I can continue to read books, listen to music or watch movies I care about.


> Unfortunately it seems supporting Linux natively is pretty quickly moving target

With the container-based approach of the Steam Linux Runtime this should no longer be a problem. Games can just target a particular version and Steam will be able to run it forevermore.


I would hope Vulkan also does a lot of the work here for Linux native builds, but I must admit I am only now starting my journey into that space.


A lot of those Linux native builds will have been using Vulkan.

Parity between DX12 and Vulkan is pretty high and all around I trust the vkd3d[0] layer to be more robust than almost anything else in this process since they're such similar APIs.

The truth is that it's just a whole lot harder to make a game for Linux APIs and (even) base libraries than it is to make it for Windows, because you can't count on anything being there, let alone being a stable version.

Personally I don't see a future where Linux continues being as it is (a culture of shared libraries even when it doesn't make sense, no standard way of doing the basics, etc.) and we don't use translation layers.

We'll either have to translate via abstraction layers (or still be allowed to translate Win32 calls) to all of the nasty combination of libraries that can exist or we'll have to fix Linux and its culture. The second version is the only one where we get new games for Linux that work as well as they should. The first one undeniably works and is sort of fine, but a bit sad.

0 - vkd3d is the layer that specifically translates D3D12 to Vulkan, as opposed to vkdx which is for lower D3D versions.


It's not really harder to make a good native Linux port that will keep working, it's just not something most game developers have much experience with.


I have a slightly different view. The former scenario is essentially having our cake and eating it too. I'd rather not "fix" Linux culture.


Too late for an edit now, but `vkdx` in the note here is supposed to say `dxvk`.


A wise man once said "The most stable ABI on Linux is Win32". It sounds like a joke, but it is actually true.


In my experience, the Steam client and most games work great on Debian and Ubuntu, but you should know that for GNU/Linux systems it's only officially supported on Ubuntu (maybe SteamOS is implicit). I can't find that information on Steam's website or support pages, but it's the response I got from Steam Support a while ago when reporting a Steam client UI bug on Debian with GNOME.


Which is funny because Ubuntu is also the only distro that wants you to install the Steam snap instead, which is then again unsupported.


When I run into issues with running games on Linux (Steam or otherwise), I've found it useful to consult protondb.com to see what others have gotten to work. You can filter by OS or keyword, etc.

https://www.protondb.com/app/337000 for Deus Ex: Mankind Divided


I wiped Windows 10 from my desktop, installed CachyOS and Steam, and installed Path of Exile 2 - and it worked, surprisingly. Also, I see people joking about how Win32 is the only stable API on Linux xD. I've also heard Red Dead Redemption 2 works well on Linux; that might be the next game I check out.


I can confirm I finished RDR2 in story mode in Bazzite, zero issues. Never played the multiplayer part, though.


Story mode is good enough for me


It's the same with Dying Light. They have a neglected Linux version, and I downloaded 16 GiB before I realised I should switch to the Windows version and start again.


I run Fedora 43 and all games (a single tickbox in settings) run through "compatibility mode" (Wine/Proton). Works great!


This felt like a blast from the past. A few times while reading this article, I had to go back and check that, yes, it's actually a new article from the year 2025 on STM in Haskell, and it's even using the old bank account example.

I remember, 15 or 20 years ago (has it been that long?), when the Haskell people like dons were banging on about: 1) Moore's law being dead, 2) future CPUs will have tons of cores, and 3) good luck wrangling them in your stone age language! Check out the cool stuff we've got going on over in Haskell!


Yeah, remember when we used to care about making better programming languages that would perform faster and avoid errors, instead of just slapping blockchains or AI on everything to get VC money. Good times.


Only half-joking: maybe Java was a mistake. I feel like so much was lost in programming language development because of OOP...


Java is most of why we have a proliferation of VM-based languages and a big part of why WASM looks the way it does (though as I understand it, WASM is the shape it is in some measure because it rejects JVM design-for-the-language quirks).

I would also not blame Java for the worst of OO, all of that would have happened without it. There were so many OO culture languages pretending to that throne. Java got there first because of the aforementioned VM advantage, but the core concepts are things academia was offering and the industry wanted in non-Ada languages.

I would say Java also had a materially strong impact on the desire for server, database and client hardware agnosticism.

Some of this is positive reinforcement: Java demonstrates that it's better if you don't have to worry about what brand your server is, and JDBC arguably perfected ODBC.

Some of it is negative: a lot of the move to richer client experiences in the browser has to do with trying to remove client-side Java as a dependency, because it failed. It's not the only bridged dependency we removed, of course: Flash is equally important as a negative.


> I would also not blame Java for the worst of OO, all of that would have happened without it. There were so many OO culture languages pretending to that throne. Java got there first because of the aforementioned VM advantage, but the core concepts are things academia was offering and the industry wanted in non-Ada languages.

Really? Were there other major contenders that went as far as the public class MyApp { public static void main( nonsense?


OOP is very useful/powerful; don't throw the good parts out. Java messed up by deciding everything must be an object when there are many other useful ways to program. (You can also argue that Smalltalk had a better object model, but even then, all-objects isn't a good thing.) Functional programming is very powerful and a good solution to some problems. Procedural programming is very powerful and a good solution to some problems. You can do both in Java - but you have to wrap everything in an object anyway, despite the object not adding any value.


> everything must be an object when there are many other useful ways to program.

Perhaps you would prefer a multi-paradigm programming language?

http://mozart2.org/mozart-v1/doc-1.4.0/tutorial/index.html


Java was derived from C++, Smalltalk, and arguably Cedar, and one of its biggest differences from C++ and Smalltalk is that in Java things like integers, characters, and booleans aren't objects, as they are in C++ and Smalltalk. (Cedar didn't have objects.)


Right. Everything a user can define is an object, but there are a few non-object built-ins. (They are not objects in C++ either, but C++ doesn't make everything you write be an object.)


In C++ integers and characters are objects. See https://en.cppreference.com/w/cpp/language/objects.html, for example, which explicitly mentions "unsigned char objects", "a bit-field object", "objects of type char", etc.


I feel this is a case of using the same word to mean something different. C++ “object” here seems to mean something more akin to “can be allocated and stuffed into an array” than a Smalltalk-type object.

i.e. C++ primitive types are defined to be objects but do not fit into a traditional object-oriented definition of “object”.


Yes, many people believe that C++ isn't really "object-oriented", including famously Alan Kay, the inventor of the term. Nevertheless, that is the definition of "object" in C++, and Java is based on C++, Smalltalk, and Cedar, and makes an "object"/"primitive" distinction that C++, Smalltalk, and Cedar do not, so "Java [did something] by deciding everything must be an object" is exactly backwards.


I'm not sure who invented "object oriented", but objects were invented by Simula in 1967 (or before, but first released then?) and that is where C++ takes the term from. Smalltalk-80 did some interesting things on top of objects that allow for object oriented programming.

In any case, Alan Kay is constantly clear that object oriented programming is about messages, which you can do in C++ in a number of ways. (I'm not sure exactly what Alan Kay means here, but it appears to exclude function calls, but would allow QT signal/slots)


The specific thing you can do in Smalltalk (or Ruby, Python, Objective-C, Erights E, or JS) that you can't do in C++ (even Qt C++, and not Simula either) is define a proxy class you can call arbitrary methods on, so that it can, for example, forward the method call to another object across the network, or deserialize an object stored on disk, or simply log all the methods called on a given object.

This is because, conceptually, the object has total freedom to handle the message it was sent however it sees fit. Even if it's never heard of the method name before.


You can do that in C++ too - it is just a lot of manual work. Those other languages just hide (or make easy) all the work needed to do that. There are trade offs though - just because you can in C++ doesn't mean you should: C++ is best where the performance cost of that is unacceptable.


No, in C++ it's literally impossible. The language provides no way to define a proxy class you can call arbitrary methods on. You have to generate a fresh proxy class every time you have a new abstract base class you want to interpose, either by hand, with a macro processor, or with run-time code generation. There's no language mechanism to compile code that calls .fhqwhgads() successfully on a class that doesn't have a .fhqwhgads() method declared.


you don't call fhqwhgads() on your proxy class though. You call runFunction("fhqwhgads") and it all compiles - the proxy class then string-matches on the arguments. Of course, depending on what you want to do it can be a lot more complex. That is, you do manually what other languages do for you automatically under the hood.

Again, this is not something you should do, but you can.


That doesn't provide the same functionality, because it requires a global transformation of your program, changing every caller of .fhqwhgads() (possibly including code written by users of your class that they aren't willing to show you, example code in the docs on your website, code in Stack Overflow answers, code in printed books, etc.). By contrast, in OO languages, you typically just define a single method that's a few lines of code.

You're sinking into the Turing Tarpit where everything is possible but nothing of interest is easy. By morning, you'll be programming in Brainfuck.


The C++ standard gets the definition of object from the C standard, i.e. a bunch of memory with a type.

Nothing to do with the objects of OOP.


To be clear, I’m not trying to pick at whether or not C++ is “really object oriented”.

What I’m saying is that the discrepancy between primitives in C++ and Java is entirely one of definition. Java didn’t actually change this. Java just admitted that “objects” that don’t behave like objects aren’t.


On the contrary, Java objects are very different from C++ objects, precisely because they lack a lot of the "primitive-like" features of C++ objects such as copying, embedding as fields, and embedding in arrays. (I'm tempted to mention operator overloading, but that's just syntactic sugar.)


Java differs from C++ in an endless number of ways.

What I’m saying is that in both C++ and Java, there are a set of primitive types that do not participate in the “object-orientedness”. C++ primitives do not have class definitions and cannot be the base of any class. This is very much like Java where primitives exist outside the object system.

If the C++ standard used the term “entities” instead of “objects” I don’t think this would even be a point of discussion.


It's not some minor point of terminology.

The entire design of C++ is built around eliminating all distinctions between primitive "entities" and user-defined "entities" in a way that Java just isn't. It's true that you can't inherit from integers, but that's one of very few differences. User-defined "entities" don't (necessarily) have vtables, don't have to be heap-allocated, can overload operators, can prevent subclassing, don't necessarily inherit from a common base class, etc.

C++'s strange definition of "object" is a natural result of this pervasive design objective, but changing the terminology to "entity" wouldn't change it.


> The entire design of C++ is built around eliminating all distinctions between primitive "entities" and user-defined "entities"

If the intent was to erase all distinction between built-in and user-defined entities then making the primitive types unable to participate in object hierarchies was a pretty big oversight.

But at this point I think we’re talking past each other. Yes, in Java objects are more distinct from primitives than in C++. But also yes, in C++ there is a special group of “objects” that are special and are notably distinct from the rest of the object system, very much like Java.


You can read Stroustrup's books and interviews, if the language design itself doesn't convey that message clearly enough; you don't have to guess what his intentions and motivations were. And, while I strongly disagree with you on how "special and notably distinct" primitive types are in C++, neither of us is claiming that C++ is less adherent to the principle that "everything is an object" than Java. You think it's a little more, and I think it's a lot more.

But we agree on the direction, and that direction is not "Java [did something] by deciding everything must be an object," but its opposite.


I don’t actually think it’s any more adherent to that notion. This is exactly why I tried to point out the discrepancies in definitions. You have to define what an “object” is or the discussion is meaningless.

If the definition of object is something like “an instance of a class that has state, operations, and identity” then C++ primitives are fundamentally not objects. They have no identity and they are not defined by a class. If “participates in a class hierarchy” is part of the definition, then C++ is way less OO than Java.

I don’t quite understand what your definition is, but you seem to be arguing that user-defined entities are more like primitives in C++, so it’s more object-oriented. So maybe “consistency across types == object orientedness”? Except C++ isn’t really more consistent. Yes, you can create a user-defined type without a vtable, but this is really a statement that user defined types a far more flexible than primitives. But also if “consistency across types” is what makes a language OO then C seems to be more OO than C++.


I don't think C++ is object-oriented, and it is certainly way less OO than Java in most ways. Its "classes" aren't the same kind of thing as classes in OO languages and its "objects" aren't OO objects.

In part this is because by default even C++ class instances don't have identity, or anyway they only have identity in the sense that ints do, that every (non-const) int has an address and mutable state. You have to define a destructor, a copy constructor, and an assignment operator to give identity to the instances of a class in C++.

With respect to "participates in a class hierarchy", that has not been part of the definition of OO since the Treaty of Orlando. But, in Java, all objects do participate in a class hierarchy, while no primitives do, while, in C++, you can also create class instances (or "class" "instances") that do not participate in a class hierarchy (without ever using inheritance, even implicitly). So, regardless of how OO or non-OO it may be, it's another distinction that Java draws between primitives and class instances ("objects" in Java) that C++ doesn't.


I think we are in agreement.

In C++, everything is an object as defined by the C++ spec, but a lot of things are not objects in an OO sense.

In Java, almost everything is an object in an OO sense, but some stuff is definitely not.


Just like Java, you cannot inherit from integers or characters. Depending on what you want to do with them that might or might not matter.


That's true, and in Smalltalk it's not true. In Cedar there is no inheritance. At any rate it's not a case of Java making more things objects than its forebears.


> At any rate it's not a case of Java making more things objects than its forebears.

There are other cases though. E.g. in C++ you could always have function pointers which were not objects and functions which were not methods. In Java you had to use instances of anonymous classes in place of function pointers, and all of your functions had to be (possibly static) methods.


Function pointers in C++ are "objects", and the difference between static methods and C-style functions which are not methods seems purely syntactic to me, or at best a question of namespacing. Regardless, static methods are not objects either in Java or in C++, so that is also not a case of something being an "object" in Java and not in C++.


> Function pointers in C++ are "objects"

In the C++ sense, but they don't have classes and don't participate in inheritance. Whereas the early-Java equivalents do.


Yes, I agree.


And it's all still true, although I would offer the usual argument that concurrency!=parallelism, and if you reach for threads&STM to try to speed something up, you'll probably have a bad time. With the overhead of GC, STM-retries, false-sharing, pointer-chasing, etc you might have a better time rewriting it single-threaded in C/Rust.

STM shines in a concurrent setting, where you know you'll have multiple threads accessing your system and you want to keep everything correct. And nothing else comes close.


To be maximally fair to the Haskell people, they have been enormously influential. Haskell is like Canada: you grow up nicely there and then travel the world to bring your energy to it.


Yeah, I wrote an essay on STM in Haskell for a class back in 2005 I think.


I think Intel x86 had some hardware support for STM at some point. Not sure what the status of that is.


Disabled on most CPUs, plagued by security issues. I haven't used it but I assume debugging would be extremely painful, since any debug event would abort the transaction.


That's not software transactional memory, it's hardware transactional memory, and their design was not a good one.


Well, HTM was not useful per se, except accelerating an STM implementation.


It isn't very useful for that, but you can use it to implement other higher-level concurrency primitives like multi-word compare and swap efficiently.


True. At some point in the now distant past, AMD had a proposal for a very restricted form of HTM that allowed CAS on up to 7 memory locations, as they had some very specific linked-list algorithms they wanted to optimize, and the 7-location restriction worked well with the number of ways of their memory.

Nothing came out of it unfortunately.


I'd like to see what kind of hardware acceleration would help STMs without imposing severe limitations on their generality.

To me, the appealing things about STMs are the possibility of separating concerns of worst-case execution time and error handling, which are normally pervasive concerns that defeat modularity, from the majority of the system's code. I know this is not the mainstream view, which is mostly about manycore performance.


Not an expert, but my understanding is that HTM basically implements the fast path: you still need a fully fledged STM implementation as a fallback in case of interference, or even in the uncontended case if the working set doesn't fit in L1 (because of way collision for example) and the HTM always fails.


I'm no expert either, but maybe some other hardware feature would be more helpful to STMs than hardware TM is.


That would depend on your idea of "good". It would be an upstream swim in most regards, but you could certainly make it work. The Asahi team has shown that you can get Steam working pretty well on ARM-based machines.

But if gaming is what you're actually interested in, then it's a pretty terrible buy. You can get a much cheaper x86-based system with a discrete GPU that runs circles around this.


Besides the box art, I miss the days when 1) the graphics card didn't cost more than the rest of the components put together, 2) the graphics card got all of its damn power through the connector itself, and 3) MSRP meant something.


> 3) MSRP meant something

I'm not in the market for a 5090 or similar, but the other day I was looking at a lower-end model, an AMD 9060 or Nvidia 5060. What shocked me was the massive variation in prices for the same model (9060 XT 16 GB or 5060 Ti 16 GB).

The AMD could be had for anywhere from 400 to 600 euros, depending on the brand. What can explain that? Are there actual performance differences? I see models pretending to be "overclocked", but in practice they barely have a few extra MHz. I'm not sure if that's going to do anything noticeable.

Since I'm considering the AMD more and it's cheaper, I didn't take that close a look at the Nvidia prices.


> What can explain that?

Looks. I'm not joking. The market is aimed at people with a fishbowl PC case who care about having a cooler with an appealing design, an interesting PCB colour and the flashiest RGB. Some may have a bit better cooling, but the price for that is also likely marked up several times, considering a full dual-tower CPU cooler costs $35.


I was thinking about cooling, but basically they all have either two or three fans, and among those they look the same to my admittedly untrained eye.


The manufacturer can use better fans that move more air and stay quieter. They can design a better vapor chamber, and lay out the PCB in a way that the VRMs and RAM get more cooling. But still, all that stuff should not account for more than a $30-50 markup.


Hey, c'mon now - some of that is flooding the market so hard that it's ~8:1 nVidia:AMD on store shelves, letting nVidia be the default that consumers will pay for. That's without touching on marketing or the stock price (as under-informed consumers conflate it with performance, thinking "If it wasn't better, the stock would be closer to AMD").


>What shocked me was the massive variation in prices for the same model [AMD v. Nvidia]

I am not a tech wizard, but I think the major (and noticeable) difference would be available tensor cores — currently Nvidia's tech is faster/better in the LLM/genAI world.

Obviously AMD jumped +30% last week from OpenAI investment — so that is changing with current model GPUs.


They were talking about within one model, not between AMD and Nvidia.


I just bought an RTX 5090 at MSRP. While expensive, it's also a radically more complicated product that plays a more important role in a modern computer than old GPUs did years ago.

Compared to my CPU (9950X3D), it's got a massive monolithic die measuring 750 mm² with over 4x the transistor count of the entire 9950X3D package. Beyond the graphics, it's got tensor and RT cores, dedicated engines for video decode/encode, and 32 GB of GDDR7 on the board.

Even basic integrated GPUs these days have far surpassed GPUs like the GTX 970, so you can get a very cheap GPU that gets its power through the CPU socket, at MSRP.


Do yourself/me a favor, and give your 5090's power plug/socket a little jiggle test.

I'm a retired data center electrician, and my own GPU's plug has been "loose" more than once. Really make sure that sucker is jammed in there/latched.


Yeah, 12VHPWR is a mess unfortunately.


...12VHPWR alone justifies purchasing a thermal camera.


> the graphics card didn't cost more than the rest of the components put together

In fairness, the graphics card has many times more processing power than the rest of the components. The CPU is just there to run some of the physics engine and stream textures from disk.


4) games came in a retail box accompanied by detailed manuals, booklets printed with back stories, and some swag.


4) scalpers only existed for sports and music venues


The existence of scalpers rather shows that the producer set the price of the product (in this case GPU) too low [!] for the number of instances of the product that are produced.

Because the price is too low, more people want to buy a graphics card than the number of graphics cards that can be produced, so even people who would love to pay more can't get one.

Scalpers solve this mismatch by balancing the market: now people who really want to get a graphics card (with a given specification) and are willing to pay more can get one.

So, if you have a hate for scalpers, complain that the graphics card producer did not increase its prices. :-)


Which format are you saying the Chromium team made and wants to push instead of JXL?


webp


Which is superseded by AVIF anyway.


Not for lossless; webp is a fantastic replacement for PNG there. Not so much with AVIF, it's sometimes even heavier than PNG.


How will AV2 based AVIF compare to PNG?


You can write a WASM program today that touches the DOM, it just needs to go through the regular JS APIs. While there were some discussions early on about making custom APIs for WASM to access, that has long since been dropped - there are just too many downsides.
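
For what it's worth, the plumbing is pretty small. Here's a rough sketch of the JS/TypeScript side (the file name and the import/export names are made up, and in practice a toolchain like wasm-bindgen generates this glue for you): the WASM module imports ordinary JS functions, and those functions are what actually touch the DOM.

    // The only "DOM access" the WASM module gets is whatever we hand it here.
    let memory: WebAssembly.Memory;

    const imports = {
      env: {
        // Hypothetical import: WASM passes a pointer/length to UTF-8 text in its
        // linear memory; this plain JS function does the actual DOM call.
        set_title(ptr: number, len: number) {
          const bytes = new Uint8Array(memory.buffer, ptr, len);
          document.title = new TextDecoder().decode(bytes);
        },
      },
    };

    // Assumes the module exports its memory and a `render` entry point, and
    // doesn't call imports from a start function before we grab `memory`.
    const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), imports);
    memory = instance.exports.memory as WebAssembly.Memory;
    (instance.exports.render as () => void)(); // WASM runs and calls back into env.set_title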


But then you need two things instead of one. It should be made possible to build WASM-only SPAs. The north star of browser developers should be to deprecate JS runtimes the same way they did Flash.


That is never going to happen until you create your own browser with a fork of the WASM spec. People have been asking for this for about a decade. The WASM team knows this, but WASM wants to focus on its mission of being a universal compile target without the distraction of the completely unrelated mission of being a JavaScript replacement.


It's also too early to worry about DOM APIs over WASM instead of JS.

The whole problem with the DOM is that it has too many methods which can't be phased out without losing backwards compatibility.

A new DOM wasm api would be better off starting with a reduced API of only the good data and operations.

The problem is that the DOM is still improving (even today); it's not stabilized, so we don't have that reduced set to draw from. And if you were to draw a line in the sand and say "this is our reduced set", it would already not be what developers want within a year or two.

New DOM stuff is coming out all the time; even right now we have two features coming out that can completely change the way developers might want to build applications:

- being able to move DOM nodes without having to destroy and recreate them. This makes it possible to keep the state inside that DOM node unaffected, such as a video playing without having to unload and reload it. Now imagine if that state can be kept over the threshold of a multi-page view transition.

- the improved attr() api which can move a lot of an app's complexity from the imperative side to the declarative side. Imagine a single css file that allows html content creators to dictate their own grid layouts, without needing to calculate every possible grid layout at build time.

And just in the near future things are moving to allow html modules which could be used with new web component apis to prevent the need for frameworks in large applications.

Also language features can inform API design. Promises were added to JS after a bunch of DOM APIs were already written, and now promises can be abortable. Wouldn't we want the new reduced API set to also be built upon abortable promises? Yes we would. But if we wait a bit longer, we could also take advantage of newer language features being worked on in JS like structs and deeply immutable data structures.
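
(For anyone unfamiliar, the "abortable" bit refers to the AbortController/AbortSignal mechanism that newer promise-returning DOM APIs accept. A quick sketch:)

    // One controller can cancel several in-flight DOM operations at once.
    const controller = new AbortController();
    const onScroll = () => { /* ... */ };

    // fetch() rejects with an "AbortError" once the signal fires.
    fetch("/api/data", { signal: controller.signal })
      .then((res) => res.json())
      .catch((err) => {
        if ((err as DOMException).name === "AbortError") console.log("request cancelled");
      });

    // Newer addEventListener also accepts a signal, so listeners clean themselves up.
    document.addEventListener("scroll", onScroll, { signal: controller.signal });

    controller.abort(); // cancels the fetch and removes the listener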

TL;DR: It's still too early to work on a DOM API for WASM. It's better to wait for the DOM to stabilize first.


I am oversimplifying it, but why should anything be stable?

That is the trend we face nowadays; there is too little stable stuff around. Take macOS, a trillion-dollar company's OS, not an open source project without funding.

Stability is a mirage, sadly.


On the contrary, it’s something that solid progress is being made towards, and which has been acknowledged (for better or for worse) as something that they expect to be supported eventually. They’re just taking it slow, to make sure they get it right. But most of the fundamental building blocks are in place now, it’s definitely getting closer.


Aside from everything the WASM team has ever said, the security issues involved, and all the other evidence and rational considerations, this still won’t happen, for very human reasons.

The goal behind the argument is to grant WASM DOM access equivalent to what JavaScript has so that WASM can replace JavaScript. Why would you want that? Think about it slowly.

People that write JavaScript for a living, about 99% of them, are afraid of the DOM. Deathly afraid like a bunch of cowards. They spend their entire careers hiding from it through layers of abstractions because programming is too hard. Why do you believe that you would be less afraid of it if only you could do it through WASM?


Sounds to me like they forgot the W in WASM.


I agree with the first part, but getting rid of JS entirely means that if you want to augment some HTML with one line of javascript you have to build a WASM binary to do it?

I see good use cases for building entirely in html/JS and also building entirely in WASM.


getting rid of javascript entirely means to be able to manipulate the DOM without writing any javascript code. not to remove javascript from the browser. javascript will still be there if you want to use it.


Most of the time your toolchain provides a shim so you don’t need to write JS anyway. What’s the difference?


right, that's exactly the point. those that have a shim already successfully got rid of javascript, and that's enough.


Fair enough, I misunderstood what he meant by "deprecate JS runtimes".


You can use a framework that abstracts all the WASM-to-JS communication for DOM access. There are many such frameworks already.

The only issue is that there’s a performance cost. Not sure how significant it is for typical applications, but it definitely exists.

It’d be nice to have direct DOM access, but if the performance is not a significant problem, then I can see the rationale for not putting in the major amount of work it’d take to do this.


Which framework is the best or most commonly used?


Yew, Leptos and Dioxus are all pretty decent, with their own advantages and disadvantages, if you like Rust. It's been a year or so since I last looked at them; to me the biggest missing piece was a component library along the lines of MUI to build useful things with. I'd be surprised if there weren't at least Bootstrap-compatible wrapper libraries for them at this point.


I was under the impression that this is very much still on the table, with active work like the component model laying the foundation for the ABI to come.


I have a slightly different question than OP - what's left until it feels like JavaScript is gone for people who don't want to touch it?

Say I really want to write front end code in Rust* does there just need to be a library that handles the js DOM calls for me? After that, I don't ever have to think about javascript again?


> Say I really want to write front end code in Rust* does there just need to be a library that handles the js DOM calls for me? After that, I don't ever have to think about javascript again?

yes, e.g. with Leptos you don't have to touch JS at all


Isn't going through the JS APIs slow?


It used to be, in the early days, but nowadays runtimes have optimized the function call overhead between WASM and JS to near zero.

https://hacks.mozilla.org/2018/10/calls-between-javascript-a...


It did improve a lot, but unfortunately not near-zero enough.

It is manageable if you avoid JS/WASM round trips, but if you assume the cost is near zero you will be in for an unpleasant surprise.

When I have to do a lot of different calls into my WASM blob, I am way, way faster batching them. Meaning: making one call into WASM that then gets all the data I want and returns it.
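
A rough sketch of what that batching looks like from the JS/TypeScript side (the exports get_point_x, fill_points and the memory layout are made up for illustration):

    declare const exports: any; // instance.exports of the WASM module

    // Chatty: one JS<->WASM boundary crossing per value.
    const xs: number[] = [];
    for (let i = 0; i < 10_000; i++) xs.push(exports.get_point_x(i));

    // Batched: a single crossing. WASM writes everything into its linear memory,
    // returns an offset, and JS reads the result directly as a typed-array view.
    const ptr: number = exports.fill_points(10_000);
    const points = new Float64Array(exports.memory.buffer, ptr, 10_000 * 2); // x,y pairs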


Could you list some of these downsides and the reasons for their existence?


For starters, the DOM API is huge and expansive. Simply giving WASM the DOM means you are greatly expanding what the sandbox can do. That means lower friction when writing WASM with much much higher security risks.

But further, WASM is more than just a browser thing at this point. You might be running in an environment that has no DOM to speak of (think nodejs). Having this bolted on extension simply for ease of use means you now need to decide how and when you communicate its availability.

And the benefits just aren't there. You can create a DOM exposing library for WASM if you really want to (I believe a few already exist) but you end up with a "what's the point". If you are trying to make some sort of UX framework based on wasm then you probably don't want to actually expose the DOM, you want to expose the framework functions.


> you probably don't want to actually expose the DOM, you want to expose the framework functions.

Aren't the framework functions closely related to the DOM properties and functions?


I first watched it back when it came out. At the time I was living in a different country and San Francisco was just another US city to me. I just happened to re-watch it yesterday (it still holds up) for the first time since moving to the bay area.

It was interesting hearing the names of the locations and bridges that previously meant nothing to me (except the golden gate).

It's free to watch on youtube at the moment: https://www.youtube.com/watch?v=Qy9XYQBBIJ4


I live in the bay and occasionally ride Waymo in SF and I pretty much always have a good time.

I visited NYC a few weeks ago and was instantly reminded of how much the traffic fucking sucks :) While I was there I actually thought of Waymo and how they'd have to turn up the "aggression" slider up to 11 to get anything done there. I mean, could you imagine the audacity of actually not driving into an intersection when the light is yellow and you know you're going to block the crossing traffic?


Semi-related, but just once in my life, I want to hear a mayoral candidate say: “I endorse broken windows theory, but for drivers. You honk when there’s no emergency, block the box, roll through a stop sign — buddy that’s a ticket. Do it enough and we’ll impound your car.”

Who knows, maybe we’ll start taking our cues from our polite new robot driver friends…


This always astounds me about cities that have a reputation for people breaking certain traffic laws. In St. Louis, people run red lights for 5+ seconds after they turn red, and no one seems to care to solve it, but if they'd just station police at some worst-offender lights for a couple of months to write tickets, people would catch on pretty quickly that it's not worth the risk. I have similar thoughts on people using their phones at red lights and people running stop signs.


It’s amazing how effective even a slight amount of random law enforcement can be.

Several of the hiking trails I frequent allow dogs but only on leash. Over time the number of dogs running around off leash grows until it’s nearly every dog you see.

When the city starts putting someone at the trailhead at random times to write tickets for people coming down the trail with off-leash dogs suddenly most dogs are back on leash again. Then they stop enforcing it and the number of off-leash dogs starts growing.


Random sampling over time is substantially as effective as having someone enforce the law 100% of the time. It's something like how randomized algorithms can be faster than their purely-deterministic counterparts, or how sampling a population is quite effective at finding population statistics.


It feels less fair though. When everyone is driving x mph over the limit but only you get pulled over, it sucks. So I agree for efficiency of enforcement, but I'd rather see 100% enforcement (automated if possible), with more warnings and lower penalties.


It's only unfair if the innocent are punished. Lots of murders go unsolved. Does that mean the murderers that do get caught are treated unfairly?


That's a pretty extreme example, maybe the idea doesn't hold as much there. But yeah, if 99% of murders weren't prosecuted, the 1% who get charged might feel like they were singled out (and maybe they were, because of some bias or discrimination). Again, 100% enforcement is better.


It doesn't just "feel" less fair, it often is -- bc it's not truly random, it's selective enforcement which leads to things like "driving while black".


Unpopular opinion, but I actually like traffic enforcement cameras. They don't know what race you are, and they never end up escalating to using lethal force.


The problem with 100% enforcement is that it doesn't allow law enforcement any discretion, and then you end up having to actually, officially change the speed limit, which would probably never happen.


Definitely true in practice, but I don't think we want discretion. What I mean though is as a deterrent, you can either have a "fair" fine that's enforced 100% of the time, or 2x the "fair" amount with 50% enforcement, etc. When it's 100x the "fair" amount with 1% enforcement, and you see everyone else not being enforced, it feels unfair.


Traffic rules do require some discretion though - if eg you don’t allow crossing a double yellow line but a car is broken down blocking the lane, does that mean that the road is now effectively unusable until that car is towed? Lots of examples.

But I’m with you on more enforcement. I’m totally fine with automated traffic cameras and it was working great when I was in China - suddenly seemingly overnight everyone stopped speeding on the highways when I was in Shanghai, as your chances of getting a ticket were super high.


I completely agree. Here in DC, we have sporadic enforcement of things like fare evasion, reckless moped driving, unlicensed food trucks, and speeding on the shoulder of the highway. It definitely helps somewhat.


In Europe we use traffic cameras for this. Going through a red light? A bill is in your mailbox automatically. No need for a whole police station.


In Massachusetts, USA, red light cameras were illegal until very recently, due to a 70s era law specifying that a live policeman had to issue a citation for something like that. From well before traffic cameras were common.


Put a single live policeman in front of 100 camera screens


Before they were common, yes, but they existed in active use back in the 1960s in the Netherlands: https://en.m.wikipedia.org/wiki/Traffic_enforcement_camera


We had a pilot program in NJ for them; they were universally hated. People would slam the brakes on, end up hanging over the edge into the intersection, and throw their car into reverse in a panic to avoid the ticket. It ended up causing a ton of new accidents, so the program was never continued. In Newark people shot at the cameras: https://www.nj.com/news/2012/08/shoot_out_the_red_lights_2_t...


That's an insufficient yellow phase rather than a camera problem. Not sure why NJ would think their population are special snowflakes that can't deal with red light cameras otherwise.


Italians?


Hitting the brakes and getting rear ended is barely even a crash compared to T-boning someone or plowing over pedestrians


I didn't say that. I said they'd panic and throw their vehicle into reverse. Cars/trucks can take the hit, motorcycles/bicycles not so much.


Huge skepticism that bicycles and motorcycles were getting backed into in any appreciable quantities.


If not that, then rear-ended. Here's the state's report on how red light cameras increased accidents: https://dot.nj.gov/transportation/about/publicat/lmreports/p...


How many people are you willing to put in the hospital to prevent people from technically running reds on the yellow-red transition?


> in 2023, 1,086 people were killed in crashes that involved red light running

https://www.iihs.org/research-areas/red-light-running

So let's use that as an upper bound.

How many people were killed by these backups you're talking about?


Sounds like NJ has some terrible drivers


Thankfully sawzalls are cheap and plentiful so people can use much safer practices to disable/remove them:

https://www.cbc.ca/news/canada/toronto/parkside-drive-speed-...


The comment is written as if you specifically advocate for this. Why?


I bet if you come back after they've removed the old one but before they install the new one, you can wreck the threads on the threaded anchors by impacting a wrong-size, higher-grade nut onto them.


Sweden: Their locations are public. There is even an official API.

They are mostly located in sane places.

Apps like Waze consume this API and warn drivers if they’re at risk of getting caught. It’s the deterrence/slowdown at known risky spots they’re after, not the fine, I guess.

I heard that apps warning drivers this way are illegal in Germany?


Aside: what's up with the traffic speed cameras in Sweden? It feels like they're not designed to catch anybody. In my recent drive there it seemed like most of the cameras were in an 80 zone just before it switches to 50 for a tiny town. They wouldn't catch a typical driver who does something like 10 over everywhere -- they would likely have already started slowing down for the 50.

In my city in Canada, that camera would be in the 50 zone.


The typical driver who does something like 10 over everywhere is probably not the biggest safety hazard.

When I lived in a small town in Sweden, the problem was that at night some drivers would blow down the country roads and straight through the small towns at crazy speeds assuming that there was nobody around. On some nights/weekends there were also zero police on duty in the whole municipality, they would have to be called in from a neighboring, larger, municipality.


Because the point is to slow the traffic down, not to extract revenue from the peasantry.

Same as the difference between an obvious speed trap and a "gotcha" speed trap.


But does it slow people down? I doubt it has a significant effect. Very few people are going to be going over 80 a few meters in front of a 50 sign. You're essentially only catching people who are doing close to double the upcoming speed limit. Those who are willing to do that should be getting a much more severe punishment than a speeding ticket.


They do.


I think the general idea is strategic speed shaping before spots where lethal accidents are likely.

So nudging, sort of. There’s a lot of public support for that.


We have them in the US too, but it varies widely by jurisdiction because they're regulated at the state level and policed at the local level.

Oh and it's not a bill, it goes through the legal system so people have the right to argue it in court if they want.


NYC is ramping up on this as well.



Here in my country they removed the cameras in the second largest city after a trial period. It took too much effort to filter out police colleagues running a red (in police or civilian vehicles).


Ah, that is easy here. 1) Civilian vehicles never get leeway. 2) We know the license plates of all police cars, so we just filter them -- or actually, we only do so when they had proper permission to run the light.


Simpler

Plate lookup returns state/municipal as the owner -> ticket gets discarded.


In most of the USA, or at least Arizona, you have to serve someone. Just dropping something in a mailbox doesn't mean dick. The very people that invented the traffic cameras up in Scottsdale were caught dodging the process servers after triggering their own cameras.

In other words, you have to spend hundreds of dollars chasing someone down. By the time you add that on to how easy it is to jam up the ticket in court by demanding an actual human being accuse you, it's not the easy win some may think. You're basically looking at $500+ to try and prosecute someone for a $300 ticket.


NY is not Arizona. They have the plate and send the fine to whomever the vehicle is registered to. If the fine isn't paid they flag the plate and impound the car if it's driven in their state.


In FL, a speed camera can give a car's owner a ticket without needing to know he was the driver. Your perspective is not true nationwide.

"The registered owner of the motor vehicle involved in the violation is responsible and liable for paying the uniform traffic citation issued for a violation"

http://www.leg.state.fl.us/Statutes/index.cfm?App_mode=Displ...


That seems completely fucked to me. Charging people who aren't guilty of any crime with a crime because somebody else was driving their car?


What would be the alternative? Just get whoever was driving your car to pay you back for the fine. If they are not accountable/honorable enough to pay you back, then why were you letting them drive your car in the first place?


The same "alternative" that there is to every other crime in existence, proving the person you charged with a crime actually committed the crime. The default is suppose to be innocence, not guilty. It is the state's responsibility or problem to prove someone is guilty beyond a reasonable doubt, not a citizen's responsibility to prove their continued innocence at all times.


I mean, the state obviously has photo evidence. So you need to show that either the photo was taken in error, that it misidentified your vehicle or that you weren't the legal owner at the time.


They have a photo of a car, but the car cannot commit a crime all on its own, someone has to be driving it. And if you have no idea who is driving when you charge them you are inevitably going to be charging innocent people.


When the police come across a car that's parked illegally, do you think they should need to wait around and figure out exactly who left it before issuing a ticket? Of course not; the vehicle owner is responsible for ensuring it's parked legally.

In the same way, it's the vehicle owner's responsibility to make sure their car is not driven through a red light. If they abdicate that responsibility, they aren't innocent!


I got a couple of them like 20 years ago. The picture was terrible. I just threw the ticket in the trash and never thought about it again.


That's absolutely hilarious. They take a photo of something approximating your vehicle that shows your plate number, toss it in a mail system that loses more than 0.5% of the class of mail used, then according to another poster in NY they impound your car after all this.

Anyplace with the slightest adherence to the rule of law requires the state to positively identify an actual person, not a vehicle owned by a person, that is responsible for a moving violation. And then personally serve that person rather than just coming up with this absolute bullshit excuse that an unreliable mail system with a letter dropped god knows where somehow is legal service.


A couple things wrong here:

1. Camera-issued tickets are not moving violations

2. Your car will not be impounded for failure to pay (maybe unless you have many, many unpaid tickets)

If the photo is bad, you can dispute it! That isn't presumption of guilt, it's the legal system working exactly as intended: one side presents their evidence, and the other side has a chance to respond.

Even if USPS loses 0.5% of mail (I am skeptical; that seems crazy high) the state sends at least three notices, so the chances of you missing every notice of your infraction is something like one in a million.


Only by the most ridiculous fiction is running a red light or speeding not a moving violation. They've intentionally pretended like it's not to get around the due process involved.


What do you mean by people who aren't guilty? The infraction here is allowing your vehicle to run a red light.


How do you allow a vehicle to run a red light that you aren't driving?


Easy:

1. Allow someone else to drive your vehicle

2. That person runs a red light

Your responsibility as the vehicle owner is to either not do step 1, or only do it for people whom you trust will not do step 2.


You don't even have to 'allow' them. Either you could live in a community property state, where your spouse, even a spouse who has initiated divorce against you, legally also owns the vehicle that is in your name. Or someone could steal it. Or someone could steal or duplicate your plates and put it on a nearly identical car, which happened to a friend who had to spend years fighting all the tickets that were mailed to him when an entirely different car (same make/model) used his same plate numbers.


If you can show that your vehicle or plates were stolen, you won't have to pay the ticket; in NYC that is explicitly listed as a possible defense [1].

The spouse thing honestly seems fine — it just means that you're both responsible for paying the ticket, rather than you alone — but if you have an issue it's with the property laws, not the red light cameras.

[1] https://www.nyc.gov/site/finance/vehicles/red-light-camera-v...


My friend "showed" the plates were not his (he couldn't prove the car wasn't stolen because it wasn't -- they only copied his plate) but they kept sending him tickets because apparently it only counts for one ticket. They wanted him to go through a laborious process every time. I think he finally just stopped challenging them because it took too much time, and probably can't go to that state again unless he wants his car seized.


Sounds like the issue here is that the police aren't doing their job!


Arizona also did stakeouts to try and catch this guy:

https://www.nbcnews.com/id/wbna32806142


In CO we have automatic traffic cameras, and to my knowledge they just mail you the ticket, which is usually only a fine (and no license points). It's one of those "automatic plea" tickets where if you fight it, you fight (and risk conviction on) the actual offense, while if you just pay the ticket it will automatically get downgraded to a less serious offense (i.e., parking outside the lines).


Not in New Jersey. I visited my parents and didn’t stop for a full three seconds before making a right on red on a deserted road at night and they fined my dad.


I live in AZ, try driving on Lincoln in Paradise Valley. Everyone is going at 40mph because of the speed cameras. Most people don't want to be fugitives.


I sometimes use Tatum with PV's speed vans parked on the side of the road to head towards downtown Phx and, yes, the common speed is definitely around 40. But pretty much as soon as I'm past McDonald and on 44th St, I resume the normalized 7-8 mph over the posted limit because I know there are no more speed cameras.


Which proves the speed cameras in PV work...


It's just a process server, not cops. It's the equivalent of a glorified delivery man looking for you. The general counsel, an executive, and the employees in general of ATS (the company that does the traffic cameras in most of AZ and, I think, much of the USA) dodge the process servers when they get caught by their own cameras. The people that understand how the process works don't seem too bothered being a "fugitive" as it's all a nothing-burger, and if you get caught all it means is you need to hire a lawyer to make it go away or pay the ticket.


This isn't true; we've had plenty of programs where red light camera tickets were rolled out.

Voters just really don't like them.


They were rolled out but the mailed tickets are legally meaningless, someone has to actually hunt you down within a short timespan (I think 90 days) to create any binding requirement to address it.

   A mailed citation from a photo radar camera is not an official ticket and does not need to be responded to unless it has been formally served to you.
https://rideoutlaw.com/photo-radar-tickets-in-arizona-a-comp...


The problem with traffic cameras in the US was that they became outsourced revenue enhancement rather than public safety.

The cameras would get installed at busy intersections with lots of minor infractions to collect fines on rather than unsafe intersections that had lots of bad accidents. And then, when the revenue was insufficient, they would dial down the yellow light time.

Consequently, and rightly, Americans now immediately revolt against traffic cameras whenever they appear.

(San Diego was one particularly egregious example. They installed the cameras on the busy freeway interchange lights when the super dangerous intersection that produced all the T-bone accidents was literally one traffic light up the hill. This infuriated everybody.)


[deleted]


Nobody thinks it's racist to enforce traffic laws. People think it's racist to selectively enforce traffic laws by race, which usually takes the form of police pulling over Black drivers at higher rates. (But it can also mean installing more traffic cameras in minority neighborhoods!)


Try driving anywhere in the world that's not Western Europe or The USA and you'll quickly see how advanced even our worst cities are when it comes to traffic.

Last time I was in China, drivers simply went through four-way intersections at top speed from all directions simultaneously. If you are a pedestrian, I hope you're good at Frogger, because there is a 0% chance anyone will stop for you. I really wonder how self-driving cars work there, because they must run some kind of insane software that ignores all laws or it wouldn't even be remotely workable.


When I was living in China I got used to crossing large streets one lane at a time. Pedestrians stand on the lane markers with cars whizzing by on either side while they wait for a gap big enough to cross the next lane. It's not great for safety, to put it mildly, but the drivers expect it and it's the only way to get across the road in some places. I was freaked out by it but eventually it became habit.

Then I came back to the US and forgot to switch back to US-style street crossing behavior at first. No physical harm done, but I was very embarrassed when people slammed on their brakes at the sight of me in the middle of the road.


It is kinda funny watching people complain here after visiting almost anywhere in Asia. Can't speak for Japan or Korea though.


I've never been to SK, but in Japan things are -- unsurprisingly, as one might guess -- very orderly. For the most part (in cities at least) you don't jaywalk, even when there are no cars on the road.


Same in Korea, just on the other side of the road: very polite and professional, and no one breaks the rules for the most part, even in major cities.

I know a lot of foreigners like Japan for motorcycling specifically because you can "white line" in most places, and the drivers are attentive.

The one quirk I thought was most interesting was crab-angle stops: at some T-shaped intersections there's an additional stop light 20 feet further back from the intersection. Sometimes the cars will align diagonally to allow more traffic per light and to let whoever is in front have a better angle to see traffic on small roads with poor visibility. Then when the light turns green the diagonally aligned cars move back to normal.

Like ////// to - - - - - -

Officially, the 道路交通法 (Road Traffic Act) doesn’t say “you must angle.” It just requires drivers to stop at the line and confirm safety before entering.

The diagonal stop is more of a local driving custom (practical adaptation) rather than a codified rule.


Wait, so all the sibling comments are actually proposing bringing NYC traffic to a gridlock?


What the actual rules are matters far less than that traffic is predictable. Like in Boston, it's typical for a few cars to get into the intersection to take a left; traffic goes around them in both directions, and only on the change to red do they go. Not technically legal but normal. 4-way stops where nobody stops and everyone times their roll unless there's a reason to. Also not technically legal but normal. Nobody with an opinion worth caring about complains about these things. Blowing lights many seconds after they've changed is still wild IMO.


People are risking their lives and the lives of others, and a fine is supposed to be the thing that finally gets them to comply?


This is what the points system is for.

Any individual infraction might only be a small fine, but it adds points to your license. Collect enough points and you risk license suspension.

I’ve known a couple people who got close to having enough points for license suspension. They drove perfectly for years.


That sounds reasonable to me. Everybody makes mistakes, but nobody should be consistently making grievous mistakes capable of causing serious injury or death to other motorists on a regular basis.

I'm less concerned with a little speeding than I am with blowing through lights and stop signs.


I think in most areas with cameras where fines are automatically assessed to the vehicle owner (who is not necessarily the driver), there are no points. That way it's just a civil penalty and the burden of proof is low. "We have a photo" is enough.


Yes.

If they run a red light today there is some small chance they will injure/kill someone.

If they run a red light with a camera, there is a 100% chance they will receive a ticket.

The key factor is not the magnitude of the penalty (i.e. whether someone dies or they receive a fine) but the chance that they will encounter the penalty.


You've got me: I believe that people respond to financial incentives. I don't think this is a radical position.


Phone while stopped at a red light is explicitly legal here. I don't think it's been a problem?


New startup idea just dropped.


I think (or at least I hope) St Louis is primarily focused on reducing their sky-high murder rates. But who knows.


Blocking the box is a ticket in London. It works.

Edit: let me clarify: there is a camera on every intersection which automatically gives a ticket to everyone who blocks for >5sec. That works.


It is in NYC also, except it's entirely unenforced. We need a lot more red light cameras.

The nominal regulations on automotive behavior are pretty sufficient throughout the US; the main problem is that in most parts of the country traffic law may as well be a dead letter.


> It is in NYC also, except it's entirely unenforced.

It's enforced in the worst congested zones, the intersections around tunnel entrances and midtown, but as I said in my other comment usually by parking enforcement not NYPD.

A workaround in the law is to throw your turn signal on if stranded in the box, this doesn't count as blocking the box.


It is in NYC as well and it's usually enforced by parking enforcement (doesn't carry points but it has a steep fine), if NYPD writes it also comes with points but in my experience they'd rather let the walking ticket printers do it.


I paid a ticket for this in NYC.


Great! and if enforcement were consistent, rule-breaking behavior would probably decline:

> Quick, clear and consistent also works in controlling crime. It’s not a coincidence that the same approach works for parenting and crime control because the problems are largely the same. Moreover, in both domains quick, clear and consistent punishment need not be severe.

https://marginalrevolution.com/marginalrevolution/2015/09/wh...


> Who knows, maybe we’ll start taking our cues from our polite new robot driver friends…

I think this could be an interesting unintended consequence of the proliferation of Waymos: if everyone gets used to drivers that obey the law to letter, it could slipstream into being a norm by sheer numbers.


Here in Washington DC, we've installed a variety of different kinds of traffic cameras. And the local police engage in targeted enforcement actions against things like driving on the highway shoulder or recklessly operating a moped. It makes a difference. An unexpected upside is that the regular police spend a lot less time catching speeders and stop sign runners, which frees them up for more serious situations.


If you look into the fleet size serving Waymo service areas, it's remarkably small. But because they work 24/7 they serve up a lot of rides, punching way above their weight in terms of market share in ride hailing.

Their effect on traffic and how drivers behave will be similarly amplified. It could turn out to be disastrous for Waymo. But I suspect that low speed limits in New York will work to Waymo's favor.


Real question for Waymo will be snow and ice, or do they just get parked in that situation when demand is highest?


I've seen reports that they've been testing Driver 6 in snowy places like around Lake Tahoe and the upper Midwest last winter. I suppose this year we'll find out how well that went.


Ultimately I wouldn’t support this level of snitching (especially in our current political env) but I’ve had the idea of:

A bounty program to submit dash cam video of egregious driving crimes. It gets reviewed, maybe even by AI initially, and then gets escalated to a formal ticket if legit. Once the ticket is paid, the snitch gets a percentage.

Again, I am fundamentally against something like this though, especially now.


Snitching... Please. We're not in the school playground any more. We're talking about taking responsibility when it comes to operating a vehicle in public. There's a huge imbalance of power when it comes to car use especially and we need to restore the balance. People need a recourse against irresponsible and bad drivers.


Do you actually know every specific vehicle code and regulation everywhere you drive? I constantly remark to my dashcam about people breaking the law in my jurisdiction.

Would you be bothered if someone was following you all day, making recordings every time you didn't signal for exactly 5000ms before a lane change, or 300' before a turn? How about not stopping for a full three seconds, or driving 100' farther in the left lane than necessary?

three felonies a day


No I wouldn't be bothered because I actually do those things. As should anyone driving.

And they're not felonies, they're cheap tickets that probably won't even add points to your license.


Three felonies a day: ISBN 978-1594035227

And I'm sorry, I can't take anyone seriously that claims they both know and always follow every vehicle code, wherever they are and wherever they travel. I do understand it allows one to shake their head in disdain for the unwashed cars driven by the lesser drivers of our society.

I can follow anyone, including federal agents, local and state law enforcement, CDL drivers; I will catch something that is a violation of the vehicle code.

If you live anywhere near the Gulf Coast, I'd be happy to tail you for a couple of days and point out every violation you make.


there's a big leap between "every vehicle code" and something as simple as "using a turn signal properly" and "not camping in the left lane"


my list of possible infractions was non-exhaustive if you'll excuse the car pun.


NYPD cops don't like enforcing traffic violations: https://i.redd.it/w6es37v1sqpc1.png (License holders and drivers on the road are up in the same period that summonses are down, too. Traffic is up since pre-covid.)

Now that I live in Toronto we face the same challenges. Politicians may introduce traffic laws to curb dangers and nuisances from drivers, but police refuse to enforce them. As they don't live in the city, cops seem to prefer to side with drivers over local pedestrians, residents or cyclists who they view antagonistically. Broken window works for them because they enjoy harassing pedestrians and residents of the communities they commute into.

So there is a bigger problem to solve than legislation.


Part of the problem is we have police doing far too many jobs. We need to separate out traffic enforcement, mental health responses, and other work into their own focused units. Especially the mental health responses, as far too often police refuse to or (at best) don't know how to de-escalate in those situations.


Bringing a gun and a taser to every problem guarantees that a lot of problems will be "solved" with the wrong tools. It's impossible to train enough people to carry guns and tasers and use them wisely.


It's also expensive training and an ongoing cost when you add it all up.

Canada budgeted the cost of arming its border officers at ~$1 billion.

In the first 10 years, they fired them 18 times. 11 were accidents and the rest were against animals, usually to euthanize them rather than in defense.

Works out to ~$55 million per bullet.

https://www.cbc.ca/news/politics/cbsa-border-guards-guns-1.4...


The current Democratic nominee and frontrunner for NYC mayor plans to do exactly that! He plans to create a Department of Community Safety to take over mental health responses from NYPD.


I agree we need to separate these responsibilities, but when it comes to mental health response, the police themselves are often opposed to alternatives, even while they complain that they're not mental health providers and often can't do anything in those types of situations.

In my city, we've had an underfunded street response program for a few years now, but a lot of people (including a lot of people who don't live here) see it as antagonistic to police and police funding, when really it should just be part of a holistic system to address social issues.

It makes no sense to me that the people who ostensibly care the most about addressing crime and "disorder" on the streets are often the most oppositional to programs that might actually address some of the underlying issues (not all of course, but some).


As a paramedic, multiple times I've watched police walk into a mental health emergency that we were handling satisfactorily, to everyone's contentment, patient, family, bystanders...

... and escalate it into a law enforcement situation.

One situation sticks in my mind. Person had broken a glass bottle on a curb. Family member was sweeping and cleaning that up while we dealt with laceration and planning for in-patient help (they were off their meds).

LE shows up, and immediately starts yelling aggressively at the patient about the broken glass, liability for any tires, injuries. Patient makes some comments back, so LE gets in his face and yells more, leads to patient trying to push off a bit and saying "get out of my face", cop is arresting him for assaulting a police officer.

Only with me and my partner talking to the Sergeant who showed up shortly after did it get de-escalated, but you'd better believe the cop (and even the Sergeant) weren't happy with us about it.


Police quiet quitting and arbitrarily choosing what laws they feel like enforcing is a huge problem.

The most effective fix vis a vis traffic is simply automating so much of it with speed averaging cameras and intersection cameras and taking police out of the equation and retasking them to more important things that only they can do.


Don't police have quotas any more? 40 years ago everybody knew not to speed at the end of the month because a cop that would normally give you a warning for a small speed infraction would give you a ticket instead so they could make this month's quota.


Isn't that what speed cameras are about? They seem a lot more efficient and cheaper. I got a few tickets, nothing too serious -- just ran the yellow a little too close and did 40 in a 25. And it definitely changed my behavior.


In many places outside the USA they just use cameras for box blocking, stop sign rolling, speeding...and there is a system for honking also. But many in the states think automation here is too Orwellian.


They do that in NY too. The worst offenders inevitably have fake/defaced/covered/no license plates. That should be cracked down on very hard but the police and prosecutors are strangely reluctant.


A sound solution in general, but the majority of police and firefighters and government employees with a connection to law enforcement cover their license plates with magnetic 'leaves' and so on. It's an undocumented perk for government employees.


We have all of those things in the states too. Just not ubiquitous.


We don't have much of it, not compared to Europe or Australia. This is a solved problem, but we don't want to solve it.


If only Mitch Hedberg was still alive: https://youtu.be/zonQXdmIlqQ?si=EBrpJiCk2XlhGJIs&t=97


We don't go after moving violations anymore (in NYC) because the driver might have a bad reaction. True story.


>We don't go after moving violations anymore (in NYC) because the driver might have a bad reaction. True story.

Who is "we"? And it's not a "true story." In fact, the NYPD issued almost 52,000 moving violation summonses in July 2025 alone and more than 400,000 year to date.[0]

If 400,000 moving violation summonses just this year is your "true story" about moving violations not being issued to avoid "bad reactions", do you believe in the tooth fairy and santa claus as well?

Or are you referring to the policy that NYPD cars shouldn't endanger the lives of everyone by engaging in high-speed chases on city streets?[1] Which is a completely different thing.

[0] https://www.nyc.gov/assets/nypd/downloads/pdf/traffic_data/m...

[1] https://nypost.com/2025/01/15/us-news/nypd-cops-ordered-not-...

Edit: Clarified prose.


My wife and I took a road trip that included time in SF last year and seeing a Waymo was pretty neat.

To save some money, we stayed in downtown Oakland and took the BART into San Francisco. After getting ice cream at the Ghirardelli Chocolate shop, we were headed to Pier 39. My wife has a bad ankle and can't walk very far before needing a break to sit, and while we could have taken another bus, we decided to take a Waymo for the novelty of it. It felt like being in the future.

I own a Tesla and have had trials of FSD, but being in a car that was ACTUALLY autonomous and didn't merely pretend to be was amazing. For that short ride of 7 city blocks, it was like being in a sci-fi film.


Why does Tesla pretend to be autonomous? My friends with Tesla FSD use it fully autonomously. It even finds a spot and parks for them.


The company selling the car is adamant that none of their cars are fully autonomous in every single legal or regulatory context. Any accident caused by the car is 100% the fault of the driver. But the company markets their cars as fully autonomous. That's pretty much the definition of pretending to be autonomous.


It's a level 2 system; it can't be operated unattended. Your friends are risking their lives, as several people (now dead) have found out.


I think we are at the point where the data suggests they bear more risk when they drive the Tesla themselves. See the Bloomberg report on accidents per mile.


Wikipedia lists two fatal crashes involving Tesla FSD and one involving Waymo.


> one involving Waymo

Are you referring to the one where a Waymo, and several other cars, were stopped at a traffic light, when another car (incidentally, a Tesla) barreled into the traffic stack at 90 MPH, killing several people?

Because I am not aware of any other fatal accidents where a Waymo was even slightly involved. I think it's, at best, misleading to refer to that in the same sentence as FSD-involved fatalities where FSD was the direct cause.


The key difference is that the Teslas killed their passengers, while the Waymo hit someone outside the car (and it wasn't the Waymo's fault; it was hit by another car).


Yes. [1] That incident got considerable publicity in the San Francisco media. But not because of the Waymo.[2][3]

Someone was driving a Tesla on I-280 into SF. They'd previously been involved in a hit-and-run accident on the freeway. They exited I-280 at the 6th St. off ramp, which is a long straightaway. They entered surface streets at 98 MPH in a 25 MPH zone, ran through a red light, and reached the next intersection, where traffic was stopped at a red light. The Tesla Model Y plowed into a lane of stopped cars, killing one person and one dog, injuring seven others, and demolishing at least six vehicles. One of the vehicles waiting was a Waymo, which had no one on board at the time.

The driver of the Tesla claims their brakes failed. "Police on Monday booked Zheng on one count of felony vehicular manslaughter, reckless driving causing injury, felony vandalism and speeding."[2]

[1] https://www.nbcbayarea.com/investigations/waymo-multi-car-wr...

[2] https://www.sfchronicle.com/sf/article/crash-tesla-waymo-inj...

[3] https://www.youtube.com/watch?v=ULalTHBQ3rI&


The question should be less who was at fault and more would a human driver have reacted better in that situation and avoided the fatality. I'm not sure why you think that whether the fatality occurred inside or outside of the car changes the calculus, but in that case only one of the two documented Tesla FSD-related fatalities killed the driver. Judging by the incident statistics of Tesla's Autopilot going back over half a decade, I'm pretty sure it's significantly safer than the average human driver and continues to improve, and the point of comparison in the original post was with human driving rather than Waymo. I have no doubt that Waymo, with its constrained operating areas and parameters, is safer in aggregate than Tesla's general-purpose FSD system.


Only one of the two, and it's not nearly enough data to draw a conclusion one way or another in any case.


Wikipedia lists at least 28 fatal crashes involving Tesla FSD:

https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashe...


FSD is not Autopilot despite the names being conflated today, but even if you want to count all 28, it's not enough to compare raw numbers of fatal incidents without considering the difference in scale. That's not to justify taking your eyes off the road when enabling FSD on a Tesla, but the OP did not suggest that either anyway.


If Waymo were operating at 1000 times the scale then I suppose their total fatalities would be somewhere in the ballpark of 0 x 1000.


There is no world in which New York lets Teslas drive autonomously in the next decade. Had they not been grandfathered in in California, I doubt politics there would have allowed it either.


Sources? Haven't heard of deaths except total idiots sleeping at 80 mph.


If the car needs any occupant to be awake, it is not an autonomous vehicle.

Some of the best marketing ever behind convincing people that the word "autonomous" does not mean what we all know it means.


Are you trying to draw a distinction between sleeping versus looking away from the road and not paying attention to it? I expect both situations to have similar results with similar levels of danger in a Tesla, and the latter is the bare minimum for autonomous/unattended.


You don't need to cite accidents when you're stating the true fact that the system is not approved for unattended use.


It's just pretending to do that, seemingly?


If I can't use the center console to pick a song on Spotify without the car yelling at me to watch the road, it's not autonomous.


No, rather, if the manufacturer of the self-driving software doesn't take full legal liability for actions taken by the car, then it's not autonomous. This is the one and final criterion for a self-driving vehicle.


Sounds like we're in agreement then.

Right now, Tesla skirts legal liability by saying that the driver needs to watch the road and be ready to take control, and then uses measures like detecting your hand on the wheel and tracking your gaze to make sure you're watching the road. If a car driving on FSD crashes, Tesla will say it's the driver's fault for not monitoring the drive.


Hell, they'll even hold a press conference touting (selective data from) telemetry to say "The vehicle had been warning him to pay attention prior to the accident!"

And then four months later when the actual accident investigation comes out, you'll find out that yes, it had. Once. Eighteen minutes prior to the accident.

And then to add insult to that, for a very long time, Tesla would fight you for access to your own telemetry data if, for example, it was needed in a lawsuit against someone else for an accident.


That is for the lawyers, not indicative of capability.


I’ve taken a nap in my Waymos. One can’t in a Tesla. That is a difference in capability.


There are news articles of people doing that on the 5 freeway


I was in a Waymo in SF last weekend riding from the Richmond district to SOMA, and the car actually surprised me by accelerating through two yellow lights. It was exactly what I would have done. So it seems the cars are able to dial up the assertiveness when appropriate.


An autonomous vehicle's hivemind knows the exact duration of all yellow lights, even ones that vary based on traffic flow.


Not if they change the timing.


I'm sure "timing of yellow" is only a few parameters in its network at this point. And it's continuously training, it can probably one-shot the timing changes ( one taxi ride through maybe 3 lights ).


It doesn't seem impossible technically to up the assertiveness. The issue is the tradeoffs: you up the assertiveness, and increase the number of accidents by X%. Inevitably, that will contribute to some fatal crash. Does the decision maker want to be the one trying to justify to the jury knowingly causing an expected one more fatal incident in order to improve average fleet time to destination by 25%?


Nah, it's not that simple. Excessive passiveness causes ambiguity which causes its own risks.

You want the cars to follow norms, modifying them down slightly for safety in cases where it's a clear benefit.


Reinforcement learning is a helluva drug. I'm sure by now Waymos can time yellows in SF to within a nanosecond, whereas humans, who will only ever drive through so many yellows, will never get that much training data.


A human can know the yellows on a few routes. A Waymo can pull over, observe a given intersection for an hour, and tell every other Waymo that exists precisely how long that light lasts.

It's not just collecting the information; it's the ability to spread it.


When red-light cameras are installed at an intersection, the number of rear-end accidents typically increases as drivers unexpectedly slow down instead of speeding up at yellow lights.

The cost of these accidents is borne by just about everyone except the authority profitably operating the red-light cameras. (To be fair, some statistics also show a decrease in right-angle collisions, which is kinda the point of the red-light rules to begin with.)


That seems only like a temporary problem until people get used to actually stopping at red lights, as they are supposed to. After the initial acceptance phase, it should minimise accidents over the longer term.


Unless there is a warning of how long is left on the yellow light, it’s an unsolvable problem because there is an asymmetric risk of stopping vs accelerating


> it’s an unsolvable problem because there is an asymmetric risk of stopping vs accelerating

This just sounds like everyone is speeding and/or distracted.


The lights should be designed so that if you don't have enough space to stop with a mild deceleration, you just go through. If a mild deceleration gets you rear-ended, then of course that's an unsolvable problem.


No one wants to risk a ticket with a guess at how long the yellow is going to be, or whether they'll make it through or not. That is the unsolvable part. Yellows are inconsistent, and you aren't accounting for slow-moving traffic ahead of you that might cause you to block the intersection, etc.

There was actually a scandal in Chicago where a study found that the city systematically reduced the length of yellows only on lights that had red light cameras in order to harvest tickets.


I feel like the subtext of all these concerns is that you'd need to drive very carefully to reliably avoid camera tickets... and nobody wants to drive that carefully. I get it, I don't either, and I do get occasional camera tickets. But like: I should also be driving more carefully.


Slamming on the brakes because you're anxious about a yellow light is not careful driving. But that's what red light cameras do.


Then they shorten the yellow so that it isn't "with a mild deceleration" but a full-on stomp-on-the-brakes stop.


>speeding up at yellow lights

I remember reading somewhere that accelerating at an orange light is actually a ticketable offense?


My memory may be outdated or only local to my jurisdiction but my understanding is that yellow means “do not enter the intersection” where “intersection” begins before the box, usually with some alternate street indicator, like broken white lines turning to solid, at a braking distance that accounts for posted speed limit and yellow light duration.


My general understanding is that you can still enter the intersection when the light is yellow.

If it is already red, you may be photographed and a fine sent to your address. Or a nearby cop can give you a ticket.


Each Waymo is equipped with multiple cameras (potentially LPR), LIDAR, etc. The car knows when the vehicles around it are breaking traffic laws and can provide photographic/video evidence of it. Imagine if Waymo cars started reporting violators to the police, and if the police started accepting those reports. Someday they might.


Isn't it too dystopian to have cars follow you around and report you to authorities? I can easily imagine some bad scenarios.


Yes it could potentially be very dystopian for human drivers. That doesn't mean it won't happen. Police departments could make a lot of extra money from the additional traffic tickets; there is a financial incentive for them to do this.


Making money from tickets supports the wrong behavior: finding excuses to ticket you for anything to get extra money. This often leads to cops looking for cheap, easy ways to get the extra cash instead of doing work that matters more for safety, where their chance of writing a ticket is lower.


I had my second Waymo ride in SF 2 weeks ago and I had to press the support button: it was behind a large bus that was backing up to parallel park. The bus was waiting for the Waymo to get out of the way while the Waymo was waiting for the bus to move forward.

It took only a few seconds for a human to answer the support request and she immediately ordered the Waymo to go to a different lane. Very happy with the responsiveness of support, but there's clearly still some situations that Waymo can't deal with.


Eventually the waymo would determine the bus wasn't moving and go around. I had the same situation happen with a garbage truck, but I didn't press the button. It can handle the scenario if you just wait.


>> could you imagine the audacity of actually not driving into an intersection when the light is yellow and you know you're going to block the crossing traffic?

I wonder how many Waymos following the rules would be needed to reduce gridlock.


Waymo in SF pretty much drives like a human, and that includes doing human things like cutting lanes, stopping wherever it feels like, driving in the bus lane etc. I think it’ll be fine in NYC


Waymo in LA also drives pretty much like a human here would, which includes: not yielding for pedestrian-only crosswalks, running red lights, driving in the oncoming traffic/suicide/bike lane, occupying two lanes, blocking entrances/driveways/intersections, and stopping/parking in no-stop/parking curbs.

They're only really phenomenal at not hitting things; they really aren't good/courteous/predictable drivers under most conventional definitions.

Still, I think rollout in NYC will be fine. NYC generally drives slower and much less aggressively than LA, and slower gives the Waymo plenty of reaction time to not hit things.


I can attest to that. I live in Orange County and occasionally see Waymos when I go to LA, and they'll do things like merging with very little gap or merging in the middle of intersections.


SF traffic is but a single speck of NYC's.


Traffic? The issue half the time in NYC is the drivers. I can't compare it to SF since I haven't been there in a while, but I still thought it was not as congested as NYC.

NYC has a greater population and also a greater number of registered cars compared to SF, however.


As a comparison, I feel safe riding a motorcycle in SF. I don’t think I would ever ride a motorcycle in NYC.

Riding safely requires predicting what the cars around you are about to do. I find it an order of magnitude harder to predict driver behavior in NYC.


The solution to traffic is transit, not computers driving cars.


People complain a lot about drivers in dense eastern states, such as Massachusetts, New York, Rhode Island, New Jersey, etc. but compare the traffic fatality statistics:

https://www.iihs.org/research-areas/fatality-statistics/deta...

Having grown up driving in these places, I can confirm that people drive a whole lot more aggressively, but what blows my mind driving damn near anywhere else in the country is how inattentive many drivers are. Around here, our turns are tight and twisty, the light cycles at our 6-way intersections are too short, most streets are one lane but on the ones that aren't, lanes disappear without warning, some lanes that are travel lanes during the day have cars parked there at night... all of this means that you need to a) be much more attentive, and b) be more aggressive because that's the only way anybody gets anywhere at all.

It's a cultural difference. Almost any time I've encountered anyone complaining about rudeness in a busy northeastern city it was because they were doing something that inconvenienced other people in a way that wasn't considered rude where they're from: pausing for a moment in a doorway to check a phone message, not immediately and quickly ordering and having their payment method ready when they reached the front of the line at a coffee shop, not staying to the right on escalators if they're just standing there and not climbing/descending... all things that are rude in this environment and people are treated the same way rude people are treated anywhere else.

That culture expresses itself in the driving culture. If those 3 extra people didn't squeeze through after that red for 3 or 4 light cycles, suddenly you're backed up for an entire light cycle which is bad news.

Waymo cars are designed for a different style of driving. I'm skeptical that they will easily adapt.


This is an interesting point of view, and I think it intuitively makes sense. But it breaks down when considering people who block the flow of traffic by running red lights and clogging the intersection - that's just straightforwardly worse for everyone except the blocker.


People do that everywhere I’ve ever driven. Not getting in other people’s way is a core cultural tenet here more than most places but there are self absorbed jerks everywhere. Consider the vitriol unleashed on people that do that. It’s not acceptable.


Tried Waymo in SF and LA, and the service was great. The only problem I noticed is that sometimes it tells you they'd pick you up in 5 minutes, and then when it's almost over they tell you "sorry, it's actually going to be 20 minutes now". Since it's still new technology, I always gave it enough buffer so it never actually was a problem for me, but they probably could do better than that... Another weird thing was it chooses strangest places to stop. E.g. I asked it to pick me up at the hotel once, and it drove right past the hotel way to the end of the block where by coincidence a couple of homeless people were camping. Not that it led to any problems, just weird, it could have stopped right where hotel had a convenient place for loading/offloading of people. Maybe eventually that gets sorted out.


I was on Market Street yesterday on my bike next to a Waymo. A bunch of cars were blocking the intersection when we had the green. The light turned red and the cars blocking the intersection were able to move. I decided to stay, but the Waymo sped through despite the light being red. I regretted not crossing.


Honestly the train system in NYC is so good, I have only taken a cab a few times since I moved here. I’ll probably take a waymo once if they roll it out here for the novelty of it, but I’d rather see people getting exciting about public transit. Life is so much better when you don’t have to depend on cars to get you places.


Driving in most of the city isn't that bad. Even most of Manhattan is fairly regular driving compared to most of the country. It really isn't until you're near midtown that the insanity kicks up.


> occasionally ride Waymo in SF and I pretty much always have a good time

Surreal. You have to step back and absorb what you just said. We have self driving cars, insane.


Imagine somewhere like Bangkok with millions of motorcycles that completely ignore traffic laws.

Self-driving is a non starter in many parts of the world.


A simple example is `toJSON`. If an object defines that method, it'll get invoked automatically by JSON.stringify and it could have arbitrary side effects.

I think it's less about side effects being common when serializing, just that their fast path avoids anything that could have side effects (like toJSON).

The article touches briefly on this.
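
For anyone who hasn't run into it, here's a minimal sketch (plain JavaScript, names made up for illustration) of how an innocent-looking stringify call can end up running arbitrary user code:

    let calls = 0;
    const obj = {
      id: 42,
      // JSON.stringify invokes this method automatically during serialization.
      toJSON() {
        calls += 1;                      // visible side effect: mutates outer state
        console.log("serializing obj");  // another side effect: I/O
        return { id: this.id };
      },
    };

    JSON.stringify(obj); // '{"id":42}' -- but `calls` is now 1 and a log line was printed

So a fast path that wants to guarantee "no observable side effects" pretty much has to bail out as soon as a toJSON is present.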


Right, I get that you could define something to have side effects. My question is why would you? What are some expected visible side effects of converting something to JSON?

That said, I see that they called it out as you say, now. When I first read it, I thought they watched for side effects.

I'm assuming they have an allow list for all standard types. The date types, in particular, often have a toJSON that seems like it should still be used? (Or am I wrong on that, too? :D )
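
For what it's worth, Date is a case where toJSON exists and is harmless -- it just delegates to toISOString, no side effects -- which is (my assumption, not something stated in the article) why a serializer can safely special-case the built-in types on an allowlist while treating user-defined toJSON as "anything could happen":

    const d = new Date(0);
    d.toJSON();            // "1970-01-01T00:00:00.000Z", same as d.toISOString()
    JSON.stringify({ d }); // '{"d":"1970-01-01T00:00:00.000Z"}'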


An alternative that is available right now is to get an M1/M2 MacBook Air and run Asahi Linux on it. The older models are pretty cheap, but still quite fast. There are some missing features, but it runs really well. I've been using it as my home driver for over a year and it's really solid.


I am so tempted. Is battery life good with Asahi?


It's unfortunately not (yet) as good as in MacOS, but acceptable.

