I think we might need to take a hard look at the higher end path of CS and ask why we're not focused more on Software Engineering.
Research is great, but if we're trying to train people to make real things, there's a whole different set of skills where you only need a thin veneer of CS training and a deeper understanding of software maintenance.
Do you hold this position for everything or only some things?
If it's some things, how do you determine which things can have some level of risk that's acceptable and which things can have no acceptable risk? And if there is an acceptable risk for some things, how do you set that level?
TBF, if you're paraphrasing others as "curing cancer" but what they claimed is "treats cancer," then the issue may be comprehension or active listening.
What are the chances some non-trivial proportion of the millions of cars on the road will not have their LIDAR designed, built, installed or calibrated correctly? I suspect this is going to be a recognized public health issue in a decade or two. (It will likely be an issue well before that, but unrecognized...)
There is an incentive to use higher power. Push the edge of safety limits to achieve higher performance from lower cost devices, for example.
It occurs to me there is an opportunity here. Passive lidar detectors sampling fleets of vehicles in the real world, measuring compliance and detecting outliers, would be interesting. A well placed, stationary device could sample thousands of vehicles every day. Patterns will emerge among manufacturers. Failure modes will be seen.
Cursory queries on this reveal nothing. Apparently, no one is doing this. We're all relying on front end certification and compliance. No thought given to the real world of design flaws, damage, faulty repairs, unanticipated failure modes, etc.
Apparently there are lidar jammers. I bet those are rigorously compliant with Class 1 safety regs... No one manufacturing those is ever going to think: "hey, why not a 50W pulse train?"
For every one of those safety measures to be intentionally bypassed or ignored, the numbers are assuredly non-zero.
But is it going to rise to a level of concern? I don't think we're going to see a ton of cars with blinding lasers installed, unless they are installed to intentionally blind people.
If you have used Face ID, or someone has used face detection on a modern smartphone on you, or if you've pulled up to a modern intersection, you've been blasted with lasers. It may come one day that that's the largest concern, but today it's not my primary problem, and investing in FUD isn't going to bring any benefits.
That's a lot of qualifiers. And replace "humans" with "cameras" and I'm reminded that despite their well-intentioned efforts Volvo has failed there already.
It really isn't though. It's how you do something correctly. Drill into the details of just about any system and you'll see there's a lot of assumptions based on the layers above and below.
A good safety system requires multiple of these failures to occur together to become unacceptable in risk.
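The intuition above can be made concrete: if the safety layers fail independently, the probability of the unacceptable outcome is the product of the per-layer failure probabilities, which is why stacking even modestly reliable layers pays off. A minimal sketch, using made-up illustrative probabilities (not real lidar failure data):

```python
# Sketch: combined failure probability of independent safety layers.
# The per-layer numbers below are assumptions for illustration only.
from math import prod

# e.g. sensor defect, firmware interlock bug, bad field calibration
layer_failure_probs = [1e-3, 1e-2, 5e-3]

# A single layer fails at up to 1e-2; all three failing together
# (required for the hazard to reach a person) multiplies out to 5e-8.
combined = prod(layer_failure_probs)
print(f"worst single layer: {max(layer_failure_probs):.0e}")   # 1e-02
print(f"all layers at once: {combined:.0e}")                   # 5e-08
```

The independence assumption is doing real work here: a common cause (say, one vendor's shared calibration tool) correlates the layers and the multiplication no longer holds, which is exactly the failure mode the fleet-sampling idea upthread would catch.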
>Corporate life under a tyrant who demands prayers or pronouns were unthinkable not long ago.
See, you give the game away with shallow complaints of forced pronouns. It's a fantasy that allows you to sit on the fence while only really casting strong blame in one direction, while excusing it when the wind blows a new way.
Is it chopping down a tree that shouldn't be chopped down? I'd probably need more people and equipment to effectively stop them, because I'm probably not going to convince them with pretty words if they are already doing something that shouldn't be done.
Is it trying to kill a member of my family? Yeah, I'd probably try to intervene sans weapons in order to prevent the loss of life of another family member.