There’s a section of I-15 in Utah’s Salt Lake County that reliably has a crash on weekdays at 6pm. It’s unfortunately at a pinch point in the mountains with no good alternate route… very annoying.
In the same way that Google Maps shows eco routes, it’d be fun for them to show “safest” routes that avoid areas with common crashes. (Not always possible, but valuable knowledge when it is.)
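As a rough sketch of how such a “safest route” weighting could work (not how Google Maps actually does it): treat each road segment as a graph edge whose cost blends travel time with historical crash counts. The graph, the crash figures, and the alpha trade-off below are all made up for illustration.

```python
# A minimal sketch of "safest route" selection, assuming a road graph where
# each edge carries a travel time and a (hypothetical) historical crash rate.
# The route cost blends time with crash risk; alpha tunes how much safety matters.
import networkx as nx

G = nx.DiGraph()
# Edge data is invented for illustration: (from, to, minutes, crashes_per_year)
edges = [
    ("A", "B", 10, 0.2),   # fast but crash-prone segment (the I-15 pinch point)
    ("A", "C", 14, 0.01),  # slower detour with few recorded crashes
    ("B", "D", 5, 0.05),
    ("C", "D", 4, 0.02),
]
alpha = 20.0  # minutes of delay we'd accept to avoid one expected crash/year
for u, v, minutes, crashes in edges:
    G.add_edge(u, v, cost=minutes + alpha * crashes)

# With these numbers, the detour A -> C -> D wins despite being slower.
print(nx.shortest_path(G, "A", "D", weight="cost"))
```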
Much of the scientific medical literature is behind paywalls. OpenEvidence has tapped into that data source (whereas ChatGPT doesn't have access to it). I suspect that if the medical journals made a deal with OpenAI to open up access to their articles and data, OpenEvidence would have to fall back on its existing customers and the stickiness of its product, and in that circumstance they'd be pretty screwed.
Getting everyone to fall in love with the thing is not doing the thing... I learned this as a data scientist brought in to work on a project that ended soon thereafter. A team of 20 people spent 1.5 years getting people to love an idea that never materialized. The time was wasted because the technical limitations and issues surfaced too late... it died as a 40-page postmortem that will never see daylight.
I learned that lesson as a solo dev on a project that lasted a year, then learned it again on a team of 4 during a 2-year project. I've not had to learn the lesson again, but I've certainly trod the same path... 20 people (including some VERY expensive contractors), 3.5 years, and AU$80m to deliver what amounts to a timesheeting system that needs a team of 10 people manually massaging the data every month to make it work.
How do you not become "toxic" after that? How do you retain a chipper attitude when you know with rock-solid certainty that even if the project is successful, it's likely by accident?
What abstraction levels do you expect will remain solely in the human domain?
The progression from basic arithmetic to complex ratios and basic algebra, graphing, geometry, trig, calculus, linear algebra, differential equations… all along the way, there are calculators that can help students (Wolfram Alpha, basically). When they get to theory, proofs, etc., historically that's where the calculator ended, but now there are LLMs… it feels like the levels of abstraction without a “calculator” are running out. (See the sketch below.)
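To make that concrete: a symbolic tool like sympy already acts as the “calculator” well past arithmetic, up through calculus and differential equations. The specific expressions below are just illustrative examples.

```python
# sympy as the "calculator" across several rungs of the abstraction ladder,
# much like Wolfram Alpha. Each line solves a problem from a different level.
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")

print(sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x))       # algebra: [2, 3]
print(sp.diff(sp.sin(x) * sp.exp(x), x))           # calculus: symbolic derivative
print(sp.dsolve(sp.Eq(f(x).diff(x), f(x)), f(x)))  # differential equations: f(x) = C1*exp(x)
```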
The compiler was the “calculator” abstraction of programming, and it seems high-level languages now have LLMs that convert natural language to code as a sort of compiler. Especially given the explicitly stated goal of LLM companies to create the “software singularity”, I’d be interested to hear the rationale for which abstractions in CS will remain off limits to LLMs.
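A minimal sketch of that “LLM as compiler” loop, assuming the OpenAI Python client with an API key in the environment; the model name, prompt, and spec are illustrative, not anyone's actual pipeline:

```python
# Natural language in, code out: the LLM playing the compiler's role.
# Assumes OPENAI_API_KEY is set; model and prompt are placeholder choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

spec = "Write a Python function that returns the n-th Fibonacci number."
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Emit only runnable Python code."},
        {"role": "user", "content": spec},
    ],
)
generated_source = response.choices[0].message.content
print(generated_source)  # the "object code" of this natural-language "compile"
```

One place the analogy breaks down: unlike a compiler, the output isn't deterministic or verified against the spec, so the “object code” still needs human review.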