Hacker News | olliepro's comments

There’s a section of I-15 in Salt Lake County, Utah, which reliably has a crash on weekdays at 6pm. It’s unfortunately at a pinch point in the mountains with no good alternate route… very annoying.

In a similar way that Google Maps shows eco routes, it’d be fun for them to show “safest” routes which avoid areas with common crashes. (Not always possible, but valuable knowledge when it is.)


That feels like it would cause induced demand for crashes though.

Much of the scientific medical literature is behind paywalls, and OpenEvidence has tapped into that data source (whereas ChatGPT doesn't have access to it). I suspect that if the medical journals were to make a deal with OpenAI to open up access to their articles and data, OpenEvidence would have to rely on its existing customers and the stickiness of the product; in that circumstance, they'd be pretty screwed.

For example, only 7% of pharmaceutical research is publicly accessible without paying. See https://pmc.ncbi.nlm.nih.gov/articles/PMC7048123/


Do you think ~10B USD should cover all of them? For both indexing and training? Seems highly valuable.

Edit: seems like it is ~10M USD.


It depends on your thing. If the marathon was just the motivation, your thing is running… if the marathon was the bucket-list item, it is the thing.


Getting everyone to fall in love with the thing is not doing the thing… I learned this as a data scientist brought in to work on a project which ended soon thereafter. A team of 20 people spent 1.5 years getting people to love an idea that never materialized. The time was wasted because the technical limitations and issues surfaced too late… it died as a 40-page postmortem that will never see daylight.


I learned that lesson as a solo dev on a project that lasted a year, then learned it again on a team of 4 on a 2-year project. I've not had to learn the lesson again, but I've certainly trod the same path… 20 people (including some VERY expensive contractors), 3.5 years, and AU$80m to deliver what amounts to a timesheeting system that needs a team of 10 people manually massaging the data every month to make it work.

How do you not be "toxic" after that? How do you retain a chipper attitude when you know for a rock-solid certainty that even if the project is successful it's likely by accident?


Everyone's threshold is different. I aspire to "move fast and break things", but more often than not, I obsess over the rough edges.


The more I use AI to do the thing, the more it feels like I didn't do the thing.


Yet the thing got done. Perhaps in the age of AI, it’s about making things get done.


What abstraction levels do you expect will remain solely in the human domain?

The progression from basic arithmetic to complex ratios and basic algebra, graphing, geometry, trig, calculus, linear algebra, differential equations… all along the way, there are calculators that can help students (Wolfram Alpha, basically). When they get to theory, proofs, etc., historically that's where the calculator ended, but now there are LLMs… it feels like the levels of abstraction without a “calculator” are running out.
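To make the “calculator” rungs concrete: a computer algebra system like SymPy (used here purely as an illustrative stand-in for Wolfram Alpha; this sketch isn't from the comment) already mechanizes the steps up through calculus and linear algebra:

```python
import sympy as sp

x = sp.symbols('x')

# calculus rung: symbolic differentiation, no human applying the chain rule
deriv = sp.diff(sp.sin(x)**2, x)
print(deriv)  # 2*sin(x)*cos(x)

# linear algebra rung: solve the system 2x + y = 3, x + 3y = 5
M = sp.Matrix([[2, 1], [1, 3]])
b = sp.Matrix([3, 5])
print(M.solve(b))  # Matrix([[4/5], [7/5]])
```

Proofs were the rung above this, where mechanization historically stopped, which is the point the comment is making about LLMs.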

The compiler was the “calculator” abstraction of programming, and high-level languages now have LLMs to convert natural language to code as a sort of compiler. Especially given LLM companies’ explicitly stated goal of creating the “software singularity,” I’d be interested to hear the rationale for which abstractions in CS will remain off limits to LLMs.


I made a skill that reflects on past conversations via parallel headless Codex sessions. It's great for context building. Repo: https://github.com/olliepro/Codex-Reflect-Skill


I was thinking about building something like this, but I don't have Codex running on a server. Keep me posted on how it goes!


I believe the idea is that it “files away” the files into folders.

