JohnCClarke's comments

I definitely want this for QA. And luckily I haven't quite finished spending this Sunday setting up Claude Code in a container...

Instead I'm just going to give Claude a separate laptop. Not quite air-gapped, but only need-to-know data, and dedicated credentials for Claude.


I'm (genuinely) curious about the overwhelming preference for PostgreSQL on HN. I've always used MySQL for OLTP, and been very happy with it.

If you've seriously considered both and then selected PostgreSQL, please comment and tell me what drove that decision.

Note: I'm only talking about OLTP. I do see that PostgreSQL adds a lot for OLAP.


Personally, it's the history for me. MySQL started with MyISAM, not InnoDB.

So if you wanted an actual transactional database back in the day, MySQL was definitely not it. You needed Postgres.

InnoDB was not MySQL. It was an add-on. So if I had to use MySQL it was with InnoDB, of course, but why not just use Postgres? And after the Oracle acquisition... Yes, I know about MariaDB. But I'm already on Postgres, so...


I'm also curious about this, especially if anyone has operated Postgres at any kind of scale. At low scale, all databases are fine (assuming you understand what the particular database you're using does and doesn't guarantee).

Postgres has some really great features from a developer point of view, but my impression is that it is much tougher from an operations perspective. Not that other databases don't have ops requirements, but MySQL doesn't seem to suffer from a lot of the tricky issues, corner cases, and footguns that Postgres has (e.g. the issues mentioned in a sibling thread around necessary maintenance having no suitable window to run at any point in the day). Again, I note this is about ops, not development. MySQL has well-known dev footguns. Personally I find dev footguns easier to countenance, because they likely present less business risk than operational ones. I would like to know if I am mistaken in this impression.


Excited for this! A couple of questions:

1. What is the resolution of timestamps (milli-, micro-, or nanoseconds)?

2. Any plans for supporting large data BLOBs (e.g. PostgreSQL TOAST)? This would open up a lot of use cases, and it would be really interesting to make them compatible with the in-memory emphasis for the atomic data types.


Not 100 hours a week. More like 50. Taxes to the local baron, lord, monastery, or whoever, took the other 50.


Well, "Arcadia" is good, but "Tron: Legacy" & "Star Trek" are better. Famously, he hated ghostwriting, so I hope he can make his peace with it now.


FWIW: I think we've all been there.

I certainly did the same in my first summer job as an intern. Spent the next three days reconstructing Clipper code from disk sectors. Ever since, I've taken backups very seriously. And I double-check del/rm commands.


What percentage of the papers were written by AI?

And, if your AI can't write a paper, are you even any good as an AI researcher? :^)


Did you mean: “if your AI can’t write a paper that passes an AI detector, are you any good as an AI researcher?”


The question is not whether the reviews are AI-generated. The question is whether the reviews are accurate.


No... that is not the question.

This is a conference purporting to do PEER review. No matter how good the AI, it's not a peer review.


Exactly this. Whether the research is actually useful and correct is what matters. Also, if the review is accurate, shouldn't that elicit applause instead of schadenfreude? It's feeling a bit like a click-bait rage-fantasy fueled by Pangram, capitalizing on the idea that AI promotes plagiarism and replaces jobs, and now the creators of AI are all-too-human... and somehow this AI-detection product is above it all.


LOL. So basically the correct sequence of events is:

1. The scientist does the work, putting their own biases and shortcomings into it.

2. The reviewer runs AI, generating something that looks plausibly like a review of the work but represents the view of a sociopath without integrity, morals, logic, or any consequences for making shit up instead of finding out.

3. The scientist works to determine how much of the review was AI, then acts as the true reviewer for their own work.


Don't kid yourself: all those steps have AI heavily involved in them.

And that's not necessarily a bad thing. If I set up RAG correctly, then tell the AI to generate K samples, then spend time picking out the best one, that's still significant human input, and likely very good output too. It's just that what the human did is invisible.

And as models get better, the necessary K will become smaller...
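
To make that concrete, here's a minimal sketch of the loop I mean. Here generate() and score() are hypothetical stand-ins for one RAG-augmented model call and for the human judging each draft:

    def best_of_k(prompt, k, generate, score):
        # Draw k candidate drafts from the model, then keep the one
        # the scorer (here, a proxy for the human) rates highest.
        candidates = [generate(prompt) for _ in range(k)]
        return max(candidates, key=score)

The invisible human work lives entirely in score(); the model just supplies candidates.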


That's a strategy for producing maximally convincing BS content, but the scientific method was absent. I'm sure it was an oversight... ; )


That’s on you. You get to decide what “best” means when picking among the K, so you only get BS if you want BS.

I occasionally get people telling me AI is unreliable, and I tell them the same thing: the tech is nearly infinitely flexible (computing over the space of ideas!), so that says a lot more about how they’re using it.


So you really think the person writing the paper is the right one to review it?


In 1967 Louis Washkansky lived 18 days after receiving the world's first heart transplant. Today Bert Janssen has lived 41 years with a transplanted heart.

Progress is hard won, and the first people to undergo new procedures are the ones who have it hardest.


If the cost of returning and refurbishing a rocket stage is less than the cost of building a new one, then that gives a competitive advantage on price.
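
Back-of-the-envelope, with purely made-up numbers just to show the shape of the argument:

    # All figures hypothetical; only the amortization structure matters.
    new_build = 50.0   # cost to build a new stage
    refurb = 10.0      # cost to recover and refurbish a flown stage
    flights = 10       # flights per stage before retirement

    expendable_per_flight = new_build                                     # 50.0
    reusable_per_flight = (new_build + (flights - 1) * refurb) / flights  # 14.0

Reuse wins whenever refurbishment is meaningfully cheaper than a new build, even after amortizing the original stage across its flights.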

But space flight is a strategic competency for states, so China, the EU, and eventually even Russia will all develop reusable rockets. I suspect that launch capacity will far exceed demand and that none of them will make profits for a long time.


If only the EU would treat affordable mass space launch as the critical strategic capability it is :P

