I'm working on a search engine for clinical trials around the world.
I was frustrated by the process of searching for clinical trial info using the clunky and slow registry websites. So I aggregated all trials around the world and made the search faster. Additional features:
- You can watch a trial and get email updates
- Sometimes a trial is finished and a paper has been published, but the official registry page never gets updated. I try to find these papers and link them
- I try to link trials with the same drug together, showing the drug going through different phases
https://findtrial.co/
I have a theory: the recent advances in coding agents have shocked everyone. They are of such unheard-of novelty that everyone thinks they've discovered something profound. Naturally, each person thinks they're the only one in on it and feels the need to share. But in reality it's already public knowledge, so they add no value. I've fallen into this trap many times over the last couple of years.
Interesting article. What prevents me from dumping the memory of the shell process and reading the local variable?
My program requires a password on startup. To run it in a loop, I used a script that takes the password as input, stores it in a local variable, and echoes it to the program. At the time I thought the only weakness was the memory of the shell process. But it was the best I could come up with.
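For concreteness, here is a minimal sketch of that wrapper in Python (the binary name and prompt are hypothetical). The password never touches argv, the environment, or disk, but it does sit in this process's memory, which is exactly the weakness noted above:

    # Minimal sketch: prompt once, keep the password only in process
    # memory, and feed it to the child program's stdin on every restart.
    # "./my-program" is a hypothetical binary that asks for a password
    # on startup.
    import getpass
    import subprocess

    password = getpass.getpass("Password: ")  # not in argv or the environment

    while True:
        proc = subprocess.run(
            ["./my-program"],
            input=password + "\n",  # echoed into the program's stdin
            text=True,
        )
        if proc.returncode != 0:  # stop looping if the program fails
            break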
I recently started digging into databases for the first time since college, and from a novice's perspective, Postgres is absolutely magical. You can throw in 10M+ rows across twenty columns, spread over five tables, add some indices, and get sub-100ms queries for virtually anything you want. If something doesn't work, you just ask it for a query plan with EXPLAIN ANALYZE and immediately know what index to add or how to fix your query. It blows my mind. Modern databases are miracles.
I don't mean this as a knock on you, but your comment is a bit funny to me because it has very little to do with "modern" databases.
What you're describing would probably have been equally possible with Postgres from 20 years ago, running on an average desktop PC from 20 years ago. (Or maybe even with SQLite from 20 years ago, for that matter.)
Don't get me wrong, Postgres has gotten a lot better since 2006. But most of the improvements have been in terms of more advanced query functionality, or optimizations for those advanced queries, or administration/operational features (e.g. replication, backups, security).
The article actually points out a number of things only added after 2006, such as full-text search, JSONB, etc. Twenty years ago your full-text search option was just LIKE '%keyword%', and it would be both slower and less effective than real full-text search. It clearly wasn’t “sub-100ms queries for virtually anything you want” like GP said.
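For contrast, a rough sketch of both approaches through psycopg2, against a hypothetical trials table with a description column (built-in full-text search landed in core Postgres with 8.3, i.e. after 2006):

    # Hypothetical table and DSN; psycopg2 against Postgres.
    import psycopg2

    conn = psycopg2.connect("dbname=trials_db")  # placeholder DSN
    cur = conn.cursor()

    # The 2006-era option: leading-wildcard LIKE. A plain B-tree index
    # cannot help here, so this is a sequential scan.
    cur.execute("SELECT id FROM trials WHERE description LIKE %s",
                ("%aspirin%",))

    # Real full-text search: a GIN index over the tsvector turns the
    # same lookup into an index search with stemming.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS trials_fts
            ON trials USING GIN (to_tsvector('english', description))
    """)
    cur.execute("""
        SELECT id FROM trials
        WHERE to_tsvector('english', description)
              @@ plainto_tsquery('english', %s)
    """, ("aspirin",))
    print(cur.fetchall())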
And 20 years ago people were making the exact same kinds of comments and everyone had the same reaction: yeah, MySQL has been putting numbers up like that for a decade.
I am an Oracle DBA, and XE can be used for free. It includes PL/SQL, the reference implementation of SQL/PSM. I know how to set up a physical standby, and I otherwise know how to run it.
That being said, Oracle Database SE2 is $17,500 per core pair on x86, and Enterprise is $47,500 per core pair (so a 32-core x86 server lists at 16 × $47,500 = $760,000 for Enterprise). XE has hard limits on database size and on active CPUs. XE also does not get patches; if there is a critical vulnerability, it might be years before an upgrade is released.
Nobody should deploy Oracle Database for new systems. You only keep it running where the cost is already sunk.
Postgres itself has a manual that is 1,500 pages. There is a LOT to learn to run it well, comparable to Oracle.
For simple things, SQLite is fine. I use it as my secrets manager.
Postgres requires a lot of reading to do the fancy things.
Postgres has a large manual not because it's overly complex to do simple things, but because it is one of the best documented and most well-written tools around, period. Every time I've had occasion to browse the manual in the last 20 years it's impressed me.
I read Jason Couchman's book for Oracle 8i certification, and passed the five exams.
That book left much out: so many important things that I only learned later, as I watched harmful things happen.
The very biggest is "nologging", the ability to commit certain transactions without writing them into the archived redo logs used for recovery.
"You are destroying my standby database! Kyte is explicit that 'nologging' must never be used without the cooperation of the DBA! Why are you destroying the standby?"
It was SSIS, and they could never get it under control. ALTER DATABASE FORCE LOGGING undid their ignorant presumption.
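For anyone who hasn't fought this: a minimal sketch of that fix via the python-oracledb driver, with placeholder connection details (it needs an account with the ALTER DATABASE privilege):

    import oracledb

    # Placeholder credentials and DSN; requires a DBA-level account.
    conn = oracledb.connect(user="system", password="change_me",
                            dsn="dbhost/ORCLPDB1")
    cur = conn.cursor()

    # 'NO' here means NOLOGGING operations can skip redo generation,
    # silently leaving a physical standby unrecoverable.
    cur.execute("SELECT force_logging FROM v$database")
    print(cur.fetchone())

    # Database-wide override: NOLOGGING operations are forced to
    # generate redo, so the standby keeps applying a complete stream.
    cur.execute("ALTER DATABASE FORCE LOGGING")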
> XE has hard limits on size and limits on active CPUs
So no, it can't really be used for free; you pay with your time to keep it in a usable state. We had to use it for our dev DBs because our customers used it, and the size limit was a huge PITA. We mostly use MSSQL as the dev DB now, because the other half of our customers use that (we have to support both in the end). It is way worse in every other way, but there is no size limit for dev use.
My perspective might be equally naive as I've rarely had contact with databases in my professional life, but 100ms sounds like an absolutely mental timeframe (in a bad way)
OP here. Roughly 50 GB in DB size. Fairly standard queries (full-text search + filters). Most queries are on the order of 10-100ms. Some more complex ones involving business logic exceed 100ms.
This is well within my budget, but it sounds like there might be room for improvements?
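One way to hunt for that headroom is to let Postgres explain the slow queries. A sketch with made-up table and column names:

    import psycopg2

    conn = psycopg2.connect("dbname=trials_db")  # placeholder DSN
    cur = conn.cursor()

    # EXPLAIN (ANALYZE, BUFFERS) runs the query and reports the actual
    # plan; a "Seq Scan" on a big table usually means a missing index.
    cur.execute("""
        EXPLAIN (ANALYZE, BUFFERS)
        SELECT id FROM trials
        WHERE phase = %s
          AND to_tsvector('english', description)
              @@ plainto_tsquery('english', %s)
    """, (3, "aspirin"))
    for (line,) in cur.fetchall():
        print(line)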
Right now that’s not the case. Satellites just carry whatever fuel they need for orbital adjustments, and by default they fall back to Earth and burn up at the end of their life. All the Starlink satellites are configured to fall back to Earth within 5 years (the fuel is used to re-raise their orbit against drag). The new proposed datacenters would sit in a higher orbit to avoid debris, allegedly, but that means it is even more expensive to reach and refuel them, and the potential for future debris is far worse, since anything left there wouldn’t fall back to Earth and burn up for centuries or millennia.
I feel like there's this weird information gap that shouldn't be there in this day and age. FSD drives me around every day. 99% of the time, I don't touch the steering wheel for the entire trip, both city and highway. It's obvious to me that it is working well and has tremendous value. As far as I know, no other car on the market can do this. But apparently this info is not reaching a large number of people. I'm genuinely confused.