There’s a thread about how emergency physicians are paid. It varies from group to group:
Physicians can be salaried and receive benefits from their group or hospital
Physicians can be 100% productivity-based, meaning they are paid only by the number of patients they treat, but they receive no other benefits from the group or hospital
In between these two extremes, there is a wide variety of compensation packages that are too complicated to discuss in this comment.
Nonetheless, the overriding factor for all emergency physicians is that we triage patients continuously, not only at intake but internally as well, including those who reside in the treatment rooms and those out in the waiting room.
The question is, can we see fewer patients and spend more time with them? The answer is yes, but to the detriment of the entire department, and possibly of a sick patient who hasn't been seen yet. You have to be able to tell who you can spend five minutes with and who needs 30.
Throughput is king, but quality is queen, so there's always a trade-off between seeing patients fast enough to get enough of them through your shift and having the wherewithal to determine which patients will require more time and more due diligence.
Every shift is a push and pull between these two priorities. It's never easy, and there are multiple decisions that have to be made.
The ED is specifically attuned to these presentations, but the sepsis alerts and algorithms in place are horrendous and will fire off even for those with viral illnesses and syndromes.
Sepsis alerts are meant to find bacteremia in patients who present with a set of vital signs and laboratory findings indicative of it, and even those definitions are not universally agreed upon.
The ED is highly accurate with its diagnosis and treatments despite everything that has been said.
Trying to find a zebra in the hoof beats of horses when the number of patients quickly outstrips your department's capabilities is a fool's errand, because the workup required will overwhelm throughput to the point that the delay in care will put other patients at risk.
There is a fine line between doing enough and doing so much that it grinds your department to a halt and backs up your waiting room.
Unfortunately for this patient, his occult condition didn't manifest within his two ED visits, and we don't have the prognostic capability to tell who will and won't decompensate. We all make value judgements and treat the patients in front of us.
> Trying to find a zebra in the hoof beats of horses when the number of patients quickly outstrips your department's capabilities is a fool's errand, because the workup required will overwhelm throughput to the point that the delay in care will put other patients at risk.
This is where I hope (as a non-physician) that AI, used carefully, should actually be able to help. A well-designed ML system should have a decent chance at distinguishing a zebra from a horse because it has read absolutely everything, has perfect recall of that corpus, and has some ability to contextualize the knowledge it has to the situation at hand. I suspect that a good proportion of ER doctors already have those characteristics, but surely not all of them do, and surely not consistently.
An AI-assisted system will still false-positive, because the computer is still just a tool, tools are never perfect, and designers of safety systems tend to err on the side of false positives. However, a thoughtful pop-up that displays when the situation really may warrant it is surely more helpful to a physician than one that cries wolf constantly?
My optimism assumes that there's fundamentally enough information available to make the diagnosis, however. If you're actually saying that finding the zebra would require gathering so much more information for each patient that it would lower overall outcomes for all patients, then I guess we're stuck.
Perhaps, but it should be noted that sepsis detection is one of the places where ML has been applied for a very long time. A lot of these popups are built off those models.
I have been in the informatics space for some time and none of these ML generated sepsis alerts are helpful in the ED.
We can readily tell who has sepsis when confronted with patient appearance, vital signs, and workup.
The issue is trying to find a signal in all of this data where they have an occult condition that we are not yet observing. These folks get sick real bad and real quick and we often do miss this!
Trying to find that signal with our current medical technology is difficult but there are some immune markers that could potentially alert us. These tests are now coming online and should be prevalent within the next few years. Hopefully, more research will show whether they are helpful or not.
Sometimes, even the best tests or calculators or ML generated alerts do not measure up to physician gestalt.
Exactly. Look at Wikipedia for a counter-example, showing how to do things right. Sure, Wikipedia is frequently asking for donations, but in reality they have a large endowment and can go without new funding for ages, though they'd have to cut back on the unimportant stuff.
Mozilla could have done the same thing if they had a little foresight and didn't just assume the piles of cash would keep flowing in forever.
Firefox OS was pretty cool, and had it succeeded (which it might well have if they'd stuck with the plan of targeting the third world), Mozilla would be in a much stronger position to influence web standards and generate revenue nowadays.
You've created a beautiful alternative timeline. One that posits that the Firefox team had the technical chops to compete. Money isn't a panacea: Microsoft clearly threw a lot at the Edge redevelopment and that team apparently failed. Also the Chromium team are 10x superstars in my eyes.
The glimmers of technical hope within Gecko like Servo and Rust were chopped.
> but no, they blew it on obvious boondoggles (like Firefox OS)
I agree they wasted money on a heap of useless things (as we see plenty of failed companies do - like Borland).
But I thought Firefox OS was one of the most promising ideas. I'd still like to see a vendor try and produce a phone that supported browser-tech for Apps - the idea is clearly valuable for Apps delivered using WebViews on iPhone and Android.
Personally I think the rot is throughout the org, and blaming one section is pointless: it wasn't just management or tech. Watching previously successful corporations die has a similar feel to me.
I don’t like relative risk and relative risk reduction because it tends to overestimate the effectiveness of the intervention.
In this case, the absolute risk when measuring for death in the GIM pre-intervention and GIM post-intervention are 0.0215 (2.15%) and 0.0146 (1.46%) with an absolute risk reduction of 0.0069 (.69%).
While the relative risk reduction is 26% across the pre- and post-intervention groups, the absolute risk reduction is only 0.69%, with an NNT (number needed to treat) of 156, which means that 1 patient in 156 was helped by this intervention.
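The arithmetic above can be sanity-checked from the quoted unadjusted rates. Note that the headline 26% and NNT of 156 may come from the study's adjusted models; the crude numbers below differ slightly.

```python
# Sanity check on the risk statistics, using the unadjusted death rates
# quoted above (2.15% pre-intervention, 1.46% post-intervention).

pre = 0.0215   # death rate, pre-intervention
post = 0.0146  # death rate, post-intervention

arr = pre - post   # absolute risk reduction
rrr = arr / pre    # relative risk reduction
nnt = 1 / arr      # number needed to treat

print(f"ARR: {arr:.2%}")  # 0.69%
print(f"RRR: {rrr:.1%}")  # ~32.1% unadjusted
print(f"NNT: {nnt:.0f}")  # ~145 unadjusted
```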
In addition, they had 2 false alarms for each true alarm, which suggests that interventions were performed on patients who did not require them: more tests, more medications, and possibly increased risk from said interventions.
This shows that the CHARTwatch ML/AI is not helping all that much clinically.
I like this analysis, although I come to a different conclusion: if AI can give early warning to nursing staff, telling them "look closer", and over 1/3 of the time it was right, that seems great. Right now in a 30-bed unit, nurses have to keep track of 30 sets of data. With this, they could focus in on 3 sets when an alarm goes off. I believe these systems will get better over time as well. But, as a patient, I'd 100% take a ward with that early AI warning and a 66% chance of false positives over one with no such tech. Wouldn't you?
I would not. High false alarm rates are a problem in all sorts of industry when it comes to warnings and alerts. Too many alerts, or too many false positive alerts cause operators (or nurses in this example) to start ignoring such warnings.
This is the real problem. In a perfect world, everyone pays attention to alarms with the same attentiveness all the time. But it just isn't reality. Before going into building software, I was in the Navy and after that did work as a chemical system tech. In the Navy, I worked in JP-5 pumprooms. In both environments we had alarms, and in both environments we learned which were nuisance alarms and which weren't, or just took alarms with a grain of salt and therefore never paid proper attention to them.
That is always the issue with alarms. You have a fine line to walk. Too many alarms and people become complacent and learn to ignore alarms. Too few alarms and you don't draw the attention that is needed.
More data with appropriate confidence intervals can always be leveraged for good. I hear this application often in medical systems and recognize the practical impact. The problem is incorrect use of this knowledge (e.g. to overtreat), not having the knowledge.
No, the problem is information overload. Even without these errors nurses are often overburdened with work and paperwork. Adding another alarm, with a >50% false positive rate is going to make that situation worse. And the nurses will start ignoring the unreliable warning.
I suspect we are on the same page. My point is in regards to using information as described in the article to improve the system. I do not think an on/off "alarm" is the way to do this. The key is to use information from signal processing theory (eg how a Kalman filter updates) to provide input into what medical action to take. The reactions against more diagnostics etc is due to how they are applied, like a brute force alarm, leading to worse outcomes through, for example, unnecessary surgeries etc.
The reduction I am arguing against is: "Historically, extra information and diagnostics that have an error margin results in worse outcomes because we misapply it; therefore don't build these systems."
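The "Kalman filter update" idea mentioned above can be sketched in one dimension. Everything here (the risk-score framing, all the numbers) is hypothetical; the point is only to show graded belief-updating instead of an on/off alarm.

```python
# Minimal 1-D Kalman-style update: fuse a prior belief with a noisy
# measurement, yielding a continuous estimate plus an uncertainty,
# rather than a binary alarm. All numbers are hypothetical.

def kalman_update(mean, var, meas, meas_var):
    """Combine a prior (mean, var) with a measurement (meas, meas_var)."""
    k = var / (var + meas_var)           # gain: how much to trust the measurement
    new_mean = mean + k * (meas - mean)  # posterior estimate
    new_var = (1 - k) * var              # posterior uncertainty shrinks
    return new_mean, new_var

# Hypothetical prior belief about a patient's risk score, then two readings.
mean, var = 0.2, 0.05
for meas in (0.35, 0.4):
    mean, var = kalman_update(mean, var, meas, meas_var=0.1)

# The estimate drifts toward the readings while uncertainty decreases.
print(mean, var)
```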
At work, we had an appliance which went into failsafe on average 8 times per day. The failsafe is meant to remove power from a device-under-test in case of something like fire in the DUT. The few actual critical failures were not detected by the appliance.
Instead, the failsafe has the effect of merely invalidating the current test, and making the appliance unable to run a test correctly until either power cycled or the appliance's developer executes a secret series of commands that are not shared with us.
So of course an operator of the appliance found a way to feed in a false "I'm here!" with a loop, to trick the appliance into never going into failsafe…
That comes to ~6.8% of all tests being false positives, ~93.2% true negatives, and ~3 tests that should have triggered the failsafe but did not.
Sorry, I meant to say that with only 6.8% of all tests triggering a false alarm (and 0% true alarm), a test operator still found a way to prevent the alarm from occurring rather than being kept on their toes.
I'm not. If you have three alerts a day, a 33% chance of true positive per alert means you'll get an alert pointing to a problem at least once per day.
That's enough to anchor "alert == I might find a problem" in the user's mind.
No, many people working in clinical units wouldn't. Because of what might happen on false alarms. What GP said: more meds, more interventions. It's not clear at all whether such systems would help with current workflows and current technology. One of the most famous books about medicine says that good medicine is doing nothing as much as possible. It's still very true in 2024, and probably for a long time still.
I like this analysis, although I come to a different conclusion: if AI can allow nurses to manage 10x as many beds (30 vs 3), a hospital can now let go 90% of its nursing staff. Wouldn’t you?
Generally speaking, they aren’t short staffed because there aren’t enough nurses, but because they can’t/won’t pay them enough. Those same hospitals hire large numbers of travel nurses to supplement their “short staff” at pay rates double or triple a local nurse.
And the nurses who want decent pay and can do travel nursing, do travel nursing.
Would you suffer serious nonlethal complications from a false alarm to (maybe) save a room neighbour you've never met? This wouldn't be captured.
You can't automate it. You have to look at the data and charts to figure out the specifics you want and then you plug and chug. I haven't looked deeply at this though but whenever researchers use relative risk and it shows a profound effect, I always calculate the absolute risk to make sure that the intervention is effective.
Many researchers go to relative risk because it shows better results!
That’s a good point, a similar conversation was had around the Covid jabs, with some research re vaccine mandates concluding “heads of governments, schools, healthcare facilities, and private businesses (were) misled by the vaccines’ reported 95% relative risk reduction”
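To make the relative-vs-absolute distinction concrete, here is a sketch with purely hypothetical numbers (not the actual trial data): a 95% relative risk reduction can coexist with a small absolute risk reduction whenever the baseline event rate is low.

```python
# Hypothetical illustration only: 1.0% baseline event rate,
# 95% relative risk reduction.
baseline = 0.010                  # hypothetical event rate without treatment
treated = baseline * (1 - 0.95)   # event rate after a 95% relative reduction

arr = baseline - treated  # absolute risk reduction
nnt = 1 / arr             # number needed to treat

print(f"ARR: {arr:.2%}")  # 0.95%
print(f"NNT: {nnt:.0f}")  # ~105
```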
Are you seriously asking residents to subsidize their own training working 60-80 hours a week [1] and being paid maybe $20-25/hr for 3-7 years while the hospitals make money from their services?
If that were the case, then the only ones who would put up with such a system would be those who have rich families to back them up.
I came from a blue collar background and my family could not help subsidize my training and I could not afford to live, make rent, or afford food if I had to pay for my training as well.
There are so many other things that you are completely glossing over including annual income between generalists vs specialists, pediatricians vs non-pediatricians, reimbursement inequality from Medicare (has not kept up with inflation and is ~40% below inflation), Medicaid pays at best 10-15% on the dollar, and so on.
American physicians also work almost twice as many hours as their European counterparts, which would increase their incomes.
[1] it’s considerably more because if you report it then the program can be in danger and lose its credentialing
> American physicians also work almost twice as many hours as their European counterparts, which would increase their incomes.
I’d be curious as to where you got that. I was personally surprised to discover that in Germany, doctors in university hospitals can work up to 80 hours a week.
US doctors are among the richest people in the entire world, have an involuntary unemployment rate that rounds to zero, can live wherever they like without serious impact on pay and yet somehow are also the most oppressed and put upon if you ask them. Go figure it out.
It’s inappropriate to link one’s dissatisfaction with one’s income.
There are multiple reasons why US doctors are unhappy about their profession. I’m not sure where you get oppression as being the central complaint.
I was an engineer before going to medical school and have an outside perspective as well.
I have complaints with medicine as to how it has changed over the last 10 years and not for the best. I have the right to complain to make things better for my patients and my work-life balance. That doesn’t mean that I am oppressed.
You have the right to complain and I have the right to point out that your industry, along with the housing and education industries, has stolen the entire productivity gains of a generation of Americans. I’m not concerned with your work life balance.
Your complaint should be directed towards the hospitals, for-profit insurance companies, and others who profit massively on the backs of patients.
My "industry" is only myself as I am beholden to my patients. I work and am paid by each patient who I treat and receive reimbursement by 50% of them at best.
If you don’t care about my work-life balance and lump me in with the rest of the healthcare system, then there’s nothing much to discuss.
Doctors also massively profit off the backs of patients. They make far more and live a far more lavish lifestyle than doctors in other countries. I have multiple doctors in my family; they all work less than 30 hours a week and make more than 300k. And they're not even in the lucrative specialties. Most are family physicians, which I'm sure you know is the worst-paid kind of doctor.
Doctors are paid by the patients that they treat. As an emergency physician, I treat 20-25 patients per shift on average and I am paid by approximately half of them due to not having insurance or money to pay for my services. I am paid roughly $120 per patient. Is that excessive in your opinion?
I would say that doctors in other countries are not paid accordingly and are UNDER-paid. See the UK NHS strikes as an example.
What you should be complaining about is the excessive charges and reimbursements that hospitals receive for the care that they give. It gives me pause that despite having multiple physicians in your family, you do not understand the difficulty of their practice or work environment.
Most white collars jobs including doctors, engineers, and administrators make multiples of our European counterparts. I would say that Europe underpays its workers.
Again, I am a highly paid worker just like the rest of you. If I don’t work and treat patients then I am not paid anything whatsoever.
I would be much further ahead financially if I had stuck with being an engineer 18 years ago and not put off earning an income for 7 years while going into massive debt. This was a risk because I come from a blue collar, lower middle class background without a safety net.
Yes, $120 per patient is excessive. Yes, the rest of the medical industry is even more of an excessive rip-off. I don't think being a doctor is easy; I just think doctors are taking advantage of the government-limited supply of their profession to negotiate salaries that are much higher than they would be if we had enough doctors. Doctor salaries are 9% of medical costs in the US; you can't just brush that off as insignificant.
Health care cost disease in the US has little to do with doctor salaries or demand for doctors. Unfortunately so, as that would be a relatively easy problem to solve. The data I've seen indicates physician salaries adjusted for inflation are relatively flat or have even declined since the 1970s.
I agree you should be paid properly. But I don't understand why, if the US has a privatised healthcare system, hospitals are not paying to train their own labour?
US hospitals are almost all nominally non-profit. In practice, they are partnerships run for the economic benefit of controlling employees, but they scam the tax code by pretending otherwise.
They do to a degree but most of the money comes from the government to subsidize costs. I’m not privy to the amount or percentage but they do cry about it when the residency union (CIR) asks for inflation adjusted salary increases.
> subsidize their own training working 60-80 hours a week [1] and being paid maybe $20-25/hr for 3-7 years while the hospitals make money from their services
That sounds like an apprenticeship. Many other industries do exactly that. Why should doctors be different?
But why must medical residency require 60 to 80 hours per week of work? AFAICT, it's because residencies are competitive, because there aren't enough of them, because nobody wants to pay for them. (But you can't make them unpaid, because, by circular logic, they require 60 to 80 hours per week.)
As an outsider, the whole system seems completely bizarre to me.
> [AMA president] Dr. Richard Corlin has called for re-evaluation of the training process, declaring "We need to take a look again at the issue of why the resident is there."
Martin Makary’s study and the previous IOM one are based on faulty statistics: the number is extrapolated from a small population to a larger one.
I haven’t paid it any attention because of this problem. GIGO.
Give me a few minutes and I can pull up any number of medical studies or references to back up my claims.
I don’t have them memorized to the actual URL but I have kept up to date with the latest studies and summaries that pertain to my field and my patients.
The study performed by AHRQ used incorrect methodologies and datasets to extrapolate its findings.
The ED is far more accurate with a much lower error rate than the study found.