Hacker News | nmitchko's comments

Can someone make a startup that allows me to do this as an individual?

Join Bluesky and you too can lie about whatever you want.

It's called a "farm" (note the quotes). You may need a few acres of very cheap rural land, and some chickens. The IRS loves chickens.


Join that startup as a founder, have a million+ exit and you will have the capability to do this as an individual.

Don't be poor, got it.

Good life advice in general really.

Don't know why so many people are so stupid they don't follow such simple and sensible advice. /s

Effective exit tax rate is around 24%.

You don't need a startup. Millions of people have an effective tax rate that is 0% and they have a net tax rate that is negative. They do this simply by having no meaningful skills or knowledge.

Individual Meta employees and shareholders couldn't do this either.

But they can. Any Meta employee or shareholder is also free to go on Bluesky and tell lies about taxes.

You can make stuff up even on this site.

In case anyone wants to do this themselves, check out the pipeline here: https://github.com/isc-nmitchko/iris-document-search

Colnomic and nvidia models are great for embedding images and MUVERA can transform those to 1D vectors.
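For anyone curious how a multi-vector image embedding gets collapsed into a single searchable vector, here is a rough NumPy sketch of the core MUVERA idea (my own simplification of the fixed-dimensional-encoding step, not the actual library API; real MUVERA adds projections and repetitions on top of this):

```python
import numpy as np

def muvera_fde(token_embeddings, n_planes=4, seed=0):
    """Collapse a variable-length set of token/patch embeddings (n_tokens, d)
    into one fixed-length vector: hash each embedding into one of 2**n_planes
    buckets via random hyperplanes (SimHash), sum the embeddings in each
    bucket, and concatenate the bucket sums."""
    rng = np.random.default_rng(seed)
    d = token_embeddings.shape[1]
    planes = rng.standard_normal((n_planes, d))
    # Bucket id = bit pattern of which side of each hyperplane the token falls on
    bits = (token_embeddings @ planes.T) > 0                  # (n_tokens, n_planes)
    buckets = bits.astype(np.int64) @ (1 << np.arange(n_planes))
    fde = np.zeros((2 ** n_planes, d))
    for b, vec in zip(buckets, token_embeddings):
        fde[b] += vec
    return fde.reshape(-1)                                    # length 2**n_planes * d

# e.g. 37 patch embeddings of dimension 8 -> one vector of length 16 * 8 = 128
emb = np.random.default_rng(1).standard_normal((37, 8))
vec = muvera_fde(emb)
print(vec.shape)  # (128,)
```

The point is that the output length depends only on the bucket count and embedding dimension, never on how many patches the image produced, so standard 1D vector indexes work on it.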


> check out the pipeline here

“the pipeline” - seems like this is just a personal hackathon project?

Why these models vs other multimodals? Which “nvidia models”?


Next steps for AI in general:

  - additional modalities
  - Faster FPS (inferences per second)
  - Reaction time tuning (latency vs quality tradeoff) for visual and audio inputs/outputs
  - built-in planning modules in the architecture (think premotor frontal lobe)
  - time awareness during inference (towards an always inferring / always learning architecture)


Interesting they don't compare to open-bio. Page 7 charts are quite weak.

https://huggingface.co/aaditya/Llama3-OpenBioLLM-70B


Steve here, one of the co-authors. Totally valid on OpenBio. I will say that comparison numbers for this paper were such a challenge, in part because we found that a lot of the LLMs on the Medical LLM leaderboard struggled to follow even slight changes in instructions. On one hand it felt inaccurate to just print '[something very low]% Accuracy' on structuring/abstraction tasks and call it a day, but it also seemed like the amount of engineering effort needed to get non-trivial results from those LLMs was saying something important about how they worked.

I think that's especially true when you look at how well GPT-4o worked out of the box -- it makes clear what you get from the battle-hardening that's done to the big commercial models. For the numbers we did include, the thought was that the most meaningful signal was that going from 8B to 70B with Llama3 actually gives you a lot in terms of mitigating that brittleness. That goes a step towards explaining the story of what we're seeing, more so than showing a bunch of comparison LLMs fall over out of the box.

In the end, we presented those models that did best with light tuning and optimization (say a week's worth of iteration or so). I anticipate that we'll have to expand these results to include OpenBio as we work through the conference reviewer gauntlet. Any others you think we definitely should work to include? Would definitely be helpful!
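The brittleness described above (models failing on slight instruction rewording) is easy to quantify. A hypothetical sketch, with made-up names and a toy stand-in for the model, just to show the shape of the check:

```python
def instruction_robustness(model, prompt_variants, expected):
    """Score the same task under differently worded instructions and report
    per-variant accuracy. `model` is any callable prompt -> answer; all names
    here are illustrative, not from the paper's actual harness."""
    results = {}
    for name, prompts in prompt_variants.items():
        correct = sum(model(p) == gold for p, gold in zip(prompts, expected))
        results[name] = correct / len(prompts)
    return results

# Toy model that only handles the exact original phrasing -- maximally brittle
toy = lambda p: "positive" if p.startswith("Classify") else "unknown"
variants = {
    "original": ["Classify the note: ...", "Classify the note: ..."],
    "reworded": ["Please label this note: ...", "Please label this note: ..."],
}
print(instruction_robustness(toy, variants, ["positive", "positive"]))
# {'original': 1.0, 'reworded': 0.0}
```

A large gap between variants is the kind of signal that made the '[something very low]% Accuracy' numbers feel misleading to print on their own.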


No other public models worth comparing to... Hippocratic advertises good benchmarks but that might be marketing fluff.

Have you checked out dataset building with nemotron? The nemotron synthetic data builder is quite powerful.

Moreso, check out model merging. It's possible if you merge some of your model against llama3.1 base it may perform much better.

Check out Maxime Labonne's work on Hugging Face.


We're excited to share pitchpilot with the HN community. Our beta users have found the embedded audio particularly useful for enterprise sharing. We're keen to keep improving, and our mission is to make communication easier.

In the roadmap is adding video export, digital twin presentations, and real-time presentations. We don't wrap a public LLM, so we don't share any data.


Given that Generative AI can now read brain scans [1] and this, I wonder how far away we are from "you thought negatively about something, the authorities are on their way".

[1] -- https://www.biorxiv.org/content/10.1101/2022.11.18.517004v3


Well we’re not infinitely far away from it, which is why we need to build political and legal systems that can respect human dignity even in the presence of such technologies.

Be sure to vote :)


Who is going to build these? The same people that build systems expressly to avoid accountability?


The EU will want to scan your brain...for the children....


Tin-foil hat time:

1. First, models will predict pollution. The outcomes will help shape urban policy. But these won't solve crime or stop people from driving.

2. Second, models will predict individual behavior and track person level emissions. The outcomes will force behavior changes, mostly freedom limiting.

3. Third, and finally, models will predict thoughts. The thought of driving instead of walking might trigger a response.

It's a slippery slope and we need to draw a line between prediction and policy.


That is some heavy-duty foil in your hat there.

Even allowing for the ridiculously massive technical leap from 1 to 2 and then 2 to 3, it doesn't make much sense.

For one thing, if states are determined to enforce individual emissions limits, they can do it today with legislation. You don't need a predictive model. What does the model add?

Also, the only difference between 2 and 3 is whether a person acts on a thought.

So are you suggesting with #3 that predicted thoughts (e.g. not literal mind reading) which a person doesn't act upon will prompt state action?


Why is it that freedom is always tied to the right to pollute as much as possible, as opposed to the right to live in a world with low pollution?


Using the unqualified word "freedom" has an ambiguity that political actors exploit. Freedom to do something is entirely separate from "free to live in a world where ___".

To be honest, I feel the latter sense of the word is a bit of a stretch - semantically, not politically.

But you see it because "freedom" is a powerful word in politics, and rather than argue against "freedom", pundits go up the ladder of abstraction and argue the definition instead.


Sorry, that question was rhetorical to point out the silliness of equating driving to freedom.


Ah. Well I hope my answer was useful for anyone who didn't take it as rhetorical!


Indeed, this is another thing pushing us towards dystopia. Now it's "climate change". Previously it was drugs and terrorism.


How does this compare to ehealthexchange or other qhins that have many years of experience and charge lower costs?


> How does this compare to ehealthexchange

Good question! eHealth Exchange (eHEX) is one of 3 national HIEs that we connect to (currently through Carequality). eHEX is mainly focused on connecting to state-level regional HIEs, which cover a different portion of providers than CommonWell or Carequality do.

For example, Cerner is a major EHR vendor (used by the VA and others) whose data can only be accessed through CommonWell, since they don't participate in other HIEs.

> that have many years of experience

Modern HIEs are a relatively new concept (Carequality was founded in 2014) - so extra years of experience doesn't necessarily add any value, and usually just results in more legacy tech to deal with!

> charge lower costs?

This isn't necessarily true - since you brought up eHEX, see their pricing page: https://ehealthexchange.org/pricing-payers-vendors-and-for-p...

TL;DR just to get started it's going to cost you $20k + some months to integrate, $12.5k/yr as the base membership fee (up to $400k if you make a lot of money!), and they charge a per-query price.

The caveat here is that a "query" in eHEX isn't what a query is in Metriport. They literally mean every single query (remember the HTTP requests to thousands of endpoints to find patient records - each one of those would be a query). So, if you want to integrate with eHEX only to get limited, messy C-CDA data, then you're looking at paying ~$0.80 per full record retrieval for a patient with 2k documents.
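Working backwards from the numbers above (the ~$0.80 figure and 2k documents are from the comment; the per-query price is just the implied division, not a published rate):

```python
# Implied per-query price if every document fetch bills as one query
docs_per_patient = 2_000
cost_per_record = 0.80  # ~$0.80 per full record retrieval, per the comment above

per_query = cost_per_record / docs_per_patient
print(f"implied per-query price: ${per_query:.4f}")  # $0.0004

# First-year floor, before any queries: setup + base membership
first_year_fixed = 20_000 + 12_500
print(f"first-year fixed costs: ${first_year_fixed:,}")  # $32,500
```

The per-query price looks tiny in isolation; the record-discovery fan-out is what multiplies it into real money.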


It truly feels like the space race in terms of building LLMs right now. Question is, who lands on the moon first?


I don't think the moon's real.

I think we've largely arrived in terms of capabilities and companies are just competing to work out the kinks and fully integrate their products. There will be some new innovations, but nothing like the moon that caps off "you've won". The winner(s) will just be whoever can keep funding long enough to find a profitable use for them.


Where's the moon? Do you mean like AGI?


It seems to me like the moon is "chatbots which are somewhat convincing" and everybody is landing there in OpenAI's wake. The real problem is Mars - make a computer which can learn as quickly and reason as deeply as, say, a stingray or another somewhat intelligent fish[1].

[1] This task seems far beyond the capability of any transformer ANN absent extensive task-specific training, and it cannot be reasonably explained by stingray instinct: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8971382/


This is true in more ways than one. My question is – what happens once we do land on the moon? Will we become a spacefaring civilization in the decades to come, or will the whole thing just...fizzle out?


Is there any indication that we're converging to AGI instead of to some asymptote that lies far away from it?


I don't think a pure language model of the sort under consideration here is heading towards AGI. I use language models extensively and the more I use them the more I tend to see them as information retrieval systems whose surprising utility derives from a combination of a lot of data and the ability to produce language. Sometimes patterns in language are sufficient to do some rudimentary reasoning, but even GPT-4, if pushed beyond simple patternish reasoning and its training data, reveals very quickly that it doesn't really understand anything.

I admit, it's hard to use these tools every day and continue to be skeptical about AGI being around the corner. But I feel fairly confident that pure language models like this will not get there.


This reminds me of the scene in The Matrix where they look at the encrypted thoughts of the matrix.

https://cdn.swisscows.com/image?url=https%3A%2F%2Fi.pinimg.c...

