
* How much stress?
* How many hours?


Braden Health | Boise (ONSITE) | Full-time

We are writing and maintaining health software for rural hospitals in Tennessee and Virginia. Small team, products in active development.

We are looking for a full-stack developer, comfortable with both backend and frontend work.

Tech we use: C#, ASP.NET, TypeScript, React, SQL Server. Nice to have: knowledge of HL7 and general familiarity with how hospitals work.

Email resume to [email protected]


AI is somehow supposed to generate $1T a year by 2028? From where? Never mind the lack of electric grid capacity to keep all of this running.


OpenAI's ChatGPT alone hit 500 million weekly active users in March; apparently they're closer to 800 million now. I guess they're still working out the monetization strategy, but in the worst case, just think of how Google makes its revenue off search.


Each ChatGPT query costs orders of magnitude more than a Google search. I can’t say for sure how many orders, but I suspect more than a few.


The first one does; then prompt caching kicks in. It turns out many people ask similar questions. People who frequently ask complicated questions might have to pay extra; we can already see this playing out.


That’s not what prompt caching is.

Also, most ChatGPT users have their “personalization” prefix in the system prompt (which contains things like date/time), which would break caching of the actual user query.


The prompt has to be precisely the same for that to work (and of course now you need an embedding hashmap, which is its own somewhat advanced problem). I doubt they do that, especially given the things I've heard from API users.
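
For illustration, here is a minimal Python sketch of why exact-match caching keyed on the full prompt stops helping once each user's personalized system prompt is prepended. It is entirely hypothetical, not anything OpenAI has described:

    # Minimal sketch, purely hypothetical -- not how OpenAI actually serves ChatGPT.
    # It only illustrates why exact-match response caching stops helping once a
    # per-user personalization prefix is prepended to otherwise identical queries.
    import hashlib

    cache: dict[str, str] = {}

    def cache_key(system_prompt: str, user_query: str) -> str:
        # The key covers the *whole* prompt, so any difference in the system
        # prompt (user name, custom instructions, date/time) produces a new key.
        return hashlib.sha256(f"{system_prompt}\n{user_query}".encode()).hexdigest()

    def answer(system_prompt: str, user_query: str) -> str:
        key = cache_key(system_prompt, user_query)
        if key in cache:
            return cache[key]  # cache hit: the expensive model call is skipped
        result = f"<model output for: {user_query}>"  # stand-in for a real model call
        cache[key] = result
        return result

    # The same question from two users with different personalization prefixes:
    answer("You are ChatGPT. User: Alice. Date: 2025-06-01.", "How do magnets work?")
    answer("You are ChatGPT. User: Bob. Date: 2025-06-02.", "How do magnets work?")
    print(len(cache))  # 2 -> no cache sharing despite identical user questions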


You realise Google is now plugging Gemini into all their queries and giving you summaries and such.

So maybe not so much anymore? That would be true if it were -pure- search on Google's part, but it isn't anymore.


> Google is now plugging Gemini into all their queries

Definitely not all of them. I haven't figured out what the differentiator is, but many queries are excluded.


I read somewhere that adding -nsfw- or similar words to the query reliably made it go away, funnily enough.


The delta might not be that large these days, with the AI suggestions that Google is placing on search result pages.


Because they have their own hardware.


Say: 3 billion users, 20% convert to paying customers (600M) at $20/mo; that's a combined $144B a year. Nowhere near a $1T reality.
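
As a back-of-envelope check in Python (every number is the assumption from the comment above, not a real OpenAI figure):

    # Back-of-envelope check; every input is an assumption, not a real figure.
    users = 3_000_000_000      # assumed total users
    conversion = 0.20          # assumed share who become paying customers
    price_per_month = 20       # assumed subscription price in $/month
    annual_revenue = users * conversion * price_per_month * 12
    print(f"${annual_revenue / 1e9:.0f}B per year")  # -> $144B per year, far short of $1T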


I'm curious why they only publish weekly active users. Isn't it usually monthly active users?


In the recent Sam Altman interview, he said the plan should be to keep burning fossil fuels to power the data centers running AI, because that’s the path to fusion. Just like LLMs can help devs code 100x faster, they can do that for nuclear engineers too.


Fusion seems short-sighted though. Antimatter is 100% efficient. I personally think Sam Altman should be looking into something like an Infinite Improbability Drive, as it would be a better fit here.


Sounds like he is sucking up to the Dear Leader and his sponsors again.


The pro-singularity/AGI people genuinely seem to believe that takeoff is going to happen within the next decade, so they should get a pass on the "haha they're saying that because they want to pander to Trump" accusations.


> The pro-singularity/AGI people genuinely seem to believe that takeoff is going to happen within the next decade

I'm as anti-AI as it can get - it has its uses, but it is still fundamentally built on outright sharting on all kinds of ethics, and that's just the training phase - the actual usage is filled with even more snake-oil salesmen and fraudsters, and that's not to speak of all the jobs for humans that are going to be irreversibly replaced by AI.

But I think the AGI people are actually correct in their assumption - somewhen the next 10-20 years, the AGI milestone will be hit. Most probably not on LLM basis, but it will hit. And societies are absolutely not prepared to deal with the fallout, quite the contrary - particularly the current US administration is throwing us all in front of the multibillionaire wolves.


> somewhen the next 10-20 years, the AGI milestone will be hit

You seem quite confident for a person who doesn't offer any arguments on why it would happen at all, and why within two decades specifically, especially if you claim it won't be LLM-based.

Second, if AGI means that ChatGPT doesn't hallucinate and has a practically infinite context window, that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention. We'll adapt just like we adapted to using LLMs.


> You seem quite confident for a person who doesn't offer any arguments on why it would happen at all, and why within two decades specifically, especially if you claim it won't be LLM-based.

Sooner rather than later, IMHO, the sheer amount of global compute capacity available will be enough to achieve that task. Brute force, basically. It doesn't take much imagination other than looking at how exponential curves work.

> that's good for humanity but I fail to see any of the usual terrible things happening like the "fallout" you mention.

A decent-enough AI, especially an AGI, will displace a lot of white-collar workers - creatives are already getting hit hard, and that is with AI still not being able to paint realistic fingers - and the typical "paper pusher" jobs will also be replaced by AI. In the "meatspace", i.e. robots doing tasks that are _for now_ not achievable by machines (say because the haptic feedback is lacking), there has been pretty impressive research over the last few years. So a lot of blue-collar/trades jobs are going to go away as well once the mechanical bodies are linked up to an AI control system.

> We'll adapt just like we adapted to using LLMs.

Yeah, we just gave the finger to those affected. That's not adaptation, that's leaving people to be eaten by the wolves.

We're fast heading for a select few megacorporations holding all the power when it comes to AI, and everyone else will be serfs or outright slaves to them instead of the old scifi dreams where humans would be able to chill out and relax all day.


> Sooner rather than later, IMHO, the sheer amount of global compute capacity available will be enough to achieve that task. Brute force, basically. It doesn't take much imagination other than looking at how exponential curves work.

Only assuming there is something to be found apart from the imagination itself. We can imagine AGI easily, but that doesn't mean it exists, and even if it does, it doesn't mean we will discover it. By that logic - we want something and we spend a lot of compute resources on it - the success of a project like SETI would be guaranteed based on funding alone.

In other words, there is a huge gap between, on the one hand, something we are sure can be done but that requires a lot of resources, like a round trip to Mars, where we can even speculate it can be done within 10-20 years (and still be wrong by a couple of decades), and on the other hand, something we just hope to discover based on the number of GPUs available, without the slightest clue of success other than funding and our desire for it to happen.


The thing is, for economic devastation you don't (necessarily) need an actually "general" intelligence that's able to do creative tasks - and the ethical question remains whether "creative humans" aren't just a meat-based PRNG.

A huge amount of public-service and corporate clerkwork would be served well enough by an AI capable of understanding paperwork and applying a well-known set of rules to it. Take a building permit application: an AI replacing a public servant has to be able to actually read a construction plan, cross-reference it with building codes and zoning, and check the math (e.g. statics). We're not quite there yet, with an emphasis on the "yet" - in particular, at the moment even AI compositions with agents calling specialized AI models can't reliably detect when they don't have enough input or knowledge, and just hallucinate.

But once this fundamental issue is solved, it's game over for clerkwork - even assuming the Pareto principle (i.e., the first 80% are easy, only the remaining 20% are tough), that will cut 80% of employees and, with them, the managerial layers above. In the US alone, about 20 million people work in public service [1]. Take 50% of that (to account for jobs that need a physical human, such as security guards, police and whatnot) and you get 10 million clerkwork jobs; take 80% of that and you get 8 million unemployed people in government alone. There's no way any social safety net can absorb that much of an impact, and as said, that's government alone - the private sector employs about 140 million people; do the same calculation for that number and you get 56 million people out of a job.

That is what scares me, because other than the "AI doomers" no one on the Democrat side seems to have that issue even on their radar, and the Republicans want to axe all regulations on AI.

> without the slightest clue of success other than funding and our desire for it to happen

The problem is, money is able to brute-force progress. And there is a lot of money floating around in AI these days, enough to actually make progress.

[1] https://www.statista.com/statistics/204535/number-of-governm...


Ah, I see your point, and I agree. We've seen how it plays out in places where greedy entrepreneurs brought in waves of immigrants to do sub-minimum-wage work, and what effects that had on society, so I agree about the consequences.

However, at least for LLMs, progress has slowed down considerably, so we're now at a place where they are a useful extension of a toolkit and not a replacement. Will it change dramatically in 20 years? Possibly, but that's enough time to give people a chance to adapt. (With a huge disclaimer: if history taught me anything, it is that all predictions are as useful as a coin toss.)


> Will it change dramatically in 20 years? Possibly, but that's enough time to give people a chance to adapt.

Yeah, but for that, politicians need to prepare as well, and they don't. All that many of today's politicians care about is getting reelected or, at the very least, lining their pockets. In Germany we call this "nach uns die Sintflut" [1], roughly translated as "after us, the floods may come".

Here in Germany, we at least have set up programs to phase out coal over decades, but that was for a few hundred thousand workers - not even close to the scale that's looming over us with AI.

[1] https://de.wikipedia.org/wiki/Nach_uns_die_Sintflut


Somewhen?


Your statement could be seen as sarcastic... or not... and in itself that is tragic...


Maybe if we put them in some sort of four-armed exoskeleton.


If nobody has a job, I am unsure who will be left to pay the subscription fee.


The other way around maybe?

They think the employers are going to line up to pay billions for AI workers in order to avoid paying trillions in benefits to human ones?


And who will buy the products and services of these "employers" when nobody has a job?

See, you can keep adding middle layers, but eventually you'll find there's no one with any money at the bottom of this pyramid to prop the whole thing up.

When the consumer driven economy has no critical mass of consumers, the whole model kinda goes belly up, no?


Perhaps a feudal economy where it is just the wealthy who consume. The rest of humanity is pure subsistence.


Who buys the products then?


Who will be the customers for those employers?


5 million Claude agents? Oh wait, that's a billion. Yeah, it's not happening.


And you will forever have your name mispronounced by English speakers who can’t wrap their tongues around the -sma or -stra ending.

Things like TerpSON, TerpSmith, Terp….

One time while voting, the lady working there butchered my name 8 times; she literally could not get it right.


To be fair to an English speaker reading your name from paper: some native English speakers are taught to read by recognizing words by their first letter and their shape, and skipping the word to later fill in the blanks when they don't recognize the word. The lady may have simply never been taught how to sound out unfamiliar letter combinations, and may have been trying her best to make sense of the unrecognizable mess of letters she saw in front of her.

https://www.apmreports.org/episode/2019/08/22/whats-wrong-ho...


Oh gosh, this explains so much.

I always felt that many native English speakers can't really parse a text properly. They seem to react to certain keywords. When the text says something they didn't expect, they often miss it or get confused.

I thought it might be a side effect of being monolingual and hence having a less explicit understanding of language, but seeing how they are taught to read, things make perfect sense.

It is crazy how much staying power bogus science has in education. Reminds me how the idea of individual learning styles is still popular even though it lacks empirical evidence.


She wasn't even reading it; I was telling her my name. All she had to do was repeat it, and she couldn't do it.


I challenge you to go to China and ask people to make fun of you if you are unable to correctly pronounce half their words. Not because of stupidity but because of a mix of not hearing the subtle difference ("but that's exactly what you said!") and being unable to accurately reproduce a sound that you hear.

As kids, we have the ability to make lots of noises. Kids learning languages keep those skills alive. Over time, we lose that ability for sounds that we don't use regularly, and re-acquiring that capability is really hard.


And the 13-episode podcast from them: https://features.apmreports.org/sold-a-story/


Can confirm, usually misunderstood too.


> she literally could not get it right

Eh, they get their revenge. As any Australian of a certain age can tell you, TelSTRA could not get it right, for any value of right, without expending an effort equivalent to moving a mountain.


* Step 2: the design is "tested" with the users; later we find out the users really had no idea what was going on because they weren't really paying attention. Then the real product is delivered and they are shocked that the changes were made behind their backs and without their input.


That sucks. Since I developed a neurological issue from Covid, I could really use a refresh of my nervous system.


Be careful what you wish for.

(With an understanding that where you are now undoubtedly sucks. My sympathies.)


Doesn't look like they are paywalling existing features, just some new AI-backed functionality. Considering how much it costs to run those sorts of features, I can understand.

Plus, there are lots of alternative apps that are free and easy to download and install.


The language that could have been. I did a bunch of Delphi around the Delphi 5 and 6 era. It was a nice middle ground between VB6 and C++.

Unfortunately it made my startup more difficult to sell. Eventually Microsoft bought it, but there were a lot of rounds.


Delphi is still reasonably popular in some niches. It has a powerful but easy WYSIWYG GUI builder and is close to the hardware without being C++, making it decently popular for tooling for industrial hardware.

Though their website makes me suspect they have given up trying to find new customers and are just building new features for the customers they already have.


In Germany we still have enough folks to keep a conference going.

https://entwickler-konferenz.de/en/


I started with version 1.0. Such an incredibly elegant language. I chose it so as not to feed the Microsoft monopoly, but in the end, and after the most incredible f*ck-ups by Borland, VB won. I used D7 for many years. Now I use it only as a hobby language.

What do you mean Microsoft bought it? It was never bought by MS.


I think he means that Microsoft bought his own startup whose product was written in Delphi.


I think "it" is the startup, not Delphi.


Microsoft did snipe some of the key people at Borland who created Delphi to get from VB4 to VB6.


Microsoft also bought Skype, which was also written in Delphi.


You forgot: Engineering is requesting this. So no.


Same boat here with a chronic illness: not fatal, but no cure either. It gets tiring wading through all the snake-oil salesmen selling false hope. And it isn’t even them directly, because my older family members will hear about it and come at me with “have you tried…”

