cowboylowrez's comments

This is what I've been using the freebie Gemini chat for, mostly: example code, like reminding me of C stdlib stuff, JavaScript, a bit of web server stuff here and there. I think it would be fun to give Google's agent or CLI stuff a spin, but when I read up here and there about Antigravity, I'm seeing that people are getting their accounts shut down for stuff I would have thought was OK, even if they paid for it (well, actually, as usual the real reasons for accounts getting zapped remain unknown, as is today's trend for cloud accounts).

I'm too poor for local LLMs; I think there might be a 2 or 4 GB graphics card in one of my junk PCs, but that's about it lol


"Boris Cherny" seems pretty good at this enshittification stuff. Think about it, normal coders would consider having a config like show details or don't, you know, a developers preference but no this guy wants you to control-o all the time, read the article its right there what this guy says:

" A GitHub issue on the subject drew a response from Boris Cherny, creator and head of Claude Code at Anthropic, that "this isn't a vibe coding feature, it's a way to simplify the UI so you can focus on what matters, diffs and bash/mcp outputs." He suggested that developers "try it out for a few days" and said that Anthropic's own developers "appreciated the reduced noise.""

Seriously man, whatever happened to configs that you can set once? They obviously realize that people want it, given the Ctrl-O, so why make them do this over and over without a way to just set it once in a config, or whatever the CLI does, like maybe:

./clod-code -v

or something. Man, I dislike these AI bros so much. They're always about "your personal preferences are wrong," but you know they're lying through their smirking teeth; they want you to burn tokens so the earth's habitability can die a few minutes earlier.
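And by "set it once in a config" I just mean something like a hypothetical rc file; the file name and key are both made up, riffing on the joke name above, not anything I've verified the real CLI supports:

  # ~/.clod-coderc : hypothetical; both the file and the key are invented
  show_details = true

Set it once and never think about it again. That's all anybody is asking for.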


Speaking of burning tokens, they also like to waste our tokens with paragraphs of system messages for every single file read you do with Claude. Take a look at your .jsonl transcript files and search for <system-reminder>.
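Something like this counts how many of them are in there (that path is where the transcripts live on my machine; yours may differ):

  grep -o '<system-reminder>' ~/.claude/projects/*/*.jsonl | wc -l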

That's my thought too. The chatbot bros probably feel the need to be responsive, and there's probably an express lane to update a trivia file or something lol

I got your point, because it seemed that you were precisely avoiding the anthropomorphizing and were in fact homing in on what's happening with the weights. The only way I can imagine these models handling trick questions lies beyond word prediction or reinforcement training, UNLESS the reinforcement training comes from a world simulation that is as complete as possible, with as much of the mechanics as possible, and the neural networks are let loose to train on that.

For instance, think of chess engines with AI: they can train themselves simply by playing many, many games. The "world simulation" there is the classic chess engine architecture, but it uses the positional weights produced by the neural network. So says Gemini, anyway:

"ai chess engine architecture"

"Modern AI chess engines (e.g., Lc0, Stockfish) use a hybrid architecture combining deep neural networks for positional evaluation with advanced search algorithms like Monte-Carlo Tree Search (MCTS) or alpha-beta pruning. They feature three core components: a neural network (often CNN-based) that analyzes board patterns (matrices) to evaluate positions, a search engine that explores move possibilities, and a Universal Chess Interface (UCI) for communication."

So with no model of the world to play with, I'm thinking the chatbot LLMs can only go with probabilities, or with whatever matches the prompt best in the crazy high-dimensional thing that goes on inside the neural network. If they had access to a simple world of cars and car washes, they could run a simulation and rank answers appropriately, and could also infer, through either simulation or training on those simulations, that if you are washing a car, the operation will fail if the car is not present. I really like this car wash trick question lol
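Just to make the car wash point concrete, here is a toy sketch of what I mean by a world the model could actually run actions against. This is entirely my own made-up example, not anything from a real training pipeline:

  # Toy "world simulation": actions have preconditions the world state can check,
  # so a plan like "wash the car" can actually fail when no car is present.
  # (Made-up illustration; not how any real RL-for-LLMs setup works.)

  def wash_car(state):
      """Attempt the wash; it only succeeds if a car is actually there."""
      if not state.get("car_present"):
          return False, "failed: there is no car to wash"
      state["car_clean"] = True
      return True, "ok: car washed"

  def score_plan(state, actions):
      """Run a plan against a copy of the world and count how many steps succeed."""
      world = dict(state)
      successes = 0
      for action in actions:
          ok, message = action(world)
          print(message)
          successes += ok
      return successes

  # With a car present the plan scores 1; without one it scores 0,
  # which is the kind of signal you could feed back as a reward during training.
  print(score_plan({"car_present": True}, [wash_car]))
  print(score_plan({"car_present": False}, [wash_car]))

The point is that the failure comes from the simulated world checking a precondition, not from statistics about which words tend to follow which.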


yep, 18+, show ID at the time of purchasing access. Soooo easy, and zero technical issues.

just make the internet 18+, solves everything. Demand IDs at the time of purchase.

ICE agents will LOVE this one neat hack

purchasing alcohol must be a dream come true for them lol

Weird jump between voicing your political points online and which beer you prefer, but okay.

Sorry, I probably needed to spell it out for you: when you buy an 18+ or 21+ age-limited product, you have to physically show up for it and show your ID. Registering to vote is also pretty age-limited. Now, at least for the alcohol thing, parents I think can actually give their kids drinks and get away with it if they don't get caught, heh. I remember my dad gave me my first beer, but you can rest assured that if I had proceeded to knock back my brother's secret stash of 90-proof shit on the regular in front of my dad, well, shit would have turned out pretty much the same as when my brother ratted me out for stealing his own hash pipe lol

Oops, forgot to mention: you can also check the potential internet customer's ID at the time they purchase the internet connection or obtain a Wi-Fi login. I hope that I've clarified the similarity here and that there was actually no "weird jump between voicing my political points online and which beer I prefer".


If you made smartphones an 18+ item like alcohol, many of these problems would go away.

That would also spur the market to produce actually nice, pure communication devices. Flip phones could go back to not being just for people with AARP cards, and adults who don't want a smartphone with them all the time would get better options.

And have schools stop giving kids laptops or tablets. I wonder how much of the Chromebooks-for-schools initiative was about developing a new market for Google.

It's wild seeing these opinions on hackernews of all places. Do we want future generations to know nothing about computing?

I would not be here if I didn't get my start in my early teen years.


When I was an early teen I had access to the internet, but my activities weren't entirely unsupervised (and I doubt yours were either). Since it was a new technology, there was a lot of discussion around how best to talk to children and make sure they felt safe reporting threats or harms to parents.

A smartphone is too disconnected a device compared to the desktops we all grew up on. No one is talking about fully banning under-18s from the internet (at least no one serious) - it's a discussion about making sure that the way folks under 18 use the internet is reasonably safe and that parents can make sure their children aren't being exposed to undue harm. That's quite difficult to do with a fully enabled smartphone.


Mine was not supervised because I had immigrant parents who didn't really know anything about computers. So more or less entirely unsupervised.

By 16 I was regularly ignoring my parents telling me to go to bed because I was up coding or gaming and doing dumb script-kiddie stuff on IRC.

I had an adult introduce me to Astalavista (https://en.wikipedia.org/wiki/Astalavista.box.sk)

Thinking back to that, I was very well aware of the fucked-up parts of the internet, much more so than most adults around me. People did in fact meet up in person with strangers from the internet, even back then.

I think it's more important to teach kids around ages 10-14 about the dark side of the internet so that as late teens they know how to stay safe, rather than simply throwing them into the reality of it unprepared as "adults".

Also frankly I don't want to know the search history of a late teen. There's a degree of privacy everyone is entitled to.


Do you think the younger generations are properly prepared to view the internet as having a dark side? My impression has been that such an early introduction has caused those warnings to be delayed and lost, and younger folks are much more trusting of the internet than most millennials were.

It's also important to acknowledge that in our day not every kid used the internet, and how they used it varied wildly. Nowadays it's an expectation for everyone to be at least moderately online (often required by academia), and often their presence is tied to their real name.


I think so, yes. What's acceptable changes over time. Gore "content" from WW2 is now presented to 12-year-olds as history.

It's not the porn or the LiveLeak gore content that would have me worried. It's groomers and other adults with bad intentions. That's not something you can easily block, and not something this ID check will stop. A groomer will slow-burn a social relationship with someone until they are a legal adult. That's something you can only teach someone to look out for. And even adults are susceptible to this.


I remain skeptical. I have some twenty-something cousins who seem to be highly aware of the potential dangers online and were pretty clear-eyed about it in their teens - but the relatives I have who are five years younger seem absurdly trusting. This is anecdotal of course, but it's concerning to me.

We would hope the power plant, water utility, etc. are regulated. AI isn't even a stationary target yet, and additionally there are pushes to simply ban AI regulation by the states, while a certain percentage of us believe the branch of the federal government called the executive branch is run by flat-out whackadoodles.

Yep, and for the finances to make sense, these AIs need to make a significant impact on employment rolls, i.e., they need to replace humans. Personally I think it's the wrong tech at the wrong time, but I don't think this timeline and I are a very good match, so ymmv.

"This page is not supported. Please visit the author’s profile on the latest version of X to view this content.

twitter is useless haha

https://shumer.dev/something-big-is-happening

It's a pretty good article, but the tl;dr is that AI is getting really good according to the author, and he believes it will impact employment etc. because a couple of the latest models are that good. I have to agree, and the thing is that many folks are writing that these AIs have to replace or automate a really significant number of jobs for the financials to make sense.

Personally I think these giant models are the wrong tech at the wrong time, but there's no denying these things are getting very good at what they do.


