Hacker News | fao_'s comments

Really doesn't surprise me, to be honest:

> We strongly recommend contributing with Claude Code or similar AI coding tools.


so which one, the coding by hand part?

I disagree with your framing of cynicism as an "agenda". For the record, I agree that the maker movement hasn't actually ended, and most of your points are correct; however, the idea of LLMs teaching Electronics worries me about as much as people using LLMs to learn Chemistry.

A little while ago I had to dissuade someone from learning Chemistry via an LLM, because the advice that they had been given by the LLM would have very literally either blown up the glassware, throwing molten chemicals all over their clothing, or killed them when they tried to taste whatever they were trying to synthesize. There was no consideration of safety protocol, PPE, proper glassware, or correctly dealing with chemical reactions, and nary a mention of a fucking fume hood. NileRed and a few other chemistry youtubers have utterly woeful approaches to laboratory safety (NileRed specifically I have a chip on my shoulder about — I've seen him practice bad lab work on a number of occasions and violate many of the common safety practices from e.g. Vogel's), but even then they do still take precautions! Let it not be forgotten that safety practices are born through bloodshed. Now we have a whole new wave of people who are excited to learn, and that's great, but one stray hallucination will kill them. I'm sure that the LLM will be more than happy to write an "Oh I'm sorry, it's my bad that I forgot to tell you to double glove when handling organic mercury!" but by then it is too late.

The idea of someone learning, say, House DIY from an LLM and then sawing through the joists or rewiring their electronics is utterly terrifying to me, quite frankly. Likewise, the idea of someone following an LLM's instructions and then blowing themselves up in a shower of capacitors or chemical glassware is also utterly terrifying to me.

Yes, you could do all these things before. But at least the most commonly available learning materials to you were trustworthy and written by experts!


I guess we have to agree to disagree, because I am not particularly interested in chemistry and ChatGPT has been extraordinarily helpful in demystifying electronics. Having 24/7 access to a patient person who can unpack the difference between TTL and CMOS logic or when you'd choose a buffer instead of a Schmitt trigger without belittling you for not already knowing what they know is awesome and not going to get anyone even slightly killed.

Electronics can kill too. IIRC capacitors in CRTs are particularly deadly. Though I suppose someone using LLMs only as a first step, much like Wikipedia, is probably at much less risk than someone using it as their only source.

Yeah, okay but... look, I concede that someone who shouldn't be doing anything except watching passive entertainment could absolutely take insane advice from an LLM (or a sociopathic human) and seriously hurt themselves.

But raw dogging capacitors in CRTs is such an overtly straw man argument in this conversation. People who are cleaning bathrooms for the first time can hopefully be trusted not to drink the bleach, right?

If someone licks a running table saw because an LLM said it would be fine, we're talking about entirely different problems.


Don't worry, LLMs are perfectly ok for getting information. Just ask drugs.com about penisomab https://bsky.app/profile/harrisonk.bsky.social/post/3mfs6adw...

Again: not doing anything at all with health or chemistry. They aren't what I am interested in, even peripherally.

What you seem to be missing is that LLMs are better at/for some things than others. Legal review, 3D geometry, therapy and apparently chemistry are off the list.

It doesn't make sense to project that onto domains where it excels.


What are you doing with TTL logic in 2026, out of curiosity?

(I’m not saying it’s not used, but the only thing I’d use TTL for is building old circuits out of the Forrest Mims books.)


Reasonable question and hopefully an interesting answer...

The simple lack of reasons to use TTL logic in 2026 was exactly why I didn't know what the deal was. It'd never come up, but I'd see it referenced.

I'm self-taught and in defiance of the people who insist that LLMs turn our brains to passive mush, the more things I learn the more things I have to be curious about.

LLMs remove the gatekeeping around asking "simple" questions that tend to make EEs roll their eyes. I didn't know, so I asked and now I know!


What was the answer?

I’m just curious at this point about what the quality of the answer is, just because you made a point about LLM use not turning your brain into mush.

I’ve not really used LLMs to answer questions, since it hasn’t gotten me the answers I wanted, but maybe I’m just set in my ways.


I'm actually pretty thrilled that you asked, because I think that this chat is an extremely solid example of LLM usage in the EE domain, and I'm happy to share.

https://chatgpt.com/share/69a184b0-7c38-8012-b36d-c3f2cefc13...

I definitely led some questions to try and squeeze new-to-me perspectives out of it; for example, there could be tricks that make the active high variant more useful in some scenarios.

I think it does a good job of surfacing adjacent questions you might not realize you were eager to ask, as well as showing how it's able to critically evaluate real-world part suitability. I do find that ChatGPT in particular does better with a screengrab of the most likely parts than with a URL to the search engine.


I see the chat, but it looks like you’re not actually considering using TTL anywhere, and ChatGPT isn’t giving any explanations about TTL?

> I would definitely like to understand HCT vs HC (CMOS vs TTL) much better than I do, which currently isn't at all.

I think what ChatGPT should have explained at the beginning is that both HCT and HC are CMOS logic families, it’s just that HCT is designed to interface with TTL (receive TTL signal levels as inputs). The outputs are the same (CMOS outputs are rail to rail, which you can feed into TTL just fine).

Actual TTL logic, like the 7400 series and the variations (LS is one of the more popular variations), uses NPN transistors as inputs and to pull output signals low. It uses resistors to pull the signals high. The result is a lot of current consumption and asymmetrical output signals… maybe a good question to ask ChatGPT is “why does TTL use so much current?” CMOS, by comparison, uses a tiny amount of current except when it is switching.
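The threshold difference between the two input styles is easy to see numerically. A quick sketch, using the usual textbook nominal values for 5 V operation (not taken from any specific datasheet, and real parts vary a bit):

```python
# Nominal 5 V input thresholds (textbook values; check real datasheets).
VCC = 5.0

# TTL-compatible inputs (74HCT, 74LS): fixed thresholds regardless of Vcc.
TTL_VIL_MAX, TTL_VIH_MIN = 0.8, 2.0
# True CMOS inputs (74HC, 74AHC): thresholds scale with the supply.
CMOS_VIL_MAX, CMOS_VIH_MIN = 0.3 * VCC, 0.7 * VCC  # 1.5 V and 3.5 V

def noise_margins(vih_min, vil_max, voh_min=4.9, vol_max=0.1):
    """How much noise a near-rail-to-rail 5 V CMOS output can pick up
    before the receiving input misreads it (high margin, low margin)."""
    return voh_min - vih_min, vil_max - vol_max

print("TTL-compatible input:", noise_margins(TTL_VIH_MIN, TTL_VIL_MAX))
print("CMOS input:          ", noise_margins(CMOS_VIH_MIN, CMOS_VIL_MAX))
```

The mid-rail CMOS thresholds give symmetrical margins of roughly 1.4 V each way, while the TTL-style thresholds leave only about 0.7 V of low-side margin, which is exactly the asymmetry described above.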

I would probably choose AHC first as a logic family these days. It’s a slightly better version of HC, but it’s not so fast that it will cause problems.

Just peeking at one of the recommendations in the chat, if you search for 74HCT125 or 74AHC125 on Mouser, you’ll see that the AHC has more options available and more parts in stock. That’s a sign that it’s probably a more popular logic family than HCT, which is something I consider when buying (more popular = better availability).


Thanks so much for the additional context. You've given me more to dig into.

What I would like to know from you is:

1. On the whole, is the information you see it presenting more or less coherent and useful? Is it better to have this information than not have it at all?

2. Where does this land in terms of your expectations? Did anything surprise you?

It's clear from your reply that you know what you're talking about, while I'm still clawing my way up from nothing... so it makes sense that you have fewer things that you need to ask about.

I've bootstrapped my entire EE skillset over the past 2-3 years, largely with the help of LLMs to interrogate. It's helped me design and build my first product. I'm confident that without these tools, it's not a question of how long it would have taken so much as the truth: it would have died on the vine.

Follow-up: https://chatgpt.com/share/69a184b0-7c38-8012-b36d-c3f2cefc13...

I asked it about the AHC family equivalent and it recommended against using it, suggesting either AHCT or sticking with HCT. For what it's worth, the reference board that I'm tracing uses an HCT, so the LLM isn't wrong.

Note that at the time I'm writing this, I have an extremely fuzzy understanding of the difference between these three... but I'm working through it.


I’m mostly just curious about how people use LLMs to learn. I don’t know what your goals are, and even if your goals were the same as mine, I don’t know how LLMs stack up against the way I learned (mostly from books). At least, not long-term. I’m not that good at electronics, I’m just a hobbyist that went through Forrest Mims mini-notebooks and later Horowitz and Hill.

What I like about information from humans is that humans are always trying to figure out how to say things that are relevant and informative. By “relevant”, I mean that we try to avoid saying things that don’t help you. By “informative”, I mean that we try to include information that you want to know, even if you didn’t specifically ask for it.

Picking on the chat for a moment—when you started out with the question, my first thought was, “This person is specifically asking about HC versus HCT, but maybe they want a broader overview of logic families, and maybe they want to understand which logic family to pick for their hobby project.” That’s an example where I think ChatGPT could have identified something that you wanted to know, but didn’t. (It wasn’t as informative as it could have been.)

Then there’s some times that ChatGPT gave you information of dubious relevance.

> Important: On the HCT125, the enable is active-LOW.

I don’t think that’s contextually important. It’s like saying, “Important: On the Honda Civic, the gas tank is on the left.” That’s contextually important when you’re at the gas station, but not when you’re buying a car.

I’m not sure why the LLM is recommending the TTL-compatible chips. IMO, the right thing to do here is probably to run everything at 3.3V, unless you have something that specifically needs 5V. When everything is at 3.3V, you don’t have to think about level shifting and you can just pick a very boring logic family like AHC. But I don’t know what you’re building. Likewise, I would lean towards using normal CMOS logic levels, unless I had a specific reason to choose TTL-compatible. The regular CMOS versions have better noise margin, because the threshold is in the optimum place—right in the middle.
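The 3.3 V-vs-5 V interfacing question can be sanity-checked numerically. A minimal sketch, again using generic textbook thresholds rather than figures from any specific datasheet:

```python
# Can a 3.3 V microcontroller pin legally drive a logic input powered at 5 V?
VOH_3V3_DRIVER = 3.2   # rough worst-case high level from a 3.3 V CMOS output

# Minimum voltage each 5 V-powered input family recognises as "high":
VIH_MIN = {
    "74HC (CMOS-threshold input)": 0.7 * 5.0,  # 3.5 V
    "74HCT (TTL-threshold input)": 2.0,
}

for family, vih in VIH_MIN.items():
    verdict = "OK" if VOH_3V3_DRIVER >= vih else "out of spec"
    print(f"{family}: needs >= {vih:.1f} V, gets {VOH_3V3_DRIVER} V -> {verdict}")
```

This is why the TTL-threshold (HCT/AHCT) variants keep coming up in these discussions: they're the ones whose input spec a 3.3 V output actually satisfies, even when nothing genuinely TTL is involved.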


I can actually clear a lot of that up. ChatGPT has accumulated a significant amount of ambient knowledge about what I'm working on and how I typically progress through asking questions, so the path isn't as blue sky as it appears.

For example, I'm working with a specialty SPCO switch IC that runs at 5V. There's never been and likely never will be a 3.3V version of the AS16M1. Being able to drive the switch (which functions like a shift register) from my ESP32 is top-of-mind.

The HCT125 being active low is directly responding to my question about why to choose it vs the 126 version; since the board I'm studying (which again, it has seen) uses 125s, it's reasonable to wonder why they'd choose one over the other.

Overall, the tone of chats on EE topics tends to be task-focused with permission to go on interesting side quests. I'm trying to get stuff done with room for relevant exploration along the way.

Does that change anything for you?


Sure, that makes sense. Use the 74AHCT125/6 or 74HCT125/6 to drive a 5V chip from a µc with 3.3V IO.

I think you're in the minority of people that are using LLMs for one of the best uses - for augmenting your own understanding and intelligence. Of course you have to triangulate and triple check what they say, but that's a good habit to get into anyway. Many of my teachers would repeat tribal myths all the same.

> Having 24/7 access to a patient person

It’s not a person. You understand that, right? I have to ask considering the number of people who are “dating” and wanting to marry chatbots.

It’s a tool. There’s no reason to anthropomorphise it.


I'm glad that you brought that up, because I actually hovered on my response precisely because of those words. Specifically, I wondered if I could reliably count on someone showing up to say something patronizing and unnecessary.

This particular combination of snark, faux-concern and pedantry doesn't help the point you're trying to make about my loving AI wife.


It was not my intention to be patronising nor snarky, nor was I the least bit concerned for you (faux or otherwise). Though on a reread I do understand how my reply can be understood as unkind. I regret that and apologise for it. It was not my intention but it was my mistake. I should’ve made it shorter:

> It’s not a person; it’s a tool. There’s no reason to anthropomorphise it.


Fair enough; I appreciate the follow-up.

Without wanting to be argumentative, I would push back and say that I really did stop to consider my implied assignment of personhood before committing to it. I went with it because it reflects both the role it plays - you'll be relieved that I stopped short of deploying "mentor" - and the fact that English is highly adaptable; the linguistic tug to use "they" already feels very comfortable in relation to LLMs. Buckle up!


> you'll be relieved that I stopped short of deploying "mentor"

Funnily enough, I think that might’ve been better. I don’t think a mentor has to necessarily be human; one can learn from nature or pets. Or even a machine: Stockfish can teach you to play better chess and give context as to why you fumbled and how to do better next time.

I just don’t think LLMs are people and that we should avoid anthropomorphising them (for a whole plethora of reasons which are another discussion). I’m not even saying I think there could never be a robot which is a person. Just not what we have now.


noooo we have to go back to stackoverflow to feel small.

> The idea of someone learning, say, House DIY from an LLM and then sawing through the joists or rewiring their electronics is utterly terrifying to me, quite frankly.

Can't wait for the load-bearing drywall recommendations coming from LLMs that were trained on years of Groverhaus content.


> This is actually a clean commodity price spike because it’s specifically not for market manipulation or financial engineering. It’s because demand for this product really did explode overnight.

Based on how the same $3 billion has been circling between Anthropic, OpenAI, Nvidia, Google, Microsoft, Amazon, and a few other companies... I really doubt that this is the case, to be honest.


I think it's reasonable to distinguish which side drove this. RAM prices are going up but it's not engineered primarily by RAM manufacturers. They are naturally jumping on the bandwagon and responding, but they aren't the drivers. Of course, how they respond matters. They could make other choices. Over time we'll see how this goes because AI could cool and then RAM manufacturers end up in a spot where they choose to manipulate prices to keep them higher.

IDK, I understood them perfectly well.

But what were they actually saying? They just used the phrase "college-educated" and several synonyms as an insult to put themself forward as just some working-class Joe who has no time for rich people and their hoity-toity high-and-mighty philosophizing.

If I was to be charitable, I guess maybe their argument was that Kant only believed in subjective universality because he was rich, but that doesn't make any sense. Both Kant and Hume grew up middle class, and ended up in academia, and had very different conclusions about what "taste" is.

It's just a knee jerk reaction to dead white men philosophers and anyone who is interested in them as a bunch of elitists. That's not an argument, that's some kind of misplaced class resentment masquerading as an argument.


That's not what they said at all, though; sorry, but the only one having knee-jerk reactions here seems to be you.

Idk, I've read a lot of Selridge's comments up and down the whole post now, and it really seems like any idea of taste defaults, for them, to classism; they then misapply that framework here, which is realistically one of the fairest arenas.

If someone likes what you make it doesn't matter where you come from.


It doesn’t default to class, people just pretend class doesn’t apply at all.

Taste is often advanced as this subjective yet ultimately discriminating notion which refuses to be pinned down. Insistent but ineffable. This idea that you and I know what good software is due to having paid dues and they don’t, and the truth will out, is a common one!

My argument isn’t that it’s class. It’s that this framework of describing taste is PURPOSE BUILT to ignore questions like status, access, and money in favor of standing in judgment.


I hear you, but I at least try to disarm that notion. I even have a footnote talking about how taste is entirely group-dependent and measured by reception, so while I think your point is more broadly applicable, I feel it has less to do with what I was writing about, which is broadly in the technical realm, which I feel is pretty meritorious.

Yeah, that’s the concern.

We are in the middle of an earthquake. The 90s were like this, but it’s bigger. Radical changes in what it means to build software are happening right now. That will, without a shadow of a doubt, result in equally radical changes in what constitutes good and bad work.

Maybe, just maybe, the thing that seems really durable (taste) is already getting put into a blender that’s still running.


Ok, so what were they saying?

Lmao. I love Kant. He’s great. I love dead white guys. One I’ve been banging on about in this thread is Bourdieu, who wrote a whole book on taste in France, Distinction. Here Bourdieu has the matter rightly and Kant doesn’t. Sometimes that happens. When you read a lot of dead white guys you find lots of them said very wise shit and also stuff that’s harder to find the wisdom in.

Here I don’t know what the trouble is. I’m sorry for calling your phrasing the equivalent of “highfalutin” (a word Marx used more than twice—he’s dead and white), but what do you expect, having come in to cloud the waters with 2 extra syllables to little end?


I know that I'm both pretentious and inarticulate. It's a rough combo. But I resented the idea that what I was saying was inauthentic. I legitimately love Kant, even though reading him is like trying to hammer nails through my skull.

He's quite good sometimes. But we don't always need to reach for that kind of writing if we struggle. If you want something from that era which is written by a young man who is trying to set the world on fire, you should try: https://en.wikipedia.org/wiki/Preliminary_Discourse_to_the_E...

This dude, Diderot, is gonna make a new encyclopedia of the world with his friends which breaks the monopoly on printed and well-regarded learning that was held by the traditional humanities. He wanted articles about the trades, about objects and engineered things, and he was PISSED OFF that he had to fight for it.

Is this guy's idea of how to organize knowledge "right"? Probably not. Will it light your brain on fire and make you grumpy or nosy or suspicious about categories of knowledge that persist still? YEAH.


Maybe it's me, but only the first quote seems cumbersome, and it wasn't very cumbersome in the article when I read it in context.

“I had no issues with complex sentence structure, therefore the whole planet is fluent in English and college-level literate.”

Simpler language is an accessibility issue


Being able to easily and quickly read scientific literature is not a universal trait. You're in the top 1% (probably top 0.1%?) if you're able to do that and actually understand the source material.

The average person has a hard time reading and fully understanding a newspaper article or cooking instructions on a pre-prepared meal.


The first paragraph is fine -- I agree. The second paragraph is a silly hyperbole that comes up over and over again on HN and needs to die. Major newspapers are written for about 8th grade level reading comprehension. Cooking instructions on a prep'd meal are probably much lower -- maybe 5th grade. The "average person" (whatever that term means) living in highly developed nations can read at 8th grade level or above.

Well, except for America, according to statistics :P

> There needs to be a statute of limitations just like there is for reporting the crimes.

The UK does not have a statute of limitations


The UK has multiple legal systems.

https://en.wikipedia.org/wiki/Limitation_Act_1980

Applies to England and Wales; I believe there are similar ones for Scotland and NI.


No? So I can report a petty theft from 35 years ago?

Even with reinforcement learning, you can still find phrases and patterns that are repeated in the smaller models. It's likely true with the larger ones, too, except the corpus is so large that you'll be hard-pressed to pick out which specific bits.
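For what it's worth, the "repeated phrases" claim is mechanically checkable: collect a pile of sampled model outputs and count word n-grams; the tells are the ones that recur across most samples far more than they would in human text. A toy sketch (the sample strings are invented for illustration; real use would need thousands of samples):

```python
from collections import Counter

def ngram_counts(texts, n=3):
    """Count word n-grams across a collection of texts."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts

# Hypothetical model outputs, made up for this example.
samples = [
    "it's important to note that the result may vary",
    "it's important to note that this is not advice",
    "overall, it's important to note that caution helps",
]

# Phrases that show up in most samples are candidate "tells".
for phrase, count in ngram_counts(samples, n=4).most_common(3):
    if count > 1:
        print(f"{count}x: {phrase}")
```

With a small model's limited phrasebook, a few n-grams dominate; with a frontier model the distribution flattens out, which is the "fat chance of picking out which bits" point above.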

what?

What do you mean? Are you claiming it's hard to recognize the features of speech of large models? It's really not. There are famous Wikipedia articles about it. Heck, an em dash, a single character, is often a pretty good clue.


That is not what I was saying at all :P

"By today's standards"? no that's just FORTH code

> The counterpoint of this is Linux distros trying to resolve all global dependencies into a one-size-fits-nothing solution - with every package having several dozen patches trying to make a brand-new application release work with a decade-old release of libfoobar. They are trying to fit a square peg into a round hole and act surprised when it doesn't fit.

This is only the case for Debian and derivatives, lol. Rolling-release distributions do not have this problem. This is why most of the new distributions coming out are Arch Linux-based.


I'm going to need a source for both of those claims.

It sure sounds very Debian-ish, at least. I’m a Fedora user, and Fedora stays veeeery close to upstream. It’s not rolling, but is very vanilla.

Agreed, but I don't think that has to do with either its "vanillaness" or the 6 month release schedule. Fedora does a lot of compatibility work behind the scenes that distros not backed by a large company more than likely couldn't afford.

"Vanillaness" is exactly what's in-scope here:

> ...every package having several dozen patches trying to make a brand-new application release work with a decade-old release of libfoobar.

Applying non-vanilla flavor (patches) to libraries in order to make new packages work with old packages. (It's not just a library thing of course--I've run into packages on Debian where some component gets shimmed out by some script that calls out to some script to dynamically build or download a component. But I digress.)

Maybe I'm just out of the loop here, but I'm not aware of this being a general practice in Fedora. Yes, Fedora does a lot of compatibility work of course, but afaik the general practice isn't to add Fedora-flavored patches.


`crote` claimed:

> with every package having several dozen patches trying to make a brand-new application release work with a decade-old release of libfoobar.

Quite frankly, as someone who started distro-hopping around ~2009 and only stopped around 2020, I have experienced a lot of Linux distributions, as well as poked at a lot of maintainer pipelines — it is simply categorically untrue for the majority of non-Debian-derived Linux distributions.

It used to be that a decent number of Linux distributions (Slackware, Debian, RedHat, whatever) put in a lot of work to ensure "stability", yes. However "stability" was, for the most part, simply backporting urgent security fixes to stable libraries and their last three or so versions. The only system that is very well known for shipping "a decade-old release" of a system library would be Debian (or its biggest derivative, Ubuntu), because its release cycle is absolutely glacial, and most other Linux distributions do give somewhat of a shit about keeping relatively close to upstream. If only because maintaining a separate tree for a program just so you can use an ancient library is basically just soft-forking it, which incurs a metric shitton of effort and tech debt that accrues over time.

One of the reasons I switched to and then ran at least one Arch Linux installation for the back half of the 2010s (and used other computers or a dual boot for distro-hopping) was partly for the challenge (I used to forget to read the NEWS before upgrading and got myself into at least one kernel panic that way lol), and partly because it was the only major rolling-release distribution at the time. In the last 6 years that's changed a lot, and now there's a whole slew of rolling-release distributions to choose from. The biggest is probably Steam's Holo distribution of Arch, but KDE's new distribution (replacing Kubuntu as the de-facto "KDE distro") is based on Arch as well, along with I think Bazzite and CachyOS; Arch has always had a reputation (since before the 2010s) for keeping incredibly close to upstream in its package distributions, and I think that ideology has mostly won out now that a lot of Linux software is more generally stable and interacts reasonably well (whereas back before PipeWire, that was not the case).

Now, sure, I'm not going to the effort of compiling my decade+ experience of Linux into a spreadsheet of references of every major distribution I tried in the last ten years, just to prove all of this on an internet forum, but hopefully that write-up will suffice. Furthermore, as far as I can see, the burden of proof is not on me, but on `crote` to prove the claim they made.

Also, I get how if you've only ever used Debian-derivatives, all of this may appear to be incorrect. I would suggest, uh, not doing that — if only because it's a severely limiting view of what Linux systems are like. Personally, I've been a big fan of using Alpine's Edge release since before they had to drop linux-hardened and it's a really nice system to use as a daily driver.


I don't think this is the case, because the AI companies are all just shuffling the same few hundred billion dollars back and forth to each other.

