Hacker News | gopalv's comments

Chrome runs Gemini Nano if you flip a few feature flags on [1].

The model is not great, but it was the "least amount of setup" LLM I could run on someone else's machine.

It supports structured output, but the context window I could use was tiny.

[1] - https://notmysock.org/code/voice-gemini-prompt.html


> The writing isn’t the problem. The problem is that when I’m done, I look at what I just wrote and think this is definitely not good enough to publish.

Ira Glass has a nice quote which is worth printing out and hanging on your wall:

Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know it’s normal and the most important thing you can do is do a lot of work.

Or if you're into design thinking, the Cult of Done manifesto[1] covered this a decade ago.

[1] - https://medium.com/@bre/the-cult-of-done-manifesto-724ca1c2f...


That's the exact opposite of OP's issue, right? He was producing, and it was good, but somewhere along the way he developed good taste (or some facsimile). Ira is claiming that people who are creative beginners start with good taste, which doesn't seem to be the case for a lot of us.

> More addictive than that is the unpredictability and randomness inherent to these tools. If you throw a problem at Claude, you can never tell what it will come up with. It could one-shot a difficult problem you’ve been stuck on for weeks, or it could make a huge mess. Just like a slot machine, you can never tell what might happen. That creates a strong urge to try using it for everything all the time.

That is the part of the post that stuck with me, because I've also picked up impossible challenges and tried to get Claude to dig me out of a mess from very vague instructions, without giving up [1].

The effect feels like the Loss-Disguised-As-Win feeling of the video-games I used to work on at Zynga.

Sure it made a mistake, but it is right there, you could go again.

Pull the lever, doesn't matter if the kids have Karate at 8 AM.

[1] - https://github.com/t3rmin4t0r/magic-partitioning


> The effect feels like the Loss-Disguised-As-Win feeling of the video-games I used to work on at Zynga.

If you can write a blog post about this, I'd like to read it.


This post (https://www.fast.ai/posts/2026-01-28-dark-flow/) covers this well already.

> This sounds like the Loss Disguised as a Win concept from gambling addiction. Consider the hundreds of lines of code, all the apps being created: some of these are genuinely useful, but much of this code is too complex to maintain or modify in the future, and it often contains hidden bugs.


> Somehow human cells age and humans get old and die but humans can somehow make an entirely new creature through reproduction where that is reset

I think the eggs aren't dividing as you age (you are born with them, so to speak) and the sperm is held "outside" the body.

One is in original packaging and the other is produced in a "cooler" environment by the billions, with a heavy QA failure rate of 99.9999%.


> We'll need to figure out the techniques and strategies that let us merge AI code sight unseen

Every strategy which worked with an off-shore team in India works well for AI.

Sometime in mid 2017, I found myself running out of hours in the day stopping code from being merged.

On one hand, I needed to stamp the PRs because I was an ASF PMC member and not many of the folks opening JIRAs were. On the other, this wasn't a tech-debt-friendly culture, because someone from LinkedIn or Netflix or EMR could say "Your PR is shit, why did you merge it?" and "Well, we had a release due in 6 days" is not an answer.

Claude has been a drop-in replacement for the same problem, where I have to exercise the exact same muscles, though a lot easier because I can tell the AI that "This is completely wrong, throw it away and start over" without involving Claude's manager in the conversation.

The manager conversations were warranted and I learned to be nicer two years into that experience [1], but it's a soft skill which I no longer use with AI.

Every single method which worked with a remote team in a different timezone works with AI for me & perhaps better, because they're all clones of the best available - specs, pre-commit verifiers, mandatory reviews by someone uncommitted on the deadline, ease of reproducing bugs outside production and less clever code overall.

[1] - https://notmysock.org/blog/2018/Nov/17/


> Every strategy which worked with an off-shore team in India works well for AI.

Why hasn't SWE been completely outsourced over the last 20 years, then? Corporations were certainly trying hard.


Cost. Claude Code is two orders of magnitude cheaper than an offshore dev.

We are talking 20-30 years back, when offshore was (and still is) cheaper.

> * No MagSafe

For my kid who uses a Chromebook right now, MagSafe would've been an improvement in how often the power cable pulls it off the desk.

But otherwise, this checks all the boxes, including AppleCare.


In case you didn't already know or haven't considered it, you can find right-angle USB-C magnetic adaptors that basically allow the charging cable to disconnect from the device like MagSafe.

Most of these devices are a fire hazard. And an environment where kids need MagSafe is probably the one where fire safety matters most.

*edit

https://www.reddit.com/r/UsbCHardware/comments/motlhn/magnet...


So true. Regarding those magnetic USB connectors: not just a fire hazard, but they also have a tendency to eventually burn out whatever is on the other end of them, IME.

Maybe OK for supplying power if you are careful, I think; I never had any fires, knock on wood.

But it's a bummer to zap/kill the data-functionality of USB ports on nice stuff just because a non-spec connector was used in between the two things being connected, for convenience.

So I don't trust them except for conveniently connecting power to low-cost devices. Whether Neo fits that... I doubt but YMMV.


Oh, probably not enough contact pressure. Resistance is inversely proportional to contact pressure, after all.

I am now nerd sniped on the surface physics of connectors. Thanks!

Any you could recommend that are safe?

Mid 2015, I spent months optimizing Apache ORC's compression models over TPC-H.

It was easier to beat Parquet's defaults - ORC+zlib seemed to top out around the same as the default in this paper (~178 GB for the 1 TB dataset, from the Hadoop conf slides).

We got a lot of good results, but the hard lesson we learned was that scan rate is more important than size. A 16 KB read and a 48 KB read took about the same time; the CPU was busy in other parts of the SQL engine, and IO wasn't the bottleneck we thought it was.

And scan rate is not always "how fast can you decode", a lot of it was encouraging data skipping (see the Capacitor paper from the same era).

For example, when organized correctly, the entire l_shipdate column took ~90 bytes for millions of rows.
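A minimal sketch of why a sorted, low-cardinality column collapses like that (hypothetical data, not ORC's actual encoder): delta-encode the sorted values, then run-length encode the deltas.

```python
from itertools import groupby

def delta_rle(values):
    """Delta-encode a sorted integer column, then run-length encode
    the deltas. Sorted low-cardinality data collapses to a few runs."""
    deltas = [values[0]] + [b - a for a, b in zip(values, values[1:])]
    return [(d, sum(1 for _ in g)) for d, g in groupby(deltas)]

# A million-row "date" column sorted by ship date: long runs of the
# same day, so almost every delta is 0 with an occasional +1.
col = [10_000 + day for day in range(2_500) for _ in range(400)]
runs = delta_rle(col)
print(len(col), "rows ->", len(runs), "runs")
```

A million rows become a few thousand (delta, count) pairs, and a second pass over the regular alternating runs shrinks it much further - that's the shape of trick that gets a column down to almost nothing.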

Similarly, the notes column was never read at all, so dictionaries etc. were useless.

Then I learned the ins & outs of another SQL engine, which kicked the ass of every other format I'd ever worked with, without too much magical tech.

Most of what I can repeat is that SQL engines don't care what order the rows in a file are in & neither should the format writer - also that DBAs don't know which filters are the most useful to organize around & are often wrong.

Re-ordering at the row level beats any other trick with lossless columnar compression, because if you can skip a row (with, say, FSST for LIKE or CONTAINS over index values[1] instead of bytes), that is a nearly infinite improvement in scan rate and IO.

[1] - https://github.com/amplab/succinct-cpp


> correspond to a binary format in accordance with the C ABI on your particular system.

We're so deep in this hole that people are fixing this on a CPU with silicon.

The Graviton team made a little-endian version of ARM just to allow lazy code like this to migrate away from Intel chips without having to rewrite struct unpacking (& also IBM with the ppc64le).

Early in my career, I spent a lot of my time reading Java bytecode in little endian to match all the bytecode interpreter enums I had, & completely hating how 0xCAFEBABE would literally read BE BA FE CA (jokingly referred to as "be bullshit") in (gdb) x views.


ARM is usually bi-endian, and almost always run in little endian mode. All Apple ARM is LE. Not sure about Android but I’d guess it’s the same. I don’t think I’ve ever seen BE ARM in the wild.

Big endian is as far as I know extinct for larger mainstream CPUs. Power still exists but is on life support. MIPS and Sparc are dead. M68k is dead.

X86 has always been LE. RISC-V is LE.

It’s not an arbitrary choice. Little endian is superior because you can cast between integer types without pointer arithmetic and because manually implemented math ops are faster on account of being linear in memory. It’s counterintuitive, but everything is faster and simpler.

Network data and most serialization formats are big endian by convention, a legacy from the early net growing on chips like Sparc and M68k. If it were redone now everything would be LE everywhere.


> Little endian is superior because you can cast between integer types without pointer arithmetic

I’ve heard this one several times and it never really made sense. Is the argument that you can do:

    short s;
    long *p = (long*)&s;
Or vice versa and it kind of works under some circumstances?

Yes. In little-endian, the difference between short and long at a specific address is how many bytes you read from that address. In big-endian, to cast a long to a short, you have to jump forward 6 bytes to get to the 2 least-significant bytes.
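The same byte layouts can be sketched with Python's struct module (just an illustration of the two orderings, not tied to any particular ABI):

```python
import struct

value = 0x1122334455667788

# Little-endian: the least-significant bytes come first, so a narrower
# integer is simply a prefix of the wider one's byte representation.
le = struct.pack("<Q", value)                      # 8 bytes, LE
assert struct.unpack("<H", le[:2])[0] == 0x7788    # read first 2 bytes

# Big-endian: the least-significant bytes sit at the END, so you must
# skip forward 6 bytes to recover the same 16-bit value.
be = struct.pack(">Q", value)
assert struct.unpack(">H", be[6:])[0] == 0x7788
```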

Wow, I've been living life assuming that little endian was just the VHS of byte orders with no redeeming qualities whatsoever until today. This actually makes sense, thank you!

Network data and most serialization formats are big endian because it's easiest to shift bits in and out of a shift register onto a serial comm channel in that order. If you used little endian, the shifter on output would have to operate in reverse direction relative to the shifter on input, which just causes stupid inconsistency headaches.
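A toy model of that transmit path (a hypothetical helper, just to show the ordering): shifting the most-significant byte out first yields network byte order directly, with no reversal step.

```python
def shift_out_msb_first(value: int, nbytes: int) -> bytes:
    """Emit bytes high-order first, the way a transmit shift register
    pushes the most-significant bits onto the wire first."""
    out = bytearray()
    for i in reversed(range(nbytes)):
        out.append((value >> (8 * i)) & 0xFF)
    return bytes(out)

# Matches big-endian (network) byte order with no extra reordering.
assert shift_out_msb_first(0x1122, 2) == b"\x11\x22"
```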

Isn't the issue with shift registers related to endianness at the bit level, while the discourse above is about endianness at the byte level? Both are pretty much entirely separate problems

GCC supports specifying endianness of structs and unions: https://gcc.gnu.org/onlinedocs/gcc-15.2.0/gcc/Common-Type-At...

I'm not sure how useful it is, but it was only added 10 years ago with GCC 6.1 (recent-ish in the world of arcane features like this, and only just now something you could reasonably rely upon existing in all enterprise environments), so it seems some people thought it would still be useful.


I thought all iterations of ARM were little endian, even going back as far as ARM7, same as x86?

The only big-endian popular arch in recent memory is PPC


AFAIK ARM is generally bi-endian, though systems using BE (whether BE32 or BE8) are few and far between.

It started as LE and added bi-endian with v3.

ARM has always been little-endian. Some were configurable endian.

And it's not a hole. We're not about to spend 100 cycles parsing a decimal string that could have been a little-endian binary number, just because you feel a dependency on a certain endianness is architecturally impure. Know what else is architecturally impure? Making binary machines handle decimal.


> The Graviton team made a little-endian version of ARM just to allow lazy code like this to migrate away from Intel chips without having to rewrite struct unpacking

No? Most ARM is little endian.


I would question why is it big endian in the first place. Little endian is obviously more popular, why use big endian at all?

Fuck, the stupidity of humans really is infinite.

> The story isn’t “MCP was a fake marketing thing pushed on us”. It’s a story of how quickly the meta evolves.

The original take was that "We need to make tools which an AI can hold, because they don't have fingers" (like a quick-switch on a CNC mill).

My $job has been generating code for MCP calls, because we found that MCP is not a good way to take actions from a model - it is hard to make it scriptable.

It definitely does a good job of progressively filling a context window, but changing things is often multiple operations + a transactional commit (or rename) on success.

We went from asking a model to "change this" to "write me a reusable script to change this" & running it with the right auth tokens.


> a clear explanation of a compression algorithm

The Huffman tree, LZ77 and LZMA explanation is truly excellent for how concise it is.

The earlier Veritasium video on Markov chains is itself linked, if you don't know what a Markov chain is.

I expected Veritasium to tank when it got sold to private equity & Derek went to Australia, but I've been surprised by the quality of the long-form stuff churned out by Casper, Petr, Henry & Greg.


I liked the presentation about paint mixing; however, I think it is not impossible to find the missing key paint given the public paint and the message paint. But still, this is really close to what RSA is.


The paint mixing is actually way closer to the idea of the Diffie-Hellman key exchange than to RSA.


I couldn't help but notice it was quite similar to a Numberphile video 8 years earlier...

https://youtu.be/NmM9HA2MQGI

