Hacker News | rattray's comments

I've been very curious about that too. I wonder if it's actually much better at admitting when it doesn't know something, because it thinks it's a "dumber model". But I haven't played with this at all myself.

Has anyone tried both jj and gitbutler extensively yet? Both seem to have some interesting new ideas on top of git.

I was hoping someone else had written about it here.

As far as I know, there are three interesting takes on git currently being worked on: jj, GitButler, and Zed.

Zed's version system doesn't have much public info yet, but they wanted to build a db for storing code versions for AI agents. Not sure if this is still the direction, and I'm a bit skeptical, but interested to see what they come up with.

Even though git works well enough, I'm certain there will be another preferred way at some point in the future. There are aspects of git that are simply not intuitive, and the CLI itself is not up to the standard of today's DX.


What's your naming scheme?

REPO_NAME_0

REPO_NAME_1

REPO_NAME_2

REPO_NAME_3


Abandoned open-source projects with poor code quality are nothing new.

The merits of any project are yours to evaluate.

I see some encouraging thoughtfulness here. That said, it's true that most projects like this don't achieve liftoff.


This seems awesome. It seems to address many of my armchair complaints about both Go (inexpressive) and Rust (bloated/complex).

I'm curious what compilation times are like. Are there theoretical reasons it'd be an order of magnitude slower than Go? I assume it does much less than the Rust compiler...

Relatedly, I'd be curious to see a list of the things from Rust this doesn't include, ideally in the docs. E.g., I assume borrow checking, various data types, and maybe async are intentionally omitted?


Some details on how it works from a code comment:

Problem: DOM-based text measurement (getBoundingClientRect, offsetHeight) forces synchronous layout reflow. When components independently measure text, each measurement triggers a reflow of the entire document. This creates read/write interleaving that can cost 30ms+ per frame for 500 text blocks.

Solution: two-phase measurement centered around canvas measureText.

prepare(text, font) — segments text via Intl.Segmenter, measures each word via canvas, caches widths, and does one cached DOM calibration read per font when emoji correction is needed. Call once when text first appears.

layout(prepared, maxWidth, lineHeight) — walks cached word widths with pure arithmetic to count lines and compute height. Call on every resize. ~0.0002ms per text.

https://github.com/chenglou/pretext/blob/main/src/layout.ts


What about hyphenation?



Regardless of the subject matter, the tweets announcing this are a masterclass in demonstrating how impactful an architectural/platform improvement can be.


Wow. How fast is Haiku?


> For our first experiment, we used ClickBench, an analytical database benchmark. ClickBench has 43 queries that focus on aggregation and filtering operations. The operations run on a single wide table with 100M rows, which uses about 14 GB when serialized to Parquet and 75 GB when stored in CSV format.

very much so…


Curious, what do you use it for?


Huge local thinking LLMs for solving math and for general assistant-style tasks. Models like Kimi-2.5-Q3, DeepSeek-XX-Q4/Q5, Qwen-3.5-Q8, MiniMax-m2.5-Q8, etc. bring me into Claude4/GPT5 territory without any cloud. For coding I have another machine with 3x RTX Pro 6000 (mostly Qwen subvariants), and for image/video/audio generation I have 2x DGX Sparks from ASUS.


We must be twins; I've got the same three working in a cluster.

I was really excited to see where the GB300 desktops would end up with 768 GB of RAM, but now that data is leaking / popping up (Dell appears to only be 496 GB), we may be in the $60-100k range, and that's well out of my comfort zone.

If Apple came out with a 768 GB Studio at $15k, I'd bite in a heartbeat.

https://www.dell.com/en-us/lp/dell-pro-max-nvidia-ai-dev


Yeah, I didn't want to spend more than $50k on a local inference stack. I can amortize it in my taxes, so it's not a big deal, but beyond that it would start eating into my other allocations. I might still get an M5 Ultra if it pops up and the benchmarks look good, possibly selling the M3 Ultra.


Probably an Electron app or two.

