Hacker News | abrichr's comments

ChatGPT Deep Research dug through Taalas' WIPO patent filings and public reporting to piece together a hypothesis. Next Platform notes at least 14 patents filed [1]. The two most relevant:

"Large Parameter Set Computation Accelerator Using Memory with Parameter Encoding" [2]

"Mask Programmable ROM Using Shared Connections" [3]

The "single transistor multiply" could be multiplication by routing, not arithmetic. Patent [2] describes an accelerator where, if weights are 4-bit (16 possible values), you pre-compute all 16 products (input x each possible value) with a shared multiplier bank, then use a hardwired mesh to route the correct result to each weight's location. The abstract says it directly: multiplier circuits produce a set of outputs, readable cells store addresses associated with parameter values, and a selection circuit picks the right output. The per-weight "readable cell" would then just be an access transistor that passes through the right pre-computed product. If that reading is correct, it's consistent with the CEO telling EE Times compute is "fully digital" [4], and explains why 4-bit matters so much: 16 multipliers to broadcast is tractable, 256 (8-bit) is not.
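If that reading is right, the dataflow can be sketched in a few lines. This is my own toy model of the scheme described in the abstract, not code from the patent; all names are mine:

```python
# Hypothetical sketch of "multiplication by routing" per patent [2]:
# one shared bank computes all 2^bits possible products of the input,
# and each weight position merely selects (routes) the right one.

def shared_multiplier_bank(x, bits=4):
    # 16 real multiplies for 4-bit weights, broadcast to every
    # weight position that consumes this input.
    return [x * w for w in range(2 ** bits)]

def route_product(products, weight):
    # The per-weight "readable cell": no arithmetic here, just an
    # access that passes through the pre-computed product addressed
    # by the hardwired weight value.
    return products[weight]

def dot(xs, weights, bits=4):
    acc = 0
    for x, w in zip(xs, weights):
        products = shared_multiplier_bank(x, bits)
        acc += route_product(products, w)
    return acc

print(dot([3, 5], [2, 7]))  # 3*2 + 5*7 = 41
```

This also makes the bitwidth sensitivity concrete: the bank costs 2^bits multiplies per input, so 4-bit weights need 16 broadcast products while 8-bit would need 256.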

The same patent reportedly describes the connectivity mesh as configurable via top metal masks, referred to as "saving the model in the mask ROM of the system." If so, the base die is identical across models, with only top metal layers changing to encode weights-as-connectivity and dataflow schedule.

Patent [3] covers high-density multibit mask ROM using shared drain and gate connections with mask-programmable vias, possibly how they hit the density for 8B parameters on one 815mm2 die.

If roughly right, some testable predictions: performance very sensitive to quantization bitwidth; near-zero external memory bandwidth dependence; fine-tuning limited to what fits in the SRAM sidecar.

Caveat: the specific implementation details beyond the abstracts are based on Deep Research's analysis of the full patent texts, not my own reading, so could be off. But the abstracts and public descriptions line up well.

[1] https://www.nextplatform.com/2026/02/19/taalas-etches-ai-mod...

[2] https://patents.google.com/patent/WO2025147771A1/en

[3] https://patents.google.com/patent/WO2025217724A1/en

[4] https://www.eetimes.com/taalas-specializes-to-extremes-for-e...


LSI Logic and VLSI Technology used to do such things in the 1980s -- they produced a quantity of "universal" base chips, and then relatively inexpensively and quickly customized them for different uses and customers by adding a few interconnect layers on top. Like hardwired FPGAs. Such semi-custom ASICs were much less expensive than full custom designs, and one could order them in relatively small lots.

Taalas of course builds base chips that are already closely tailored for a particular type of model. They aim to generate the final chips, with the model weights baked into ROMs, within two months of the weights becoming available. They hope the hardware will be profitable for at least some customers even if the model is only good for a year. Assuming they do get superior speed and energy efficiency, this may be a good idea.


It could simply be bit serial. With 4-bit weights you only need four serial addition steps, which is not an issue if the weights are stored nearby in a ROM.
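For illustration, a bit-serial multiply is just shift-and-add over the weight bits, one bit per step (a hypothetical sketch, not anything from Taalas):

```python
# Bit-serial multiply: walk the weight one bit per cycle, so each
# step needs only an adder and a shift -- four steps for 4-bit weights.

def bitserial_mul(x, w, bits=4):
    acc = 0
    for i in range(bits):      # one serial step per weight bit
        if (w >> i) & 1:       # bit i of the weight, read from the nearby ROM
            acc += x << i      # add the correspondingly shifted input
    return acc

print(bitserial_mul(13, 11))  # 143
```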

Source? How recently?


You can reach much higher spend through the API (which you can configure `claude` to use).


> Like, Copilot could watch my daily habits and offer automation for recurring things.

We're working on it at https://github.com/openadaptai/openadapt.




> But that one year of knowledge building, distribution, early customers, then dominates who has control over the cap table for a decade.

Can you recommend any resources for learning how to do this work yourself?


I'd recommend:

- The Lean Startup: https://www.amazon.com/Lean-Startup-Entrepreneurs-Continuous...

- Amy Hoy's blog: https://stackingthebricks.com/

- working in the domain you are interested in for a year. Go be a realtor. Go work in a lawncare business. etc. etc.

- read Patrick's advice about how he found customers for Appointment Reminder: https://www.kalzumeus.com/2010/05/14/unveiling-my-second-pro...

edit: added link to Patrick's piece


One shortcut would be to:

1. pick a vertical (construction, legal, accounting, etc)

2. look at how to apply AI (compliance, voice customer service, email)

3. find salespeople from a would-be competitor or old-school B2B SaaS in that area, and pay them $150 an hour to tell you what customers are paying for

4. using that sales guidance, come up with the product and either do the sales yourself before building anything, or pay someone to do cold calls and pitch the product

5. build the product


> While the LLMs get to blast through all the fun, easy work at lightning speed, we are then left with all the thankless tasks: testing to ensure existing functionality isn’t broken, clearing out duplicated code, writing documentation, handling deployment and infrastructure, etc.

I’ve found LLMs just as useful for the "thankless" layers (e.g. tests, docs, deployment).

The real failure mode is letting AI flood the repo with half-baked abstractions without a playbook. It's helpful to have the model review the existing code and plan out the approach before writing any new code.

The leverage may be in using LLMs more systematically across the lifecycle, including the grunt work the author says remains human-only.


That's my experience as well. The LLM is great for what I consider scaffolding. I can describe the architecture I want, some guidelines in CLAUDE.md, then let it write a bunch of stubbed out code. It saves me a ton of time typing.

It's also great for things that aren't creative, like "implement a unit test framework using Google Test and CMake, but don't actually write the tests yet". That type of thing saves me hours and hours. It's something I rarely do, so it's not like I can just start editing my CMake and test files; I'd be looking up documentation and writing a lot of boilerplate that's necessary but time-consuming.

With LLMs, I usually get what I want quickly. If it's not what I want, a bit of time reviewing what it did and where it went wrong usually tells me how to write a better prompt.



Corporate income taxes are treated differently than personal income taxes. You absolutely can deduct corporate expenses in Canada.


I didn't mention Canada in order to nonsensically compare corporate taxes in the USA versus personal income tax in Canada, but because it's a good idea to mention where you are if you're writing about taxes.

Yes, corps get all kinds of tax breaks, whereas individuals don't.


As a Canadian I still don’t understand what the point of your original comment was. We already know this thread is about a US rule.

