Distributed AI inference pool for any Mac/iOS device, where devices are paid for contributing unused RAM. To help meet demand, we're also doing multiplayer AI. $0.05/million tokens - teale.com
Anecdotal here, but I've been buying Studios with as much RAM as possible. I probably got in one of the last orders for the M3 Ultra with 512GB of RAM officially from Apple in late February, right before they took that config down. I know this because the very next day I went back to order more and couldn't.
There's a ton of spam on eBay currently, with 512GB RAM configs going for sub-$4k. Those are all scams/spam. You can tell instantly because the seller's account was created this month (2026).
The $20k+ listings aren't selling, but what's interesting is that technically, $20k for 512GB of RAM is still under $40/GB on a single device. Compare that to an Nvidia Spark with 128GB of RAM selling at $4k+ (or ~$3,500 if you're lucky): that's $31.25/GB.
Retail on the 512GB config was $9,500, and after tax (unless you ship to a state with no sales tax) that works out to only about $20/GB of RAM.
I'm curious what Apple will price its newest Studio variant at per GB of RAM, but I recently bought a "refurb" for $13,750 from a reputable seller I've purchased many laptops from in the past, which still puts the 512GB config at roughly $27/GB.
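The $/GB comparisons above can be sketched in a few lines. The prices and capacities are the figures quoted in the thread; the ~8% sales tax on the retail price is an illustrative assumption, not a number from the comment.

```python
# Price-per-GB-of-RAM comparison for the configs mentioned above.
# Tax rate of 8% is an assumed example; actual sales tax varies by state.
configs = {
    "eBay $20k listing, 512GB M3 Ultra": (20_000, 512),
    "Nvidia Spark, 128GB": (4_000, 128),               # exactly $31.25/GB
    "Apple retail + ~8% tax, 512GB": (9_500 * 1.08, 512),
    "Reputable refurb, 512GB": (13_750, 512),
}

for name, (price, gb) in configs.items():
    print(f"{name}: ${price / gb:,.2f}/GB")
```

Even the "overpriced" $20k listings come in under $40/GB, which is why the comparison against the Spark's $31.25/GB is closer than it first looks.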
I'm bullish on local models, especially because of Gemma 4. Good luck out there!
> but I've been buying studios with as much ram as possible
Why, though? Are you managing a fleet of headless Mac Studios? You mention LLMs, but today's Mac hardware really isn't great as soon as you use slightly larger contexts, so I'm guessing it's not that either?
But with macOS hardware? What kind of calculations did you do that show this makes sense? 6-7 months ago I looked into the same thing, getting a bunch of Apple hardware to do local LLM inference (for a client), but the poor performance plus high pricing made it completely infeasible compared to Nvidia hardware.
Now I'm really curious how you got that to make sense in reality.
250+ employees on the lowest-paid Claude or OpenAI subscriptions is $5k/month. Their AI usage is chatbot-style most of the time, which local models/inference can easily handle. So just being able to cancel those subscriptions with local hardware makes my break-even on a single $10k Mac Studio two months, and honestly, for their use, the Mac Studio is overkill.
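The break-even in that comment works out as follows. The $20/seat entry tier is implied by the $5k/month figure for 250 employees, not stated directly, so treat it as an assumption:

```python
# Back-of-the-envelope break-even from the comment above.
employees = 250
seat_cost = 20                  # $/month per seat, assumed entry-tier price
monthly_spend = employees * seat_cost      # $5,000/month in subscriptions
mac_studio_cost = 10_000                   # one 512GB Mac Studio

break_even_months = mac_studio_cost / monthly_spend
print(break_even_months)  # 2.0
```

This ignores electricity, admin time, and the fact that one Studio may not serve 250 concurrent users, so it's a best-case floor rather than a full TCO comparison.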
A bunch of ideas that have had domains but never enough engineers. Now it seems there isn't enough time, except when I've hit my LLM subscription limits and they need to cool down.
Already launched biz-in-a-box.org and a life-in-a-box.org spinoff as frameworks to replace every entity's QuickBooks. I'm using them myself for every project my agents spin up.
The stealth project is related to ClassPass but for another category of need that won't go away even in the age of AI, one that's really only possible with a critical mass of supply to meet existing demand. Super excited because there's no better time to build with unlimited agents that scale without people problems.
Lastly, I can't wait to run local LLMs so I'm no longer limited by tokens/money.
996 - it's a global marketplace of talent, and very few in America are willing to work 996. If you are, you're either the founder type or young and unshackled.
This is my experience as well. For a lot of these people, it's performative. Just kind of existing at work for 12 hours counts as "getting work done" to them, despite the fact that it's life-destroying for pretty much everyone who has an actual life to live.
Labor laws in California make this infeasible (for good reason). The kinds of jobs adopting 996 won't pay enough for exempt status, and a 996 schedule (9am-9pm, six days a week) is 72 hours, i.e. 32 hours of overtime per week.
Fyxed deploys capital to landlords and their property managers for cash needs like repairs, turnovers, and improvements on rental properties.
We're at an inflection point where we have the deployment capital ($50M) to really take a big swing at this opportunity.
The perfect candidate has prior fintech (lending) experience, loves wrangling the complexities of reconciling and tracking money, and gets bonus points for having built on top of Column.com's API.
To add to the post: our humans are all over the world, with folks in the States, LatAm, the Philippines, and other SE Asian countries. They primarily do accounting/bookkeeping tasks today (for us at APM Help, which I founded).
I'm working on github.com/teale-ai (distributed inference).