Would you elaborate on what you mean by saying Linux on an M-series chip isn't straightforward? That's not been my experience: I (and lots of other devs) use it every day, Apple supports Linux via [0], and even provides the ability to use Rosetta 2 within VMs to run legacy x86 binaries.

0: https://github.com/apple/container
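
For anyone who hasn't tried it, the CLI from [0] is docker-like; a minimal sketch (flags mirror docker's, so check container run --help on your version):

    container system start
    container run --rm docker.io/library/alpine:latest uname -a   # a Linux ARM64 guest, not Darwin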


Clearly I'm not as knowledgeable about this as I thought I was. I already have an Ubuntu x86 VM running on an Intel Mac (inside VirtualBox), and the same with Windows 11. Can this tool allow me to run both VMs on an Apple Silicon device in a performant way? Last I checked, VirtualBox on Apple Silicon only permits running ARM64 guests.

While I have a preference for VirtualBox I'd say I'm hypervisor agnostic. Really any way I can get this to work would be super intriguing to me.


> Can this tool allow me to run both VMs in an Apple Silicon device in a performant way?

I use VMware Fusion on an M1 Air to run ARM Windows. Windows is then able to run Windows x86-64 executables, I believe through its own Rosetta 2-like implementation. The main limitation is that you cannot use x86-64 drivers.

Similarly, ARM Linux VMs can use Rosetta 2 to run x86-64 binaries with excellent performance. For that I mostly use Rancher or podman, which set up the Linux VM automatically, and then use it to run ARM Linux containers. I don't recall if I've tried to run x86-64 Linux binaries inside an ARM Linux container; it might be a little trickier to get Rosetta 2 to work there. It's been a long time since I tried to run an x86-64 Linux container.


Possible catch: Rosetta 2 goes away next year in macOS 27.

I don’t know what the story for VMs is. I’d really like to know as it affects me.

Sure you can go QEMU, but there’s a real performance hit there.


Not until macOS 28, but you're right: it's frustratingly unclear whether the initial deprecation is limited to macOS apps or whether it will also stop working for VMs.

https://support.apple.com/en-us/102527

https://developer.apple.com/documentation/virtualization/run...


This can be avoided by not upgrading to macOS 28, right? I'm new to Macs and the Apple release schedule, so I'm not sure how mandatory the annual updates are.

Does Apple Silicon support VMs within VMs?

What if you run macOS 27 in a VM, and then run the x86-hosting VM inside that?


It would be pretty difficult for Apple to disable Rosetta for VMs.

How so?

It doesn’t require anything from the host.

The Apple documentation for using the Virtualization framework with ARM Linux VMs to run x86_64 binaries requires Rosetta to be installed:

https://developer.apple.com/documentation/virtualization/run...
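
For reference, the guest-side setup that page describes is roughly this (the virtiofs tag and mount point follow Apple's example; double-check the doc before copying):

    # inside the ARM Linux guest; "rosetta" is the virtiofs share tag
    # the host configured via the Virtualization framework
    sudo mkdir -p /media/rosetta
    sudo mount -t virtiofs rosetta /media/rosetta
    # register the shared rosetta binary as the binfmt_misc handler
    # for x86-64 ELF executables
    sudo /usr/sbin/update-binfmts --install rosetta /media/rosetta/rosetta \
        --magic "\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00" \
        --mask "\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff" \
        --credentials yes --preserve no --fix-binary yes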

So you must be talking about something else, perhaps ARM Windows VMs, which use their own technology for running x86 binaries[^1].

In any case, please elaborate instead of being so vague. Thanks.

[^1]: https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-x8...


You can just splat whatever support files it needs into the VM; there isn't anything special about them. In fact, you can copy them onto a different (non-Mac) device and use them there too.

It never existed.


Oh I have another year? Phew.

> Last I checked VirtualBox on Apple Silicon only permits the running of ARM64 guests.

I used to use VirtualBox a lot back in the day. I tried it recently on my Mac; it's become pretty bloated over the years.

On the other hand, this GUI for QEMU is pretty nice [1].

[1]: https://mac.getutm.app


Run ARM64 Linux and install Rosetta inside it. Even on the MacBook Neo it'll be faster than your 2020 Intel Mac.

https://github.com/abiosoft/colima

This is a super easy way to run Linux VMs on Apple Silicon. It can also act as a backend for Docker.
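
A minimal sketch (both flags are real colima options; the vz backend needs macOS 13+):

    colima start --vm-type vz --vz-rosetta
    docker run --rm --platform linux/amd64 alpine uname -m   # prints x86_64, via Rosetta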


Pay Parallels for their GPU acceleration, which makes ARM Windows on Apple Silicon usable.

GPU encoding is fast, but it usually produces poorer-quality results because it avoids trying paths that are hard to do quickly on the GPU.

If you want to optimise, try different encoders (sounds like you've already done some of this) and lots of different settings. It'll involve a lot of tuning to figure out the right balance between quality, speed and size for your particular media, while also making your machine hurt as much as possible.
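
To make that concrete, a hedged sketch of the comparison loop I mean (standard ffmpeg options; the quality values are just starting points, and the VMAF step needs an ffmpeg built with libvmaf):

    # software encode: slow but usually better quality per bit
    ffmpeg -i sample.mkv -c:v libx265 -preset slow -crf 22 sw.mp4
    # hardware encode via VideoToolbox: fast, typically worse quality per bit
    ffmpeg -i sample.mkv -c:v hevc_videotoolbox -q:v 55 hw.mp4
    # score each encode against the source (distorted input comes first)
    ffmpeg -i sw.mp4 -i sample.mkv -lavfi libvmaf -f null -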

Drive-by 2c as a video industry person: don't retranscode your media unless you've got it in a really space-inefficient codec and you're seriously hurting for space. You'll burn a lot of power retranscoding; are you actually saving useful $$$ of storage in exchange for that spend? Storage is cheap, and there's always a better codec coming along that you could retranscode into to save some more space. It's a vicious cycle: each generation has to encode the artifacts from the previous generations.


Sounds like the pitch writes itself, "you'd better spend a lot of token money with us before the bad guys do it to you..."


I couldn't parse the intended meaning from "lack of commitment to no longer committing crimes", so here's a response that just answers the question raised.

Do you regard the justice system as a method of rehabilitating offenders and returning them to society to try to be productive members of it, or do you consider it to be a system for punishment? If the latter, is it Just for society to punish somebody for the rest of their life for a crime, even when the criminal justice system considers them safe to release?

Is there anything but a negative consequence to allowing a spent conviction to limit people's ability to work, or to own/rent a home? We already have carve-outs for sensitive positions (e.g. working with children or vulnerable adults).

Consider what you would do in that position if you had genuinely turned a corner but were denied access to jobs you're qualified for.


I assume this is a jailbreak/exfiltration detection condition triggering. I wonder if it would do the same if you started speaking to it in base64.


>does this limit the agent's ability to run standard Linux tooling? Or are you relying on the AI to just figure out the BSD/macOS equivalents of standard commands?

Slightly counterintuitively, Apple Containers spawns Linux VMs.

There doesn't appear to be any way to spawn a native macOS container... which is a pity; it'd be nice to have ultra-low-overhead containers on macOS (but I suspect all the interesting macOS stuff relies on a bunch of services/GUI access that'd make it not-lightweight anyway).

FYI: it's easy enough to install GNU tools with Homebrew. Technically there's a risk of problems if applications spawn command-line tools and expect the BSD args/output, but I've not run into any issues in the several years I've been doing it.
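
If it helps anyone, a minimal sketch (the gnubin layout is standard for Homebrew's GNU packages):

    brew install coreutils gnu-sed
    # Homebrew installs the tools with a "g" prefix (gls, gsed, ...);
    # prepend the gnubin dir to get the unprefixed GNU names on PATH:
    export PATH="$(brew --prefix coreutils)/libexec/gnubin:$PATH"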


For my inputs, Whisper distil-large-v3.5 is the best. I tried Parakeet 0.6B v3 last night, but it has higher error rates than I'd like (it is fast, though...).


Nice, I'll try it. As of now, for my personal STT workflow I use the ElevenLabs API, which is pretty generous, but I'm curious to play around with other options.


I assume that will be better than Whisper. I haven't benchmarked it against cloud models; the project I'm working on cannot send data out to them.


Oh, I've been looking into Whisper and Vosk in the last few days. I'll probably go with Whisper (via whisper.cpp), but has anyone compared it to Vosk models?
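
FWIW the whisper.cpp CLI is about this simple (the model name is just an example; recent builds call the binary whisper-cli, older ones main):

    sh ./models/download-ggml-model.sh large-v3-turbo
    ./build/bin/whisper-cli -m models/ggml-large-v3-turbo.bin -f audio.wav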


>In June 2025, 56% of people in Great Britain thought it was the wrong decision

It's not so clear-cut when you consider that 48.1% of the original referendum voters wanted to stay in the EU. I'm honestly very surprised by this poll; an 8-point swing is pretty minimal considering the turmoil the country has gone through since 2016.

How much of this can be explained by older voters dying in the intervening 10 years? I recall that demographic skewed much more heavily Leave in 2016.


Half the issue is the definition of ‘voter’. Turnout is abysmal and polling has been crap in major ways. Calling someone eligible to vote a ‘voter’ is probably only right 50-60% of the time.

https://commonslibrary.parliament.uk/general-election-2024-t...


>And what do you even mean by "prepare"?

Not the person you're responding to, but... if you think it's a horse -> car change (and, to stretch the metaphor, if you think you're in the business of building stables) then preparation means retraining in another profession.

If you think it's a hand tools -> power tools change, learn how to use the new tools so you don't get left behind.

My opinion is that it's a hand tools -> power tools change, and that LLMs give me the power to solve more problems for clients, and to do it faster and more predictably than a client could achieve on their own with an LLM. I hope I'm right :-)


That's a good analogy. I'm on team hand tools to power tools too.


Why do you suppose that these tools will conveniently stop improving at a point where they increase your productivity but are still too much for your clients to use for themselves?


Because I've seen how difficult it is to get a client to explain to me what they need their software to do.


And so the AI will develop the skills to interview the client and determine what they really need. There are textbooks written on how to do this; it's not going to be hard to incorporate into the training.


If they're using Opus then it'll be the $100/month Claude Max 5x plan (or the more expensive 20x plan, depending on how intensive their use is). It does consume a lot of tokens, but I've been using the $100/mo plan and get a lot done without hitting limits. It helps to be mindful of context (regularly amending/pruning your CLAUDE.md instructions, clearing context between tasks, sizing your tasks to stay within the Opus context window). Claude Code plans have token limits that work in 5-hour blocks (which start when you send your first token, so it's often useful to prime it as early in the morning as possible).
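
On the context point, the in-session commands I lean on (real Claude Code slash commands, though the set changes between versions; see /help):

    /clear     # drop the conversation context entirely between tasks
    /compact   # summarise the conversation so far and prune the rest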

Claude Code will spawn sub-agents (that often use the cheap Haiku model) for exploration and planning tasks, with only the results imported into the main context.

I've found the best results come from a more interactive collaboration with Claude Code. As long as you describe the problem clearly, it does a good job on small/moderate tasks. I generally set two instances of Claude Code separate tasks and run them concurrently. (Unlike handing a task to a colleague, interacting with Claude Code distracts me too much to do my own independent coding at the same time, but I can work on architecture/planning tasks.)

The one matter of taste I've had to compromise on is the sheer amount of code: it likes to write a lot of it. I have a better experience if I sweat the low-level code less and just periodically have it clean up areas where I think it's written too much or overly repetitive code.

As you give it more freedom it's more prone to failure (and can often get itself stuck in a fruitless spiral); however, as you use it more, you get a sense of what it can do independently and what it's likely to choke on. A codebase with good human-designed unit and Playwright tests helps a lot.

Crucially, you get the best results where your tasks are complex but on the menial side of the spectrum - it can pay attention to a lot of details, but on the whole don't expect it to do great on senior-level tasks.

To give you an idea, in a little over a month "npx ccusage" shows that via my Claude Code 5x sub I've used 5M input tokens, 1.5M output, 121M cache-create and 1.7B cache-read. The estimated pay-as-you-go API cost equivalent is $1500. (N.B. for the tail end of December they doubled everybody's API limits, so I was using a lot more tokens on more experimental, on-the-fly tool-construction work.)
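
If you want to check your own numbers, ccusage reads Claude Code's local logs; the subcommands below exist as of recent versions:

    npx ccusage            # daily token/cost breakdown (the default)
    npx ccusage monthly    # monthly totals
    npx ccusage blocks     # usage per 5-hour billing block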


FYI, Opus is available and pretty usable in claude-code on the $20/mo plan if you are at all judicious.

I exclusively use opus for architecture / speccing, and then mostly Sonnet and occasionally Haiku to write the code. If my usage has been light and the code isn't too straightforward, I'll have Opus write code as well.


The problem with current approaches is the lack of feedback loops with independent validators that never lose track of the acceptance criteria. That's the next level that will truly allow no-babysitting implementations that are feature-complete and production-grade. Check out this repo that offers that: https://github.com/covibes/zeroshot/


That's helpful to know, thanks! I gave Max 5x a go and didn't look back. My suspicion is that Opus 4.5 is subsidised, so it's good to know there's flexibility if prices go up.


The $20 plan for CC is good enough for 10-20 minutes of Opus every 5 hours, and you’ll be out of your weekly limit after 4-5 days if you sleep during the night. I wouldn’t be surprised if Anthropic actually makes a profit here. (Yeah, probably not, but they aren’t burning cash.)

