musictubes's comments | Hacker News

It isn’t clear to me that Apple will ever pursue their own chatbot like Gemini, ChatGPT, etc. There’s lots of potential for on device AI functions without it ever being a general purpose agent that tries to do everything. AI and LLMs are not synonymous.

From a UX perspective, they already have Siri for that.

There are some visualizers in the Mac App Store. I'm using Ferromagnetic right now and like it well enough. There are still visualizers in Apple Music left over from the iTunes days but they're kind of lame.


I stumbled onto one years ago by accident, maybe an Easter egg or something. I came back to my computer (Mac) after several hours of iTunes playback to see a hitherto unknown visualization running, with fairly primitive-looking graphics by today's standards. It was not any of the visualizations available in iTunes at the time.

I filed a bug on it with Apple and they got back to me asking how the hell I had invoked this, because they'd never seen it before. Never did get to the bottom of it.


Intentional pun?


That article points out that GB5 and GB6 test multi-core differently. The author notes that GB6 is supposed to approach performance the way most consumer programs actually work. GB5 is better suited for testing things like servers where every core is running independent tasks.

The only “evidence” they give that GB6 is “trash” is that it doesn’t show increasing performance with more and more cores with certain tests. The obvious rejoinder is that GB6 is working perfectly well in testing that use case and those high core processors do not provide any benefit in that scenario.

If you’re going to use synthetic benchmarks, it’s important to use one that reflects your actual use case. Sounds like GB6 is a good general-purpose benchmark for most people. It doesn’t make any sense for server use, and maybe it isn’t useful for some other use cases either, but that doesn’t make GB6 trash.


> The only “evidence” they give that GB6 is “trash” is that it doesn’t show increasing performance with more and more cores with certain tests. The obvious rejoinder is that GB6 is working perfectly well in testing that use case and those high core processors do not provide any benefit in that scenario.

The problem with this rejoinder is, of course, that you are then testing applications that don't use more cores while calling it a "multi-core" test. That's the purpose of the single-core test.

Meanwhile "most consumer programs" do use multiple cores, especially the ones you'd actually be waiting on. 7zip, encryption, Blender, video and photo editing, code compiles, etc. all use many cores. Even the demon scourge JavaScript has had thread pools for a while now and on top of that browsers give each tab its own process.

It also ignores how people actually use computers. You're listening to music with 30 browser tabs open while playing a video game and the OS is doing updates in the background. Even if the game would only use 6 cores by itself, that's not what's happening.


OK, I had time to read through this, and yeah, I agree: a multi-core test should not be waiting on so much shared state.

There are examples of programs that aren't totally parallel or serial; they'll scale to maybe 6 cores on a 32-core machine. But there's so much variation in that, I don't know how you'd pick the right amount of sharing, so the only reasonable thing to test is something embarrassingly parallel, or close to it. Geekbench 6's scaling curve is way too flat.
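If it helps, here's roughly what I mean by the scaling curve, as a toy Python sketch (the spin workload is invented): run the same embarrassingly parallel job at 1, 2, 4, ... workers and watch the speedup. A curve as flat as GB6's on a many-core chip is the complaint.

    # Sketch: speedup of an embarrassingly parallel job vs. worker count.
    import os, time
    from concurrent.futures import ProcessPoolExecutor

    def spin(n: int) -> int:
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        cores = os.cpu_count() or 4
        jobs = [2_000_000] * (cores * 2)
        base = None
        workers = 1
        while workers <= cores:
            t0 = time.perf_counter()
            with ProcessPoolExecutor(max_workers=workers) as pool:
                list(pool.map(spin, jobs))
            dt = time.perf_counter() - t0
            base = base or dt
            print(f"{workers:3d} workers: {dt:6.2f}s  speedup {base / dt:4.1f}x")
            workers *= 2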


Yeah. I think it might even be worse than that.

The purpose of a multi-core benchmark is that if you throw a lot of threads at something, it can move where the bottleneck is. With one thread, neither a desktop nor an HEDT processor is limited by memory bandwidth; with max threads, maybe the first one is and the second one isn't. With one thread, everything is running at the boost clock; with max threads, everything may be running at the base clock. So the point of distinguishing them is that you want to see to what extent a particular chip stumbles when it's fully maxed out.

But tanking the performance with shared state will load up the chip without getting anything in return, which isn't even representative of the real workloads that use an in-between number of threads. The 6-thread consumer app isn't burning max threads on useless lock contention; it just has 6 active threads. If you have something with 32 cores and 64 threads, a 5GHz boost clock, and a 2GHz base clock, it's going to be running near the boost clock if you only put 6 threads on it.

It's basically measuring the performance you'd get from a small number of active threads at the level of resource contention you'd have when using all the threads. That almost never happens in real-world cases, because those situations are typically alternatives to each other rather than things that happen at the same time.
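A toy Python illustration of that failure mode, in case it's unclear (all numbers invented): do the same count of increments through one shared, locked counter versus privately per worker. The shared version lights up every core while the lock serializes everything.

    # Toy: same total work, shared locked counter vs. private counters.
    import multiprocessing as mp
    import time

    def shared_worker(counter, lock, n):
        for _ in range(n):
            with lock:            # every increment contends on one lock
                counter.value += 1

    def private_worker(n):
        local = 0
        for _ in range(n):        # no sharing, no contention
            local += 1
        return local

    if __name__ == "__main__":
        n, procs = 200_000, mp.cpu_count()

        counter, lock = mp.Value("i", 0), mp.Lock()
        t0 = time.perf_counter()
        ps = [mp.Process(target=shared_worker, args=(counter, lock, n))
              for _ in range(procs)]
        for p in ps: p.start()
        for p in ps: p.join()
        print(f"shared+lock: {time.perf_counter() - t0:.2f}s")

        t0 = time.perf_counter()
        with mp.Pool(procs) as pool:
            pool.map(private_worker, [n] * procs)
        print(f"private    : {time.perf_counter() - t0:.2f}s")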


It is worse. The use case of many threads, resource contention, and diminishing (eventually negative) returns does exist, and I've run into it, but it's not common at all for regular users and not even that interesting to me. I want to know how the CPU responds to full utilization (not being able to hold full turbo, like you said).


“There are no good reasons” really? One of my favorite things about iOS/iPadOS is the incredible selection of music creation apps. My iPad is loaded with synths, sequencers, and effects. AUM in particular is an amazing program for live performance, mixing software and hardware through a touch interface.

Many, but not all, of the programs I use on iPad are also available on Mac and Windows at much higher prices. That alone is reason enough to use an iPad. Most of these apps can run on the least expensive iPad and/or older ones.

Like it or not, computing appliances have led to really good software markets. The “clean and honest” software markets are either much more expensive or don’t exist at all. The optimist in me is hoping that Android losing some freedom might lead to higher quality software and some actual competition to Apple.


They have always required a password for encrypted backups; do they now require one for all local backups? Or is unencrypted no longer an option?


Long extra-inning games are pretty rare. I too hate the “Manfred Man,” but the players and coaches overwhelmingly approve of it. I think that using the ghost runner starting at the 12th inning would be a good compromise.


I'm in favor of compromise, but why the 12th inning? Haven't looked at the data but from memory it does kind of feel like games that get to the 12th inning tend to last into, say, the 15th or later.


I think 12th based on (and I cannot emphasize this enough) vibes instead of data would be fine.

10th definitely feels too soon (it's basically the 9th), and the 11th still kinda feels too soon too.

If anything, I'd argue it should be fine to ask your closer/reliever to pitch an extra inning (the 10th) "as-is". The 11th makes you burn an extra reliever, and that should be okay.

The 12th is where I'd start to say "okay, wind it down, we're all losing now".

Again, just vibes.


Reminds me of the Mariners playoff game a couple seasons ago that lasted 18 innings. That was bizarre to watch. I forget who they were facing. Astros, I think?


Coming from a synth background, I assumed that any timbre variation pianists can achieve comes about via envelope manipulation. That, and possibly volume-related overtone production, I suppose.
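For anyone without the synth background: by "envelope manipulation" I mean shaping a note's amplitude over time, e.g. an ADSR curve. A minimal numpy sketch, with invented parameter values:

    # Minimal ADSR amplitude envelope applied to a 440 Hz sine (numpy).
    import numpy as np

    SR = 44100  # sample rate

    def adsr(attack, decay, sustain_level, sustain_time, release):
        a = np.linspace(0, 1, int(SR * attack))               # rise to peak
        d = np.linspace(1, sustain_level, int(SR * decay))    # fall to sustain
        s = np.full(int(SR * sustain_time), sustain_level)    # hold
        r = np.linspace(sustain_level, 0, int(SR * release))  # die away
        return np.concatenate([a, d, s, r])

    env = adsr(0.01, 0.15, 0.6, 0.5, 0.3)  # fast attack reads as "brighter"
    t = np.arange(len(env)) / SR
    note = np.sin(2 * np.pi * 440 * t) * env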


One possible exception is Microsoft Office. I prefer the update process through the Mac App Store over the Microsoft updater. It's the same suite, just a different update process.


This is tiresome. You cannot lock down development machines. If you pay attention, you'll see that OSes made for development work will be the only ones not locked down. Android was a holdout, but Google is now tightening the screws. macOS, Linux, BSD, and Windows are the only OSes that can't be locked down. Microsoft tried, but they abandoned that.


Good point, but it is entirely possible for Apple/Microsoft to lock down "consumer" versions of their operating systems, effectively turning the common man's computer into a glorified phone and a new cash cow. Add to that a requirement for an online account, age verification, and other malware.


> You cannot lock down development machines.

Of course you can. What makes you think you couldn't?


Steel is too heavy. As they pointed out, aluminum is much better at dissipating heat than titanium. Shooting video has always heated phones up. A lot of the video features were aimed directly at actual professional video work, so I’m not surprised if preventing throttling was a key goal. Game performance will come along for the ride as well.

They also said that this was the first unibody iPhone. Can titanium be made the same way? The unibody MacBooks are really nice, though I’m not sure whether the same rigidity issues are at play in such small devices.


Well, too bad they made the 17 Pro as heavy as the steel 13 Pro.

Too hot? Well, boo-hoo, throttle it. Or, I dunno, don’t run glass shaders.

I drop my iPhone more often than I need it to compute pi.

Aluminium deforms too easily when dropped. Thanks, I’ve had enough of the iPhone 6 and its kind to willingly go back.

> at actual professional video

On a phone? You must be kidding. ARRI, RED, Blackmagic, Sony.

