
"With these improvements to the CPU and GPU, M4 maintains Apple silicon’s industry-leading performance per watt. M4 can deliver the same performance as M2 using just half the power. And compared with the latest PC chip in a thin and light laptop, M4 can deliver the same performance using just a fourth of the power."

That's an incredible improvement in just a few years. I wonder how much of that is Apple engineering and how much is TSMC improving their 3nm process.



Apple usually massively exaggerates their tech spec comparisons - is it REALLY half the power use at all times (so we'll get double the battery life), or is it half the power use in some scenarios (so we'll get, like... 15% more battery life total)?


IME Apple has always been the most honest when it makes performance claims. Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours. All the while, Dell or HP would claim 19 hours and you'd be lucky to get 2, e.g. [1].

As for CPU power use, of course that doesn't translate into doubling battery life because there are other components. And yes, it seems the OLED display uses more power so, all in all, battery life seems to be about the same.

I'm interested to see an M3 vs M4 performance comparison in the real world. IIRC the M3 was a questionable upgrade. Some things were better but some weren't.

Overall the M-series SoCs have been an excellent product however.

[1]: https://www.laptopmag.com/features/laptop-battery-life-claim...

EDIT: added link


> IME Apple has always been the most honest when it makes performance claims.

Okay, but your example was about battery life:

> Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours. All the while, Dell or HP would claim 19 hours and you'd be lucky to get 2, e.g. [1]

And even then, they exaggerated their claims. And your link doesn't say anything about HP or Dell claiming 19 hour battery life.

Apple has definitely exaggerated their performance claims over and over again. The Apple silicon parts are fast and low power indeed, but they've made ridiculous claims like comparing their chips to an nVidia RTX 3090 with completely misleading graphs.

Even the Mac sites have admitted that the nVidia 3090 comparison was completely wrong and designed to be misleading: https://9to5mac.com/2022/03/31/m1-ultra-gpu-comparison-with-...

This is why you have to take everything they say with a huge grain of salt. Their chip may be "twice" as power efficient in some carefully chosen unique scenario that only exists in an artificial setting, but how does it fare in the real world? That's the question that matters, and you're not going to get an honest answer from Apple's marketing team.


M1 Ultra did benchmark close to 3090 in some synthetic gaming tests. The claim was not outlandish, just largely irrelevant for any reasonable purpose.

Apple does usually explain their testing methodology and they don’t cheat on benchmarks like some other companies. It’s just that the results are still marketing and should be treated as such.

Outlandish claims notwithstanding, I don’t think anyone can deny the progress they achieved with their CPU and especially GPU IP. Improving performance on complex workloads by 30–50% in a single year is very impressive.


It did not get anywhere close to a 3090 in any test when the 3090 was running at full power. They were only comparable at specific power usage thresholds.


Different chips are generally compared at similar power levels, IME. If you ran 400 watts through an M1 Ultra and somehow avoided instantly vaporizing the chip in the process, I'm sure it wouldn't be far behind the 3090.


Ok but that doesn't matter if you can't actually run 400 watts through an M1 Ultra. If you wanna compare how efficient a chip is, sure, that's a great way to test. But you can't make the claim that your chip is as good as a 3090 if the end user is never going to see the performance of an actual 3090


You're right, it's not 19 hours claimed. It was even more than that.

> HP gave the 13-inch HP Spectre x360 an absurd 22.5 hours of estimated battery life, while our real-world test results showed that the laptop could last for 12 hours and 7 minutes.


The absurdity was the difference between claimed battery life and actual battery life. 19 vs 2 is more absurd than 22.5 vs 12.

> Speaking of the ThinkPad P72, here are the top three laptops with the most, er, far out battery life claims of all our analyzed products: the Lenovo ThinkPad P72, the Dell Latitude 7400 2-in-1 and the Acer TravelMate P6 P614. The three fell short of their advertised battery life by 821 minutes (13 hours and 41 mins), 818 minutes (13 hours and 38 minutes) and 746 minutes (12 hours and 26 minutes), respectively.

Dell did manage to be one of the top 3 most absurd claims though.


You’re working hard to miss the point there.

Dell and IBM were lying about battery life before OS X was even a thing and normal people started buying MacBooks. Dell and IBM will be lying about battery life when the sun goes red giant.

Reviewers and individuals like me have always been able to get 90% of Apple’s official battery times without jumping through hoops to do so. “If you were very careful” makes sense for an 11% difference. A ten hour difference is fucking bullshit.


So you are saying that a Dell with an Intel CPU could get longer battery life than a Mac with an M1? What does that say about the quality of Apple engineering? Their marketeering is certainly second to none.


Maybe for battery life, but definitely not when it comes to CPU/GPU performance. Tbf, no chip company is, but Apple is particularly egregious. Their charts assume best case multi-core performance when users rarely ever use all cores at once. They'd have you thinking it's the equivalent of a 3090 or that you get double the frames you did before when the reality is more like 10% gains.


They are pretty honest when it comes to battery life claims, they’re less honest when it comes to benchmark graphs


I don't think "less honest" covers it, and I can't believe anything their marketing says after the 3090 claims. Maybe it's true, maybe not. We'll see from the reviews. Well, assuming the reviewers weren't paid off with an "evaluation unit".


> Like when they said a MacBook Air would last 10+ hours and third-party reviewers would get 8-9+ hours.

For literal YEARS, Apple battery life claims were a running joke on how inaccurate and overinflated they were.


I’ve never known a time when Dell, IBM, Sony, Toshiba, Fujitsu, or Alienware weren’t lying through their teeth about battery times.

What time period are you thinking about for Apple? I’ve been using their laptops since the last G4, which was twenty years ago. They’ve always been substantially more accurate about battery times.


The problem with arguing about battery life this way is that it's highly dependent on usage patterns.

For example, I would be surprised if there is any laptop that is sufficiently fast for my usage whose battery life is more than 2-3 hours, tops. Heck, I have several laptops and all of them die in one to one and a half hours. But of course, I never optimized for battery life, so who knows. So in my case, all of them are lying equally. I haven't even checked battery life for 15 years now. It's a useless metric for me, because all of them are shit.

But of course, for people who don't need to use VMs, run several "micro"services at once, have constant internet transfer, and have 5+ IntelliJ projects open at the same time caching several million LOC while a gazillion web pages are open, maybe there is a difference. For me it doesn't matter whether it's one or one and a half hours.


You should try a MacBook Pro someday. It would still last all day with that workload. I had an XPS at work and it would last 1.5 hrs. My Apple laptop with the same workload lasts 6-8 hours easily. I never undocked the Dell because of the performance issues. I undock the Mac all the time because I can trust it to last.


I have a 2 year old high spec Macbook Pro with less load than the GP and rarely can get > 3 hours out of it.


I'm curious, what do you do with it?


Nothing too crazy, I don't think. A bunch of standard Electron applications, a browser, a terminal - that's pretty much it. Sometimes Docker, but I always kill it when I'm done.


> IME Apple has always been the most honest when it makes performance claims.

In nearly every single release, their claims are well above actual performance.


Controlling the OS is probably a big help there. At least, I saw lots of complaints about my ZenBook model’s battery not hitting the spec. It was easy to hit or exceed it on Linux, but you have to tell it not to randomly spin up the CPU.


I had to work my ass off on my Fujitsu Lifebook to get 90% of the estimate, even on Linux. I even worked on a kernel patch for the Transmeta CPU, based on unexploited settings in the CPU documentation, but it made no or only a negligible difference in power draw, which I suppose is why Linus didn’t do it in the first place.


BTW, I get 19 hours from my Dell XPS and Latitude. It's Linux with a custom DE and Vim as my IDE, though.


I get about 21 hours from mine, it's running Windows but powered off.


This is why Apple can be slightly more honest about their battery specs: they don’t have the OS working against them. Unfortunately, most Dell XPSes will be running Windows, so it is still misleading to provide specs based on what the hardware could do if not sabotaged.


I wonder if it’s like webpages. The numbers are calculated before marketing adds the crapware and ruins all of your hard work.


can you share more details about your setup?


Arch Linux, mitigations (Spectre and the like) off, X11, Openbox, bmpanel with only a CPU/IO indicator. Light theme everywhere. Opera in power save mode. `powertop --auto-tune` and `echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo`. Current laptop is a Latitude 7390.
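Roughly the same thing as a script (just a sketch; the sysfs path assumes the intel_pstate driver and differs with acpi-cpufreq, so check before running):

    #!/bin/sh
    # Apply powertop's recommended tunables (USB autosuspend, SATA link PM, etc.)
    sudo powertop --auto-tune
    # Keep the CPU at or below its base frequency by disabling turbo
    echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
    # Verify: should print 1
    cat /sys/devices/system/cpu/intel_pstate/no_turbo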


Right, so you are disabling all performance features and effectively turning your CPU into a low-end, low-power SKU. Of course you’d get better battery life. It’s not the same thing though.


> echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

Isn't that going to torch performance? My i9-9900 has a base frequency of 3.6 GHz and a turbo of 5.0 GHz. Disabling the turbo would mean up to a 28% drop in performance.

I suppose if everything else on the system is configured to use as little power as possible, then it won't even be noticed. But seeing as CPUs underclock when idle (I've seen my i9 go as low as 1.2 GHz), I'm not sure disabling turbo makes a significant impact except when your CPU is being pegged.
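One way to sanity-check that on your own machine (a sketch; turbostat ships in the linux-tools / kernel-tools package on most distros):

    # Live per-core clocks; idle cores sit far below base frequency anyway
    watch -n1 "grep MHz /proc/cpuinfo"
    # Bzy_MHz plus package watts and C-state residency, sampled every 5s
    sudo turbostat --quiet --interval 5

If Bzy_MHz rarely goes above the base clock for your workload, no_turbo costs you essentially nothing.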


That's the point. I have no performance bottleneck with no_turbo. My i5 tends to turn on turbo mode and increase power demand (heat leaks) even when it's not needed. For example, with no_turbo the laptop is always cold and the fan basically stays silent. With turbo it easily gets 40C warm while watching YT or doing my developer stuff, building Docker containers and so on.


I get 20 minutes from my Dell (not the XPS), with Vim. When it was brand-new, I got 40 minutes. A piece of hot garbage, with an energy-inefficient Intel CPU.


Frankly that sounds like you got a lemon. Even the most inefficient gaming laptops get over an hour under a full gaming workload.


> IME Apple has always been the most honest when it makes performance claims

Yes and no. They'll always be honest with the claim, but the scenario for the claimed improvement will always be chosen to make the claim as large as possible, sometimes with laughable results.

Typically something like "watch videos for 3x longer <small>when viewing 4k h265 video</small>" (which really just means the previous gen's silicon could only handle h264).


> IME Apple has always been the most honest when it makes performance claims

That's just laughable, sorry. No one is particularly honest in marketing copy, but Apple is for sure one of the worst, historically. Even more so when you go back to the PPC days. I still remember Jobs on stage talking about how the G4 was the fastest CPU in the world when I knew damn well that it was half the speed of the P3 on my desk.


I worked in an engineering lab at the time of the G4 introduction and I can attest that the G4 was a very, very fast CPU for scientific workloads.

Confirmed here: https://computer.howstuffworks.com/question299.htm (and elsewhere.)

A year later I was doing bonkers (for the time) Photoshop work on very large compressed TIFF files, and my G4 laptop running at 400 MHz was more than 2x as fast as the PIIIs on my bench.

Was it faster all around? I don't know how to tell. Was Apple as honest as I am in this commentary about how it mattered what you were doing? No. Was it a CPU that was able to do some things very fast vs others? I know it was.


Funny you mention that machine; I still have one of those lying around. It was a very cool machine indeed with a very capable graphics card, but that's about it. It did some things better/faster than a Pentium III PC, but only if you went for the bottom of the barrel unit and crippled the software support (MMX, just like another reply mentioned).

On top of that, Intel increased frequency faster than Apple could handle. And after the release of the Pentium 4, the G4s became very noncompetitive so fast that one would question what could save Apple (later, down the road, it turned out to be Intel).

They tried to salvage it with the G5s, but those came with so many issues that even their dual-processor water-cooled models were just not keeping up. I briefly owned one of those after repairing it for "free" using 3 supposedly dead units; the only thing worth a damn in it was the GPU. Extremely good hardware in many ways, but also very weak for so many things that it had to be used only for very specific tasks; otherwise a cheap Intel PC was much better.

Which is precisely why, right after that, they went with Intel, after years of subpar performance on laptops because they were stuck on the G4 (and not even at high frequencies).

Now I know from your other comments that you are a very strong believer, and I'll admit that there were many reasons to use a Mac (software related), but please stop pretending they were performance competitive, because that's just bonkers. If they were, the Intel switch would never have happened in the first place...


It's just amazing that this kind of nonsense persists. There were no significant benchmarks, "scientific" or otherwise, at the time or since showing that kind of behavior. The G4 was a dud. Apple rushed out some apples/oranges comparisons at launch (the one you link appears to be the bit where they compared a SIMD-optimized tool on PPC to generic compiled C on x86, though I'm too lazy to try to dig out the specifics from stale links), and the reality distortion field did the rest.


While certainly misleading, there were situations where the G4 was incredibly fast for the time. I remember being able to edit video in iMovie on a 12" G4 laptop. At that time there was no equivalent x86 machine.


Have any examples from the past decade? Especially in the context of how exaggerated the claims are from PC and Android brands they are competing with?


Apple recently claimed that RAM in their MacBooks is equivalent to 2x the RAM in any other machine, in defense of the 8GB starting point.

In my experience, I can confirm that this is just not true. The secret is heavy reliance on swap. It's still the case that 1GB = 1GB.


Sure, and they were widely criticized for this. Again, the assertion I was responding to is that Apple does this ”laughably” more than competitors.

Is an occasional statement that they get pushback on really worse than what other brands do?

As an example from a competitor, take a look at the recent firestorm over Intel’s outlandish anti-AMD marketing:

https://wccftech.com/intel-calls-out-amd-using-old-cores-in-...


> Sure, and they were widely criticized for this. Again, the assertion I was responding to is that Apple does this ”laughably” more than competitors.

FWIW: the language upthread was that it was laughable to say Apple was the most honest. And I stand by that.


Fair point. Based on their first sentence, I mischaracterized how “laughable” was used.

Though the author also made clear in their second sentence that they think Apple is one of the worst when it comes to marketing claims, so I don’t think your characterization is totally accurate either.


Yeah, that was hilarious; my basic workload borders on the 8GB limit without even pushing it. They have fast swap but nothing beats real RAM in the end, and considering their storage pricing is as stupid as their RAM pricing, it really makes no difference.

If you go for the base model, you are in for a bad time: 256GB with heavy swap and no dedicated GPU memory (making the 8GB even worse) is just plain stupid.

This is what the Apple fanboys don't seem to get: their base models at a somewhat affordable price are deeply incompetent, and if you start to load them up the pricing just does not make a lot of sense...


> If you go for the base model, you are in for a bad time: 256GB with heavy swap and no dedicated GPU memory (making the 8GB even worse) is just plain stupid ... their base models at a somewhat affordable price are deeply incompetent

I got the base model M1 Air a couple of years back and whilst I don't do much gaming I do do C#, Python, Go, Rails, local Postgres, and more. I also have a (new last year) Lenovo 13th gen i7 with 16GB RAM running Windows 11 and the performance with the same load is night and day - the M1 walks all over it whilst easily lasting 10hrs+.

Note that I'm not a fanboy; I run both by choice. Also both iPhone and Android.

The Windows laptop often gets sluggish and hot. The M1 never slows down and stays cold. There's just no comparison (though the Air keyboard remains poor).

I don't much care about the technical details, and I know 8GB isn't a lot. I care about the experience and the underspecced Mac wins.


I don't know about your Lenovo and how your particular workload is handled by Windows.

And I agree that in pure performance, the Apple Silicon Macs will kill it; however, I am really skeptical that an 8GB model would give you a better experience overall. Faster for long compute operations, sure, but then you have to deal with all the small slowdowns from constant swapping. Unless you stick to a very small number of apps and a very small number of tabs at the same time (which is rather limiting), I don't know how you do it. I don't want to call you a liar, but maybe you are too emotionally attached (just like I am sometimes) to the device to realize it, or maybe the various advantages of the Mac make you ignore the serious limitations that come with it.

Everyone has their own set of tradeoffs, but my argument is that if you can deal with the 8GB Apple Silicon devices, you are very likely to be well served by a much cheaper device anyway (like half the price).


All I can say is I have both and I use both most days. In addition to work-issued Windows laptops, so I have a reasonable and very regular comparison. And the comparative experience is exactly as I described. Always. Every time.

> you have to deal with all the small slowdowns from constant swapping

That just doesn't happen. As I responded to another post, though, I don't do Docker or LLMs on the M1 otherwise you'd probably be right.

> Unless you stick to a very small number of apps and a very small number of tabs at the same time

It's really common for me to have 50+ tabs open at once. And using Word is often accompanied by VS Code, Excel, Affinity Designer, DotNet, Python, and others due to the nature of what I'm doing. No slowdown.

> maybe you are emotionally attached

I am emotionally attached to the device. Though as a long-time Mac, Windows, and Linux user I'm neither blinkered nor tribal - the attachment is driven by the experience and not the other way around.

> maybe the various advantages of the Mac make you ignore the serious limitations that come with it

There are indeed limitations. 8GB is too small. The fact that for what I do it has no impact doesn't mean I don't see that.

> if you can deal with the 8GB Apple Silicon devices, you are very likely to be well served by a much cheaper device anyway (like half the price)

I already have better Windows laptops than that, and I know that going for a Windows laptop at half the price of the entry-level Air would be nothing like as nice, because the more expensive ones already aren't (the Lenovo was dearer than the Air).

---

To conclude, you have to use the right tool for the job. If the nature of the task intrinsically needs lots of RAM then 8GB is not good enough. But when it is enough it runs rings around equivalent (and often 'better') Windows machines.


None of that seems to be high loads or stuff that needs a lot of RAM.


Not individually, no. Though it's often done simultaneously.

That said you're right about lots of RAM in that I wouldn't bother using the 8GB M1 Air for Docker or running LLMs (it can run SD for images though, but very slowly). Partly that's why I have the Lenovo. You need to pick the right machine for the job at hand.


You know that the RAM in these machines is more different than similar to the "RAM" in a standard PC, right? Apple's SoC RAM is more or less part of the CPU/GPU package and is super fast. And for obvious reasons it cannot be added to.

Anyway, I manage a few M1 and M3 machines with 256/8 configs and they all run just as fast as 16GB and 32GB machines EXCEPT for workloads that need more than 8GB for a process (virtualization) or workloads that need lots of video memory (Lightroom can KILL an 8GB machine that isn't doing anything else...)

The 8GB is stupid discussion isn't "wrong" in the general case, but it is wrong for maybe 80% of users.


> EXCEPT for workloads that need more than 8GB for a process

Isn't that exactly the upthread contention? Apple's magic compressed swap management is still swap management that replaces O(1) fast(-ish) DRAM access with page decompression operations costing thousands of cycles. It may be faster than storage, but it's still extremely slow relative to a DRAM fetch. And once your working set gets beyond your available RAM you start thrashing just like VAXen did on 4BSD.
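You can watch this happening on macOS with the stock tools (a sketch; exact output labels vary a bit between versions):

    # One-shot stats; look at "Pages stored in compressor" and "Pageouts"
    vm_stat
    # Total / used / free swap
    sysctl vm.swapusage
    # The kernel's own summary of current memory pressure
    memory_pressure

If those numbers climb while you work, you're paying the decompression/pagein cost rather than getting "free" extra RAM.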


Exactly! Load a 4GB file and welcome the beach ball spinner any time you need to context switch to another app. I don't know how they don't realize that, because it's not really hard to get there. But when I was enamored with Apple stuff in my formative years, I would gladly ignore that or brush it off, so I can see where they're coming from, I guess.


It's not as different as the marketing would like you to think. In fact, for the low-end models even the bandwidth/speed isn't as big of a deal as they make it out to be, especially considering that bandwidth has to be shared with the GPU.

And if you go up in specs the bandwidth of Apple silicon has to be compared to the bandwidth of a combo with dedicated GPU. The bandwidth of dedicated GPUs is very high and usually higher than what Apple Silicon gives you if you consider the RAM bandwidth for the CPU.

It's a bit more complicated but that's marketing for you. When it comes to speed Apple RAM isn't faster than what can be found in high-end laptops (or desktops for that matter).


There is also memory compression and their insane swap speed due to SoC memory and the SSD.


Every modern operating system now does memory compression


Some of them do it better than others though.


Apple uses Magic Compression.


Not sure what Windows does, but the popular method on e.g. Fedora is to carve out part of memory as a compressed swap device (zram) and swap to that. It could be more efficient the way Apple does it, by not having to partition main memory.
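A quick way to see it on a stock Fedora install (assuming the default zram-generator setup):

    # The compressed device shows up as ordinary swap
    swapon --show
    # The DATA vs COMPR columns give the effective compression ratio
    zramctl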


This is a revolution


Citation needed?


Don't know if I'm allowed to. It's not that special though.


> The secret is heavy reliance on swap

You are entirely (100%) wrong, but, sadly, NDA...


I do admit the "reliance on swap" thing is speculation on my part :)

My experience is that I can still tell when the OS is unhappy when I demand more RAM than it can give. MacOS is still relatively responsive around this range, which I just attributed to super fast swapping. (I'd assume memory compression too, but I usually run into this trouble when working with large amounts of poorly-compressible data.)

In either case, I know it's frustrating when someone is confidently wrong but you can't properly correct them, so you have my apologies


Memory compression isn't magic and isn't exclusive to macOS.


I suggest you go and look at HOW it is done in Apple silicon Macs, and then think long and hard about why this might make a huge difference. Maybe the Asahi Linux guys can explain it to you ;)


I understand that it can make a difference to performance (which is already baked into the benchmarks we look at), I don't see how it can make a difference to compression ratios, if anything in similar implementations (ex: console APUs) it tends to lead to worse compression ratios.

If there's any publicly available data to the contrary I'd love to read it. Anecdotally I haven't seen a significant difference between zswap on Linux and macOS memory compression in terms of compression ratios, and on the workloads I've tested zswap tends to be faster than no memory compression on x86 for many core machines.


How convenient :)


Regardless of what you can't tell, he's absolutely right regarding Apple's claims: saying that an 8GB Mac is as good as a 16GB non-Mac is laughable.


My entry-level 8GB M1 MacBook Air beats my 64GB 10-core Intel iMac in my day-to-day dev work.


That was never said. They said an 8GB Mac is similar to a 16GB non-Mac.


If someone is claiming “‹foo› has always ‹barred›”, then I don't think it's fair to demand a 10 year cutoff on counter-evidence.


For “always” to be true, the behavior needs to extend to the present date. Otherwise, it’s only true to say “used to”.


Clearly it isn’t the case that Apple has always been more honest than their competition, because there were some years before Apple was founded.


Interesting, by what benchmark did you compare the G4 and the P3?

I don't have a horse in this race, Jobs lied or bent the truth all the time so it wouldn't surprise me, I'm just curious.


I remember that Apple used to wave around these SIMD benchmarks showing their PowerPC chips trouncing Intel chips. In the fine print, you'd see that the benchmark was built to use AltiVec on PowerPC, but without MMX or SSE on Intel.


Ah so the way Intel advertises their chips. Got it.


Yeah, and we rightfully criticize Intel for the same and we distrust their benchmarks


You can claim Apple is dishonest for a few reasons.

1) Graphs are often unannotated.

2) Comparisons are rarely against latest-generation products (their argument for that has been that they do not expect people to upgrade yearly, so it's showing the difference along their intended upgrade path).

3) They have conflated performance with performance per watt.

However, when it comes to battery life, performance (for a task) or specification of their components (screens, ability to use external displays up to 6k, port speed etc) there are almost no hidden gotchas and they have tended to be trustworthy.

The first wave of M1 announcements were met with similar suspicion as you have shown here; but it was swiftly dispelled once people actually got their hands on them.

*EDIT:* Blaming a guy who's been dead for 13 years for something they said 50 years ago, and primarily it seems for internal use is weird. I had to look up the context but it seems it was more about internal motivation in the 70’s than relating to anything today, especially when referring to concrete claims.


"This thing is incredible," Jobs said. "It's the first supercomputer on a chip.... We think it's going to set the industry on fire."

"The G4 chip is nearly three times faster than the fastest Pentium III"

- Steve Jobs (1999) [1]

[1] https://www.wired.com/1999/08/lavish-debut-for-apples-g4/


That's cool, but literally last millennium.

And again, the guy has been dead for the better part of this millennium.

What have they shown of any product currently on the market, especially when backed with any concrete claim, that has been proven untrue?

EDIT: After reading your article and this one: https://lowendmac.com/2006/twice-as-fast-did-apple-lie-or-ju... it looks like it was true in floating point workloads.


The G4 was a really good chip if you used Photoshop. It took Intel a while to catch up.


If you have to go back 20+ years for an example…


Apple marketed their PPC systems as "a supercomputer on your desk", but it was nowhere near the performance of a supercomputer of that age. Maybe similar performance to a supercomputer from the 1970's, but that was their marketing angle from the 1990's.


From https://512pixels.net/2013/07/power-mac-g4/: the ad was based on the fact that Apple was forbidden to export the G4 to many countries due to its “supercomputer” classification by the US government.


It seems that the US government was buying too much into tech hype at the turn of the millennium. Around the same period PS2 exports were also restricted [1].

[1] https://www.latimes.com/archives/la-xpm-2000-apr-17-fi-20482...


The PS2 was used in supercomputing clusters.


Blaming a company TODAY for marketing from the 1990s is crazy.


Except they still do the same kind of bullshit marketing today.


> Apple marketed their PPC systems as "a supercomputer on your desk"

It's certainly fair to say that twenty years ago Apple was marketing some of its PPC systems as "the first supercomputer on a chip"[^1].

> but it was nowhere near the performance of a supercomputer of that age.

That was not the claim. Apple did not argue that the G4's performance was commensurate with the state of the art in supercomputing. (If you'll forgive me: like, fucking obviously? The entire reason they made the claim is precisely because the latest room-sized supercomputers with leapfrog performance gains were in the news very often.)

The claim was that the G4 was capable of sustained gigaflop performance, and therefore met the narrow technical definition of a supercomputer.

You'll see in the aforelinked marketing page that Apple compared the G4 chip to UC Irvine’s Aeneas Project, which in ~2000 was delivering 1.9 gigaflop performance.

This chart[^2] shows the trailing average of various subsets of super computers, for context.

This narrow definition is also why the machine could not be exported to many countries, which Apple leaned into.[^3]

> Maybe similar performance to a supercomputer from the 1970's

What am I missing here? Picking perhaps the most famous supercomputer of the mid-1970s, the Cray-1,[^4] we can see performance of 160 MFLOPS, which is 160 million floating point operations per second (with an 80 MHz processor!).

The G4 was capable of delivering ~1 GFLOP performance, which is a billion floating point operations per second.

Are you perhaps thinking of a different decade?

[^1]: https://web.archive.org/web/20000510163142/http://www.apple....

[^2]: https://en.wikipedia.org/wiki/History_of_supercomputing#/med...

[^3]: https://web.archive.org/web/20020418022430/https://www.cnn.c...

[^4]: https://en.wikipedia.org/wiki/Cray-1#Performance


>That was not the claim. Apple did not argue that the G4's performance was commensurate with the state of the art in supercomputing.

This is marketing we're talking about, people see "supercomputer on a chip" and they get hyped up by it. Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.

> The entire reason they made the claim is

The reason they marketed it that way was to get people to part with their money. Full stop.

In the first link you added, there's a photo of a Cray supercomputer, which makes the viewer equate Apple = Supercomputer = I am a computing god if I buy this product. Apple's marketing has always been a bit shady that way.

And soon after that period Apple jumped off the PPC architecture and onto the x86 bandwagon. Gimmicks like "supercomputer on a chip" don't last long when the competition is far ahead.


I can't believe Apple is marketing their products in a way to get people to part with their money.

If I had some pearls I would be clutching them right now.


> This is marketing we're talking about, people see "supercomputer on a chip" and they get hyped up by it.

That is also not in dispute. I am disputing your specific claim that Apple somehow suggested that the G4 was of commensurate performance to a modern supercomputer, which does not seem to be true.

> Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.

This is why context is important (and why I'd appreciate clarity on whether you genuinely believe a supercomputer from the 1970s was anywhere near as powerful as a G4).

In the late twentieth and early twenty-first century, megapixels were a proxy for camera quality, and megahertz were a proxy for processor performance. More MHz = more capable processor.

This created a problem for Apple, because the G4 ran at lower clock speeds even though its SPECfp_95 (floating point) benchmarks crushed the Pentium III's.

PPC G4 500 MHz - 22.6

PPC G4 450 MHz - 20.4

PPC G4 400 MHz - 18.36

Pentium III 600 MHz – 15.9

For both floating point and integer benchmarks, the G3 and G4 outgunned comparable Pentium II/III processors.

You can question how this translates to real world use cases – the Photoshop filters on stage were real, but others have pointed out in this thread that it wasn't an apples-to-apples comparison vs. Wintel – but it is inarguable that the G4 had some performance advantages over Pentium at launch, and that it met the (inane) definition of a supercomputer.

> The reason they marketed it that way was to get people to part with their money. Full stop.

Yes, marketing exists to convince people to buy one product over another. That's why companies do marketing. IMO that's a self-evidently inane thing to say in a nested discussion of microprocessor architecture on a technical forum – especially when your interlocutor is establishing the historical context you may be unaware of (judging by your comment about supercomputers from the 1970s, which I am surprised you have not addressed).

I didn't say "The reason Apple markets its computers," I said "The entire reason they made the claim [about supercomputer performance]…"

Both of us appear to know that companies do marketing, but only you appear to be confused about the specific claims Apple made – given that you proactively raised them, and got them wrong – and the historical backdrop against which they were made.

> In the first link you added, there's a photo of a Cray supercomputer

That's right. It looks like a stylized rendering of a Cray-1 to me – what do you think?

> which makes the viewer equate Apple = Supercomputer = I am a computing god if I buy this product

The Cray-1's compute, as measured in GFLOPS, was approximately 6.5x lower than the G4 processor.

I'm therefore not sure what your argument is: you started by claiming that Apple deliberately suggested that the G4 had comparable performance to a modern supercomputer. That isn't the case, and the page you're referring to contains imagery of a much less performant supercomputer, as well as a lot of information relating to the history of supercomputers (and a link to a Forbes article).

> Apple's marketing has always been a bit shady that way.

All companies make tradeoffs they think are right for their shareholders and customers. They accentuate the positives in marketing and gloss over the drawbacks.

Note, too, that Adobe's CEO has been duped on the page you link to. Despite your emphatic claim:

> Apple was 100% using the "supercomputer" claim to make their luddite audience think they had a performance advantage, which they did not.

The CEO of Adobe is quoted as saying:

> “Currently, the G4 is significantly faster than any platform we’ve seen running Photoshop 5.5,” said John E. Warnock, chairman and CEO of Adobe.

How is what you are doing materially different to what you accuse Apple of doing?

> And soon after that period Apple jumped off the PPC architecture and onto the x86 bandwagon.

They did so when Intel's roadmap introduced Core Duo, which was significantly more energy-efficient than Pentium 4. I don't have benchmarks to hand, but I suspect that a PowerBook G5 would have given the Core Duo a run for its money (despite the G5 being significantly older), but only for about fifteen seconds before thermal throttling and draining the battery entirely in minutes.


My iBook G4 was absolutely crushed by my friends' Wintel laptops that they bought for half as much. Granted, it was more portable and had somewhat better battery life (it needed it, given how much longer everything took), but really, performance was not a good reason to go with Apple hardware, and that still holds true as far as I'm concerned.


The G4 was 1999, Core Duo was 2006; 7 years isn't bad.


That is a long time – bet it felt even longer to the poor PowerBook DRI at Apple who had to keep explaining to Steve Jobs why a G5 PowerBook wasn't viable!


Ya, I really wanted a G5 but power and thermals weren’t going to work and IBM/Moto weren’t interested in making a mobile version.


Indeed. Have we already forgotten about the RDF?


No, it was just always a meaningless term...


It was simply a phrase to acknowledge that Jobs was better at giving demos than anyone who ever lived.


Didn’t he have to use two PPC procs to get the equivalent perf you’d get on a P3?

Just add them up, it’s the same number of Hertz!

But Steve that’s two procs vs one!

I think this is when Adobe was optimizing for Windows/Intel and Photoshop was single-threaded, but Steve put out some graphs showing better perf on the Mac.


> IME Apple has always been the most honest when it makes performance claims.

I guess you weren't around during the PowerPC days... Because that's a laughable statement.


I have no idea who's downvoting you. They were lying through their teeth about CPU performance back then.

A PC half the price was smoking their top of the line stuff.


It's funny you say that, because this is precisely when I started buying Macs (I was gifted a Pismo PowerBook G3 and then bought an iBook G4). And my experience was that, for sure, if you put as much money into a PC as into a Mac you would get MUCH better performance.

What made it worth it at the time (I felt) was the software. Today I really don't think so; software has improved overall across the industry and there are not a lot of "Mac specific" things that make it a clear-cut choice.

As for the performance, I can't believe all the Apple silicon hype. Sure, it gets good battery life provided you use strictly Apple software (or software heavily optimized for it), but in mixed workload situations it's not that impressive.

Using the M2 MacBook Pro of a friend, I figured I could get maybe 4-5 hours out of it in its best-case scenario, which is better than the 2-3 hours you would get from a PC laptop but also not that great considering the price difference.

And when it comes to performance, it is extremely unequal and very lackluster for many things. Like, there is more lag launching Activity Monitor on a $2K++ MacBook Pro than launching Task Manager on a $500 PC. This is a small, somewhat stupid example, but it does tell the overall story.

They talk a big game but in reality, their stuff isn't that performant in the real world.

And they still market games when one of their $2K laptops plays Dota 2 (a very old, relatively resource-efficient game) worse than a cheapo PC.


> Using the M2 MacBook Pro of a friend, I figured I could get maybe 4-5 hours out of it in its best-case scenario, which is better than the 2-3 hours you would get from a PC laptop but also not that great considering the price difference.

Any electron apps on it?


Yes, but I stopped caring about Electron apps some time ago. You can't just drop or ignore useful software to satisfy Apple marketing. Just like you can't just ignore Chrome for Safari to satisfy the battery life claims, because Chrome is much more useful and better at quite a few things.

I went the way of only Apple and Apple optimized software for quite a while but I just can't be bothered anymore, considering the price of the hardware and nowadays the price of subscription software.

And this is exactly my argument: if you use the hardware in a very specific way, you get there, but it is very limiting, annoying, and unacceptable considering the pricing.

It's like saying that a small city car gets more gas mileage when what one needs is actually a capable truck. It's not strictly wrong but also not very helpful.

I think the Apple Silicon laptops are very nice if you can work within the limitations, but the moment you start pushing on those you realize they are not really worth the money. Just like the new iPad Pro they released: completely awesome hardware, but how many people can actually work within the limitations of iPadOS to make the price not look like a complete ripoff? Very few, I would argue.


Or VMs. They should be getting way better out of that.


Apple switched to Intel chips 20 years ago. Who fucking cares about PowerPC?

Today, Apple Silicon is smoking all but the top end Intel chips, while using a fraction of the power.


Oh those megahertz myths! Their marketing department is pretty amazing at their spin control. This one was right up there with "it's not a bug; it's a feature" type of spin.


All I remember is tanks in the commercials.

We need more tanks in commercials.


Before macOS became NeXTSTEP-based it was practically a different company. I’ve been using Apple hardware for 21 years, since they got a real operating system. Even the G4 did better than the laptop it replaced.


Apple is always honest but they know how to make you believe something that isn’t true.


Yeah, the assumption seems to be that using less battery by one component means that the power will just magically go unused. As with everything else in life, as soon as something stops using a resource something else fills the vacuum to take advantage of the resource.


Quickly looking at the press release, it seems to have the same comparisons as in the video. None of Apple's comparisons today are between the M3 and M4. They are ALL comparing the M2 and M4. Why? It's frustrating, but today Apple replaced a product with an M2 with a product with an M4. Apple always compares product to product, never component to component when it comes to processors. So those specs are far more impressive than if we could have numbers between the M3 and M4.


Didn't they do extreme cherry-picking for their tests so they could show the M1 beating a 3090 (or the M2 a 4090, I can't remember)?

Gave me quite a laugh when Apple users started to claim they'd be able to play Cyberpunk 2077 maxed out with maxed out raytracing.


I'll give you that Apple's comparisons are sometimes inscrutable. I vividly remember that one.

https://www.theverge.com/2022/3/17/22982915/apple-m1-ultra-r...

Apple was comparing the power envelope (already a complicated concept) of their GPU against a 3090. Apple wanted to show that the peak of their GPU's performance was reached with a fraction of the power of a 3090. What was terrible was that Apple was cropping their chart at the point where the 3090 was pulling ahead in pure compute by throwing more watts at the problem. So their GPU was not as powerful as a 3090, but a quick glance at the chart would completely tell you otherwise.

Ultimately we didn't see one of those charts today, just a mention about the GPU being 50% more efficient than the competition. I think those charts are beloved by Johny Srouji and no one else. They're not getting the message across.


Plenty of people on HN thought that the M1 GPU was as powerful as a 3090, so I think the message worked very well for Apple.

They really love those kinds of comparisons - e.g. they also compared M1s against really old Intel CPUs to make the numbers look better, knowing that news headlines won't care about the details.


> not component to component

that's honestly kind of stupid when discussing things like 'new CPU!' like this thread.

I'm not saying the M4 isn't a great platform, but holy cow the corporate tripe people gobble up.


They compared against really old intel CPUs because those were the last ones they used in their own computers! Apple likes to compare device to device, not component to component.


You say that like it's not a marketing gimmick meant to mislead and obscure facts.

It's not some virtue that causes them to do this.


It's funny because your comment is meant to mislead and obscure facts.

Apple compared against Intel to encourage their previous customers to upgrade.

There is nothing insidious about this and is in fact standard business practice.


Apple's the ONLY tech company that doesn't compare products to their competitors.

The intensity of the reality distortion field and hubris is mind boggling.

Turns out, you fell for it.


No, they compared it because it made them look way better for naive people. They have no qualms comparing to other competition when it suits them.

Your explanation is a really baffling case of corporate white knighting.


Yes, can't remember the precise combo either, there was a solid year or two of latent misunderstandings.

I eventually made a visual showing it was the same as claiming your iPhone was 3x the speed of a Core i9: sure, if you limit the power draw of your PC to a battery the size of a Post-it pad.

Similar issues arose when on-device LLMs happened; thankfully that's quieted since then (the last egregious thing I saw was stonk-related wishcasting that Apple was obviously turning its Xcode CI service into a full-blown AWS competitor that'd wipe the floor with any cloud service, given the 2x performance).


It’s an iPad event and there were no M3 iPads.

That’s all. They’re trying to convince iPad users to upgrade.

We’ll see what they do when they get to computers later this year.


I have a Samsung Galaxy Tab S7 FE tablet, and I can't think of any use case where I'd need more power.

I agree that iPad has more interesting software than android for use cases like video or music editing, but I don't do those on a tablet anyway.

I just can't imagine anyone upgrading their M2 iPad for this, except a tiny niche that really wants that extra power.


I don't know who would prefer to do music or video editing on a smaller display, without a keyboard for shortcuts, without a proper file system, and with problematic connectivity to external hardware. Sure, it's possible, but why? Ok, maybe there's some use case on the road where every gram counts, but that seems niche.


The A series was good enough.

I’m vaguely considering this but entirely for the screen. The chip has been irrelevant to me for years, it’s long past the point where I don’t notice it.


A series was definitely not good enough. Really depends on what you're using it for. Netflix and web? Sure. But any old HDR tablet, that can maintain 24Hz, is good enough for that.

These are 2048x2732, 120Hz displays that support 6K external displays. Gaming and art apps push them pretty hard. For the iPad user in my house, going from the 2020 non-M* iPad to a 2023 M2 iPad made a huge difference for the drawing apps. Better latency is always better for drawing, and complex brushes (especially newer ones), selections, etc., would get fairly unusable.

For gaming, it was pretty trivial to dip well below 60Hz with a non-M* iPad with some of the higher-demand games like Fortnite, Minecraft (high view distance), Roblox (it ain't what it used to be), etc.

But, the apps will always gravitate to the performance of the average user. A step function in performance won't show up in the apps until the adoption follows, years down the line. Not pushing the average to higher performance is how you stagnate the future software of the devices.


You’re right, it’s good enough for me. That’s what I meant but I didn’t make that clear at all. I suspect a ton of people are in a similar position.

I just don’t push it at all. The few games I play are not complicated in graphics or CPU needs. I don’t draw, 3D model, use Logic or Final Cut or anything like that.

I agree the extra power is useful to some people. But even there we have the M1 (what I’ve got) and the M2 models. But I bet there are plenty of people like me who mostly bought the pro models for the better screen and not the additional grunt.


The AX series, which is what iPads were using before the M series, was precisely the chip family that got rebranded as the M1, M2, etc.

The iPads always had a lot of power, people simply started paying more attention when the chip family was ported to PC.


Yeah. I was just using the A to M chip name transition as an easy landmark to compare against.


AI on the device may be the real reason for an M4.


Previous iPads have had that for a long time. Since the A12 in 2018. The phones had it even earlier with the A11.

Sure this is faster but enough to make people care?

It may depend heavily on what they announce is in the next version of iOS/iPadOS.


That’s my point - if there’s a real on-device LLM it may be much more usable with the latest chip.


That's because the previous iPad Pros came with M2, not M3. They are comparing the performance with the previous generation of the same product.


> They are ALL comparing the M2 and M4. Why?

Well, the obvious answer is that those with older machines are more likely to upgrade than those with newer machines. The market for insta-upgraders is tiny.

edit: And perhaps an even more obvious answer: there are no iPads that contained the M3, so the comparison would be more useless. The M4 was just launched today exclusively in iPads.


Because the previous iPad was the M2. So "remember how fast your previous iPad was"? Well, this one is N better.


They know that anyone who has bought an M3 is good on computers for a long while. They're targeting people who have M2 or older Macs. People who own an M3 are basically going to buy anything that comes down the pipe, because who needs an M3 over an M2 or even an M1 today?


I’m starting to worry that I’m missing out on some huge gains (M1 Air user.) But as a programmer who’s not making games or anything intensive, I think I’m still good for another year or two?


You're not going to be missing out on much. I had the first M1 Air and recently upgraded to an M3 Air. The M1 Air has years of useful life left and my upgrade was for reasons not performance related.

The M3 Air performs better than the M1 in raw numbers but outside of some truly CPU or GPU limited tasks you're not likely to actually notice the difference. The day to day behavior between the two is pretty similar.

If your current M1 works you're not missing out on anything. For the power/size/battery envelope the M1 Air was pretty awesome, it hasn't really gotten any worse over time. If it does what you need then you're good until it doesn't do what you need.


I have a 2018 15" MBP, and an M1 Air and honestly they both perform about the same. The only noticeable difference is the MBP takes ~3 seconds to wake from sleep and the M1 is instant.


I have an M1 Air and I test drove a friend's recent M3 Air. It's not very different performance-wise for what I do (programming, watching video, editing small memory-constrained GIS models, etc)


I wanted to upgrade my M1 because it was going to swap a lot with only 8 gigs of RAM and because I wanted a machine that could run big LLMs locally. Ended up going 8G macbook air M1 -> 64G macbook pro M1. My other reasoning was that it would speed up compilation, which it has, but not by too much.

The M1 air is a very fast machine and is perfect for anyone doing normal things on the computer.


It doesn't seem plausible to me that Apple would release an "M3 variant" that can drive "tandem OLED" displays. So it's probably logical to package whatever chip progress (including process improvements) into the "M4".

And it can signal that "We are serious about iPad as a computer", using their latest chip.

Logical alignment to progress in engineering (and manufacturing), packaged smartly to generate marketing capital for sales and brand value creation.

Wonder how the newer Macs will use these "tandem OLED" capabilities of the M4.


The iPads skipped the M3 so they’re comparing your old iPad to the new one.


I like the comparison between much older hardware and brand new to highlight how far we've come.


> I like the comparison between much older hardware and brand new to highlight how far we've come.

That's ok, but why skip the previous iteration then? Isn't the M2 only two generations behind? It's not that much older. It's also a marketing blurb, not a reproducible benchmark. Why leave out comparisons with the previous iteration even when you're just hand-waving over your own data?


In this specific case, it's because iPad's never got the M3. They're literally comparing it with the previous model of iPad.

There were some disingenuous comparisons throughout the presentation going back to A11 for the first Neural Engine and some comparisons to M1, but the M2 comparison actually makes sense.


I wouldn't call the comparison to A11 disingenuous, they were very clear they were talking about how far their neural engines have come, in the context of the competition just starting to put NPUs in their stuff.

I mean, they compared the new iPad Pro to an iPod Nano, that's just using your own history to make a point.


Fair point—I just get a little annoyed when the marketing speak confuses the average consumer and felt as though some of the jargon they used could trip less informed customers up.


personally I think this is a comparison most people want. The M3 had a lot of compromises over the M2.

that aside, the M4 is about the Neural Engine upgrades above anything else (which probably should have been compared to the M3)


What are those compromises? I may buy an M3 MBP, so I would like to hear more.


The M3 Pro had some downgrades compared to the M2 Pro: fewer performance cores and lower memory bandwidth. This did not apply to the M3 and M3 Max.


Yes, kinda annoying. But on the other hand, given that Apple releases a new chip every 12 months, we can cut them some slack here, given that from AMD, Intel, or Nvidia we usually see a 2-year cadence.


There are probably easier problems to solve in the ARM space than in x86, considering the amount of money and time already spent on x86.

That’s not to say that any of these problems are easy, just that there’s probably more lower hanging fruit in ARM land.


And yet they seem to be the only people picking the apparently "low hanging fruit" in ARM land. We'll see about Qualcomm's Nuvia-based stuff, but that's been "nearly released" for what feels like years now, yet you still can't buy one to actually test.

And don't underestimate the investment Apple made - it's likely at a similar level to the big x86 incumbents. I mean AMD's entire Zen development team cost was likely a blip on the balance sheet for Apple.


They don't care as much for the ARM stuff because software development investment vastly outweighs the chip development costs.

Sure, maybe they can do better but at what cost and for what? The only thing Apple does truly better is performance per watt which is not something that is relevant for a large part of the market.

x86 stuff is still competitive performance-wise, especially in the GPU department, where Apple's attempts are rather weak compared to what is on offer across the pond. The Apple Silicon switch cost a large amount of developer effort for optimisation, and in the process a lot of software compatibility was lost; it took a long time for even the most popular software to get properly optimized, and some software houses even gave up on supporting macOS because it just wasn't worth the man-hour investment considering the tiny market.

This is why I am very skeptical about the Qualcomm ARM stuff, it needs to be priced extremely well to have a chance, if consumers do not pick it up in droves, no software port is going to happen in a timely manner and it will stay irrelevant. Considering the only thing much better than the current x86 offering is the performance per watt, I do not have a lot of hope, but I may be pleasantly surprised.

Apple aficionados keep raving about battery life, but it's not really something a lot of people care about (apart from smartphones, where Apple isn't doing any better than the rest of the industry).


> Qualcomm's Nuvia-based stuff, but that's been "nearly released" for what feels like years now

Launching at Computex in 2 weeks, https://www.windowscentral.com/hardware/laptops/next-gen-ai-...


Good to know that it's finally seeing the light. I thought they were still in a legal dispute with ARM about Nuvia's design?


Not privy to details, but some legal disputes can be resolved by licensing price negotiations, motivated by customer launch deadlines.


speaking of which, whatever happened to Qualcomm's bizarre assertion that ARM was pulling a sneak move in all its new licensing deals to outlaw third-party IP entirely and force ARM-IP-only?

there was one quiet "we haven't got anything like that in the contract we're signing with ARM" from someone else, and then radio silence. And you'd really think that would be major news, because it's massively impactful on pretty much everyone, since one of the major use-cases of ARM is as a base SOC to bolt your custom proprietary accelerators onto...

seemed like obvious bullshit at the time from a company trying to "publicly renegotiate" a licensing agreement they probably broke...


Again, not saying that they are easy (or cheap!) problems to solve, but that there are more relatively easy problems in the ARM space than the x86 space.

That's why Apple can release a meaningfully new chip every year, whereas it takes several years for x86 manufacturers.


> We'll see about Qualcomm's Nuvia-based stuff, but that's been "nearly released" for what feels like years now, but you still can't buy one to actually test.

That's more bound by legal than technical reasons...


Maybe for GPUs, but for CPUs both Intel and AMD release on a yearly cadence. Even when Intel has nothing new to release, the generation number is bumped.


> Apple always compares product to product, never component to component when it comes to processors.

I don't think this is true. When they launched the M3 they compared primarily to M1 to make it look better.


From their specs page, battery life is unchanged. I think they donated the chip power savings to offset the increased consumption of the tandem OLED


I've not seen discussion of how Apple likely scales chip performance to match the use profile of the specific device it's used in. An M2 in an iPad Air is very likely not the same as an M2 in an MBP or Mac Studio.


The GeekBench [1,2] benchmarks for M2 are:

Single core:

  iPad Pro (M2): 2539
  MacBook Air (M2): 2596
  MacBook Pro (M2): 2645

Multi core:

  iPad Pro (M2, 8-core): 9631
  MacBook Air (M2, 8-core): 9654
  MacBook Pro (M2, 8-core): 9642

So, it appears to be almost the same performance (until it throttles due to heat, of course).

1. https://browser.geekbench.com/ios-benchmarks

2. https://browser.geekbench.com/mac-benchmarks


Surprisingly, I think it is: I was going to comment that here, then checked Geekbench, and the single-core scores match for the M2 iPad/MacBook Pro/etc. at the same clock speed. I.e. M2 "base" = M2 "base", but core count differs, and with the desktops/laptops you get options for M2 Ultra Max SE bla bla.


A Ryzen 7840U in a gaming handheld is not (configured) the same as a Ryzen 7840U in a laptop, for that matter, so Apple is hardly unique here.


The manufacturer often targets a TDP that is reasonable for the thermals and battery life, but the CPU package is often the same.


Yeah, but the difference is that you usually don't get people arguing that it's the same thing or that it can be performance competitive in the long run. When it comes to Apple stuff, people say some irrational stuff that is totally bonkers...


Likely there is also a smaller battery as the iPad Pro is quite a bit thinner


I don't know, but the M3 MBP I got from work already gives the impression of using barely any power at all. I'm really impressed by Apple Silicon, and I'm seriously reconsidering my decision from years ago to never ever buy Apple again. Why doesn't everybody else use chips like these?


I have an M3 for my personal laptop and an M2 for my work laptop. I get ~8 hours if I'm lucky on my work laptop, but I have attributed most of that battery loss to all the "protection" software they put on my work laptop that is always showing up under the "Apps Using Significant Power" category in the battery dropdown.

I can have my laptop with nothing on screen, and the battery still points to TrendMicro and others as the cause of heavy battery drain while my laptop seemingly idles.

I recently upgraded my personal laptop to the M3 MacBook Pro and the difference is astonishing. I almost never use it plugged in because I genuinely get close to that 20-hour reported battery life. Last weekend I played a AAA video game through Xbox Cloud Gaming (awesome for Mac gamers btw) with essentially max graphics (rendered elsewhere and streamed to me, of course). I got sucked into it for like 5 hours and lost only 8% of my battery during that time, while playing a top-tier video game! It really blew my mind. I also use the GoLand IDE on there and have managed to get a full day of development done using only about 25-30% battery.

So yeah, whatever Apple is doing, they are doing it right. Performance without all the spyware that your work gives you makes a huge difference too.


Over the weekend, I accidentally left my work M3 unplugged with caffeinate running (so it doesn't sleep). It wasn't running anything particularly heavy, but still, on Monday, 80% charge left.

That's mindblowing. Especially since my personal laptop is a Thinkpad X1 Extreme. I can't leave that unplugged at all.


Apple quotes 18h of Apple TV playback or 12h of web browsing, so I will call a large amount of bullshit on that.

Even taking the marketing at face value, in the best-case scenario you would be looking at between 27% and 41% battery consumption for 5h of runtime. The real-world number will be worse than that, because you probably don't use the MBP at the low brightness they use for marketing benchmarks, and game streaming keeps the Wi-Fi chip constantly powered (video playback can buffer, hence the lower consumption).

There is no way to say this nicely, but can you stop lying ?


For the AAA video game example, I mean, it is interesting how far that kind of tech has come… but really that's just video streaming (maybe slightly more difficult because latency matters?) from the point of view of the laptop, right? The quality of the graphics there has more-or-less nothing to do with the battery.


I think the market will move to using chips like this, or at least have additional options. The new Snapdragon SOC is interesting, and I would suspect we could see Google and Microsoft play in this space at some point soon.


> is it REALLY half the power use of all times (so we'll get double the battery life)

I'm not sure what you mean by "of all times" but half the battery usage of the processor definitely doesn't translate into double the battery life since the processor is not the only thing consuming power.


Any product that uses this is more than just the chip, so you cannot get a proportional change in battery life.


Sure, but I also remember them comparing the M1 chip to the RTX 3090, and my MacBook M1 Pro doesn't really run games well.

So I've become really suspicious about any claims about performance done by Apple.


It's not just games. There is in fact not a lot of software that Apple Silicon runs well. In theory you get great battery life, but only to use software nobody wants to use, or to take longer running stuff that doesn't run well.

The problem is two-fold: first, the marketing bullshit does not match the reality, and second, the Apple converted will lie without even thinking about it to justify the outrageous price.

There are a lot of things I like about Apple hardware, but the reality is that they can charge so much because there is a lot of mythology around their products, and it just doesn't add up.

Now if only they could be bothered to actually make the software great (and not simpleton copies of what already exists), there would be an actual valid reason to unequivocally recommend their stuff, but they can't be bothered since they already make too much money as it is.


I mean, I remember Apple comparing the M1 Ultra to Nvidia's RTX 3090. While that chart was definitely putting a spin on things to say the least, and we can argue from now until tomorrow about whether power consumption should or should not be equalised, I have no idea why anyone would expect the M1 Pro (an explicitly much weaker chip) to perform anywhere near the same.

Also what games are you trying to play on it? All my M-series Macbooks have run games more than well enough with reasonable settings (and that has a lot more to do with OS bugs and the constraints of the form factor than with just the chipset).


They compared them in terms of perf/watt, which did hold up, but obviously implied higher performance overall.


That is the fault of the devs, because optimization for dedicated graphics cards is either integrated into the game engine or they just ship a version for RTX users.


Apple might use simplified and opaque plots to drive their point home, but they all too often undersell the differences. Independent reviews, for example, find that they not only hit the mark Apple mentions for things like battery life but often do slightly better...


Well battery life would be used by other things too right? Especially by that double OLED screen. "best ever" in every keynote makes me laugh at this point, but it doesn't mean that they're not improving their power envelope.


> so we'll get double the battery life

This is an absurd interpretation. Nobody hears that and says "they made the screen use half the energy".


You wouldn’t necessarily get twice the battery life. It could be less than that due to the thinner body causing more heat, a screen that utilizes more energy, etc


CPU is not the only way that power is consumed in a portable device. It is a large fraction, but you also have displays and radios.


Isn't 15% more battery life a huge improvement on a device already well known for long battery life?


Apple is one of the few companies that underpromise and overdeliver and never exaggerate.

Compared to the competition, I'd trust Apple much more than the Windows laptop OEMs.


If there is any dishonesty, I would wager it is a case of "it can double the battery life in low-power scenarios". It can go twice as long when doing word processing, for instance, and can potentially idle a lot lower.


Actually, TSMC's N3E process is somewhat of a regression from the first-generation 3nm process, N3. However, it is simpler and more cost-efficient, and everyone seems to want to get off that N3 process as quickly as possible. That seems to be the biggest reason Apple released the A17/M3 generation and now the M4 the way they did.

The N3 process is in the A17 Pro, the M3, M3 Pro, and M3 Max. The A17 Pro name seems to imply you won't find it trickling down to the regular iPhones next year. So we'll see that processor in phones only this year, since Apple discontinues their Pro range of phones every year; only the regular phones trickle downrange, lowering their prices. The M3 devices are all Macs that needed an upgrade due to their popularity: the MacBook Pro and MacBook Air. They made three chips for them, but they did not make an M3 Ultra for the lower-volume desktops. With the announcement of an M4 chip in iPads today, we can expect to see the MacBook Air and MacBook Pro upgraded to M4 soon, with an M4 Ultra introduced to match later. We can now expect those M3 devices to be discontinued instead of going downrange in price.

That would leave one device with an N3-process chip: the iMac. At its sales volume, I wouldn't be surprised if all the M3 chips that will go into it are made this year, with the model staying around for a year or two running on fumes.


N3E still has a +9% logic transistor density increase on N3 despite a relaxation to design rules, for reasons such as introduction of FinFlex.[1] Critically though, SRAM cell sizes remain the same as N5 (reversing the ~5% reduction in N3), and it looks like the situation with SRAM cell sizes won't be improving soon.[2][3] It appears more likely that designers particularly for AI chips will just stick with N5 as their designs are increasingly constrained by SRAM.

[1] https://semiwiki.com/semiconductor-manufacturers/tsmc/322688...

[2] https://semiengineering.com/sram-scaling-issues-and-what-com...

[3] https://semiengineering.com/sram-in-ai-the-future-of-memory/


SRAM has really stalled. I don't think 5nm was much better than 7nm. On ever-smaller nodes, SRAM will be taking up a larger and larger percentage of the entire chip, but the cost is much higher on the smaller nodes even if the performance is not better.

I can see why AMD started putting the SRAM on top.


It wasn't immediately clear to me why SRAM wouldn't scale like logic. This article[1] and this paper[2] shed some light.

From what I can gather the key aspects are that decreased feature sizes lead to more variability between transistors, but also to less margin between on-state and off-state. Thus a kind of double-whammy. In logic circuits you're constantly overwriting with new values regardless of what was already there, so they're not as sensitive to this, while the entire point of a memory circuit is to reliably keep values around.

Alternate transistor designs such as FinFET, gate-all-around, and the like can mitigate some of this, say by reducing transistor-to-transistor variability by some factor, but they can't get around the root issue.

[1]: https://semiengineering.com/sram-scaling-issues-and-what-com...

[2]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9416021/


The signs certainly all point to the initial version of N3 having issues.

For instance, Apple supposedly required a deal where they only paid TSMC for usable chips per N3 wafer, and not for the entire wafer.

https://arstechnica.com/gadgets/2023/08/report-apple-is-savi...


My read on the absurd number of Macbook M3 SKUs was that they had yield issues.


There is also the fact that we currently have an iPhone generation where only the Pro models got updated to chips on TSMC 3nm.

The next iPhone generation is said to be a return to form with all models using the same SOC on the revised version of the 3nm node.

> Code from the operating system also indicates that the entire iPhone 16 range will use a new system-on-chip – t8140 – Tahiti, which is what Apple calls the A18 chip internally. The A18 chip is referenced in relation to the base model iPhone 16 and 16 Plus (known collectively as D4y within Apple) as well as the iPhone 16 Pro and 16 Pro Max (referred to as D9x internally)

https://www.macrumors.com/2023/12/20/ios-18-code-four-new-ip...


On one hand, it's crazy. On the other hand, it's pretty typical for the industry.

Average performance per watt doubling time is 2.6 years: https://newsroom.arm.com/blog/performance-per-watt#:~:text=T....


M2 was launched in June 2022 [1] so a little under 2 years ago. Apple is a bit ahead of that 2.6 years, but not by much.

[1] https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with...


If they maintain that pace, it will start compounding incredibly quickly. If we round to 2 years vs 2.5 years, after just a decade you're an entire doubling ahead.
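
To put rough numbers on that, here is a back-of-the-envelope sketch in Python, using the 2.0 vs 2.6 year doubling times quoted in the comments above (nothing here beyond that arithmetic):

    # cumulative perf/watt gain after `years` at a given doubling time
    def gain(years: float, doubling_time: float) -> float:
        return 2 ** (years / doubling_time)

    decade = 10
    print(gain(decade, 2.0))                      # ~32x
    print(gain(decade, 2.6))                      # ~14.4x
    print(gain(decade, 2.0) / gain(decade, 2.6))  # ~2.2, i.e. roughly one extra doubling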


Note that performance per watt is 2x higher at both chips' peak performance. This is in many ways an unfair comparison for Apple to make.


It's a shame performance per watt doesn't double every 2.6 years for modems and screens.


Watts per pixel probably did something close for a long time for screens.

Same for Watts per bit.

There's just a lot more pixels and bits.


And here it is in an OS that can't even max out an M1!

That said, the function keys make me think "and it runs macOS" is coming, and THAT would be extremely compelling.


We've seen a slow march over the last decade towards the unification of iOS and macOS. Maybe not a "it runs macOS", but an eventual "they share all the same apps" with adaptive UIs.


Unfortunately I think "they share all the same apps" will not include a terminal with root access, which is what would really be needed to make iPad a general purpose computer for development

It's a shame, because it's definitely powerful enough, and the idea of traveling with just an iPad seems super interesting, but I imagine they will not extend those features to any devices besides macs


I mean, it doesn't even have to be true "root" access. Chromebooks have a containerized linux environment, and aside from the odd bug, the high end ones are actually great dev machines while retaining the "You spend most of your time in the browser so we may as well bake that into the OS" base layer.


I actually do use a Chromebook in this way! Of all the Linux machines I've used, it's the one I like best for exactly that reason: it gives me a space to work and an OS that I don't have to babysit or mentally maintain.


Been a while since I've used a chromebook but iirc there's ALSO root access that's just a bit more difficult to access, and you do actually need to access it from time to time for various reasons, or at least you used to.


You're thinking of Crouton, the old method of using Linux on a Chromebook (which involved disabling boot protection and setting up a second Linux install in a chroot, with a keybind that allowed you to toggle between the two environments).

Crostini is the new containerized version that is both officially supported and integrated into ChromeOS


The writing was on the wall with the introduction of Swift, IMO. Since then it's been a matter of overcomplicating the iPad and dumbing down the macOS interfaces to attain this goal. So much wasted touch/negative space in macOS since Catalina to compensate for fingers and adaptive interfaces; so many hidden menus and long taps squirreled away in iOS.


They probably saw the debacle that was Windows 8 and thought merging a desktop and touch OS is a decade-long gradual task, if that is even the final intention.

Unlike MS that went with the Big Bang in your face approach that was oh-so successful.


At this point, there's two fundamentally different types of computing that will likely never be mergeable in a satisfactory way.

We now have 'content consumption platforms' and 'content creation platforms'.

While attempts have been made to try and enable some creation on locked-down touchscreen devices, you're never going to want to try and operate a fully-featured version of Photoshop, Maya, Visual Studio, etc on them. And if you've got a serious workstation with multiple large monitors and precision input devices, you don't want to have dumbed-down touch-centric apps forced upon you Win8-style.

The bleak future that seems likely is that the 'content creation platforms' become ever more niche and far more costly. Barriers to entry for content creators are raised significantly as mainstream computing is mostly limited to locked-down content consumption platforms. And Linux is only an option for as long as non-locked-down hardware is available for sensible prices.


Kinda weird to exclude Procreate, Affinity, Final Cut, Logic, etc. from your definition of content creation. The trend has clearly been more professional and creative apps year over year and ever more capable devices to run them on. I mean, you're right that nobody wants to use Photoshop on the iPad, but that's because there are better options.

Honestly, the biggest barrier to creativity is thinking you need a specific concept of a "serious workstation" to do it. Plenty of people are using $2k+ desktops just to play video games.


In these cases, it still seems that tablet-based tools are very much 'secondary tools', more of a sketchpad to fiddle with ideas while on the move, rather than 'production tools'.

Then there's the whole dealing with lots of files and version control side of things, essential for working as part of a team. Think about creating (and previewing, and finally uploading) a very simple web page, just HTML and a couple of images, entirely on an iPad. While it's probably quite possible these days, I suspect the workflow would be abysmal compared to a 'proper computer' where the file system isn't hidden from you and where you're not constantly switching between full-screen apps.

And that's before you start dealing with anything with significant numbers of files in deep directory structures, or doing more technical image creation (e.g. dealing with alpha channels). And of course, before testing your webpage on all the major browsers. Hmm...


There are so many artists who exclusively work on their iPad. It does seem cumbersome for a whole studio to use iPads, but they can be a powerhouse for an individual


It seems weirdly arbitrary to say that tools people have been using in production aren't "production tools".


But nobody is using an iPad as a sole production tool. It's part of the production tooling, but it's not exactly an essential part, or a part that can't be dropped or replaced easily, unlike a "real" computer.

It's rather disingenuous to pretend that an iPad can be sufficient. At its price tag it is still an extremely expensive accessory, and people pretending otherwise are just full of it. There are enough reviews/testimonies saying as much (even from the diehard fans) for it to be an obvious fact.


> At this point, there's two fundamentally different types of computing that will likely never be mergeable in a satisfactory way.

This is a completely artificial creation by Apple and Google to extract more money from you. Nothing technical prevents one from using a full OS on a phone today.

Sent from my Librem 5 running desktop GNU/Linux.


On the other hand, a $4000 mid-range MacBook doesn't have a touchscreen, and that's a heresy. Granted, you can get the one with the emoji bar, but why interact using touch on a bar when you could touch the screen directly?

Maybe the end game for Apple isn’t the full convergence, but just having a touch screen on the Mac.


Why would you want greasy finger marks on your Macbook screen?

Not much point having a touchscreen on a Macbook (or any laptop really), unless the hardware has a 'tablet mode' with a detachable or fold-away keyboard.


Mouse and keyboard is still a better interface for A LOT of work. I have yet to find a workflow for any of my professional work that would be faster or easier if you gave me a touchscreen.

There are plenty of laptops that do have touchscreens, and it has always felt more like a gimmick than a useful hardware interface.


> Barriers to entry for content creators are raised significantly as mainstream computing is mostly limited to locked-down content consumption platforms. And Linux is only an option for as long as non-locked-down hardware is available for sensible prices.

Respectfully, I partially disagree. It has never been easier or more affordable to get into creating content. You can create cinema-grade video with used cameras that sell for a few hundred dollars. You can create Pixar-level animation on open source software and a pretty cheap computer. A computer that can edit 4K video costs less than the latest iPhone. There are people who create plenty of content with just a phone. Simply put, it is orders of magnitude cheaper and easier to create content than it was less than two decades ago, which is why we are seeing so much content get made. I used to work for a newspaper, and it used to be a lot harder and more expensive to produce audiovisual media.

My strong feeling is that the problem of content being locked into platforms has precious little to do with consumption-oriented hardware and more to do with the platforms. Embrace -> extinguish -> exclusivity -> enshittify seems to be the model behind basically anything that hosts user content these days.


I'd be very surprised if Apple is paying attention to anything that's happening with windows. At least as a divining rod for how to execute.


People have complained about why Logic Pro / Final Cut wasn't ported to the iPad Pro line. The obvious answer is that getting those workflows done properly takes time.


You're right about the reason but wrong about the timeline: Jobs saw Windows XP Tablet Edition and built a skunkworks at Apple to engineer a tablet that did not require a stylus. This was purely to spite a friend[0] of his that worked at Microsoft and was very bullish on XP tablets.

Apple then later took the tablet demo technology, wrapped it up in a very stripped-down OS X with a different window server and UI library, and called it iPhone OS. Apple was very clear from the beginning that Fingers Can't Use Mouse Software, Damn It, and that the whole ocean needed to be boiled to support the new user interface paradigm[1]. They even have very specific UI rules specifically to ensure a finger never meets a desktop UI widget, including things like iPad Sidecar just not forwarding touch events at all and only supporting connected keyboards, mice, and the Apple Pencil.

Microsoft's philosophy has always been the complete opposite. Windows XP through 7 had tablet support that amounted to just some affordances for stylus users layered on top of a mouse-only UI. Windows 8 was the first time they took tablets seriously, but instead of just shipping a separate tablet OS or making Windows Phone bigger, they turned it into a parasite that ate the Windows desktop from the inside-out.

This causes awkwardness. For example, window management. Desktops have traditionally been implemented as a shared data structure - a tree of controls - that every app on the desktop can manipulate. Tablets don't support this: your app gets one[2] display surface to present their whole UI inside of[3], and that surface is typically either full-screen or half-screen. Microsoft solved this incongruity by shoving the entire Desktop inside of another app that could be properly split-screened against the new, better-behaved tablet apps.

If Apple were to decide "ok let's support Mac apps on iPad", it'd have to be done in exactly the same way Windows 8 did it, with a special Desktop app that contained all the Mac apps in a penalty box. This is so that they didn't have to add support for all sorts of incongruous, touch-hostile UI like floating toolbars, floating pop-ups, global menus, five different ways of dragging-and-dropping tabs, and that weird drawer thing you're not supposed to use anymore, to iPadOS. There really isn't a way to gradually do this, either. You can gradually add feature parity with macOS (which they should), but you can't gradually find ways to make desktop UI designed by third-parties work on a tablet. You either put it in a penalty box, or you put all the well-behaved tablet apps in their own penalty boxes, like Windows 10.

Microsoft solved Windows 8's problems by going back to the Windows XP/Vista/7 approach of just shipping a desktop for fingers. Tablet Mode tries to hide this, but it's fundamentally just window management automation, and it has to handle all the craziness of desktop. If a desktop app decides it wants a floating toolbar or a window that can't be resized[4], Tablet Mode has to honor that request. In fact, Tablet Mode needs a lot of heuristics to tell what floating windows pair with which apps. So it's a lot more awkward for tablet users in exchange for desktop users having a usable desktop again.

[0] Given what I've heard about Jobs I don't think Jobs was psychologically capable of having friends, but I'll use the word out of convenience.

[1] Though the Safari team was way better at building compatibility with existing websites, so much so that this is the one platform that doesn't have a deep mobile/desktop split.

[2] This was later extended to multiple windows per app, of course.

[3] This is also why popovers and context menus never extend outside their containing window on tablets. Hell, also on websites. Even when you have multiwindow, there's no API surface for "I want to have a control floating on top of my window that is positioned over here and has this width and height".

[4] Which, BTW, is why the iPad has no default calculator app. Before Stage Manager there was no way to have a window the size of a pocket calculator.


Clip Studio is one Mac app port I’ve seen that was literally the desktop version moved to the iPad. It uniquely has the top menu bar and everything. They might have made an exception because you’re intended to use the pencil and not your fingers.


Honestly, using a stylus isn't that bad. I've had to support floor traders for many years and they all still use a Windows-based tablet + a stylus to get around. Heck, even Palm devices were a pleasure to use. Not sure why Steve was so hell bent against them, it probably had to do with his beef with Sculley/Newton.


> Palm devices were a pleasure to use.

RIP Graffiti.


Even with the advantage of time, I don't think Microsoft would have been able to do it. They can't even get their own UI situated, much less adaptive. Windows 10/11 is this odd mishmash of old and new, without a consistent language across it. They can't unify what isn't even cohesive in the first place.


>Unlike MS that went with the Big Bang in your face approach that was oh-so successful.

It was kind of successful, touchscreen laptops see pretty big sales nowadays. I don't know what crack they were smoking with Windows 8.0 though.


I will settle for this: being able to connect 2 monitors to the iPad and select which audio device sound goes through. If I could run IntelliJ and compile Rust on the iPad, I would promise to upgrade to the new iPad Pro as soon as it is released, every time.


Agreed, this will be the way forward in the future. I've already seen one of my apps (Authy) say "We're no longer building a macOS version, just install the iPad app on your mac".

That's great, but you need an M-series chip in your Mac for that to work, so backwards compatibility only goes back a few years at this point, which is fine for corporate upgrade cycles but might be a bit short for consumers at this time. But it will be fine in the future.


Until an "iPhone" can run brew, all my developer tools, steam, epic games launcher, etc it's hardly interesting.


> Maybe not a "it runs macOS", but an eventual "they share all the same apps" with adaptive UIs

M-class MacBooks can already run many iPhone and iPad apps.


I think so too, especially after the split from iOS to iPadOS. Hopefully they'll show something during this year's WWDC.


> And here it is in an OS that can't even max out an M1

Do you really want your OS using 100% of CPU?


They mean that this OS only runs iPad apps, it doesn't let you run the kind of software you expect to take full advantage of the CPU.


What function keys?


The new Magic Keyboard has a laptop-style row of function keys (and esc!).


Ultimately, does it matter?

Michelin-starred restaurants not only have top-tier chefs. They have buyers who negotiate with food suppliers to get the best ingredients they can at the lowest prices they can. Having a preferential relationship with a good supplier is as important to the food quality and the health of the business as having a good chef to prepare the dishes.

Apple has top-tier engineering talent but they are also able to negotiate preferential relationships with their suppliers, and it's both those things that make Apple a phenomenal tech company.


Qualcomm is also with TSMC and their newer 4nm processor is expected to stay competitive with the M series.

If the magic comes mostly from TSMC, there's a good chance for these claims to be true and to have a series of better chips coming on the other platforms as well.


“Stay” competitive implies they’ve been competitive. Which they haven’t.

I’m filing this into the bin with all the other “This next Qualcomm chip will close the performance gap” claims made over the past decade. Maybe this time it’ll be true. I wouldn’t bet on it.


Point taken. I used "stay" as in: their next rumored/leaked chip wouldn't be a single anomalous success but the start of a trend that could extend to the X2 and X3 Elite chips coming after.

Basically we'd need some basis to believe they'll be progressively improving at more or less the same pace as Intel's or Apple's chips to get on board with ARM laptops for Windows/linux.

Otherwise I don't see software makers caring enough to port their builds to ARM as well.


This info is much more useful than a comparison to restaurants.


Does Qualcomm have any new CPU cores besides the one that ARM claims they can't build due to licensing?


The one being announced on May 20th at Computex? https://news.ycombinator.com/item?id=40288969


Potentially > 2x greater battery life for the same amount of compute!

That is pretty crazy.

Or am I missing something?


Sadly, this is only processor power consumption; you need to put power into a whole lot of other things to make a useful computer… a display backlight and the system's RAM come to mind as particular offenders.


The backlight is now the main bottleneck for consumption-heavy uses. I wonder what the main advancements happening there to optimize the wattage are.


If the use cases involve working in dark terminals all day or watching movies with dark scenes, or if the general theme is dark, maybe the new OLED display will help reduce the display power consumption too.


AMD GPUs have "Adaptive Backlight Management", which reduces your screen's backlight but then tweaks the colors to compensate. For example, my laptop's backlight is set at 33%, but with ABM it reduces my backlight to 8%. Personally I don't even notice it is on / my screen seems just as bright as before, but when I first enabled it I did notice some slight difference in colors, so it's probably not suitable for designers/artists. I'd 100% recommend it for coders though.
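
If I remember right, on Linux this is exposed as the amdgpu driver's abmlevel knob; treat the exact parameter name and value range as an assumption and check your kernel's amdgpu documentation before relying on it:

    # kernel command line (assumed parameter; 0 = off, higher values = more aggressive backlight reduction)
    amdgpu.abmlevel=3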


Strangely, Apple seems to be doing the opposite for some reason (color accuracy?): dimming the display doesn't seem to reduce the backlight as much, and they appear to be using software dimming in combination with it, even at "max" brightness.

Evidence can be seen when opening up iOS apps, which seem to glitch out and reveal the brighter backlight [1]. Notice how #FFFFFF white isn't the same brightness as the white in the iOS app.

[1] https://imgur.com/a/cPqKivI


The max brightness of the desktop is gonna be lower than the actual max brightness of the panel, because the panel needs to support HDR content. That brightness would be too much for most cases


This was a photo of my MBA 15" which doesn't have an HDR capable screen afaik. Additionally, this artifacting happens at all brightness levels, including the lowest.

It also just doesn't seem ideal that some apps (iOS) appear much brighter than the rest of the system. HDR support in macOS is a complete mess, although I'm not sure if Windows is any better.


Please give me an external ePaper display so I can just use Spacemacs in a well-lit room!


Onyx makes a 25" HDMI e-ink display [0]. It's pricey.

[0] https://onyxboox.com/boox_mirapro

edit: 25", not 27"


I'm still waiting for the technology to advance. People can't reasonably spend $1500 on the world's shittiest computer monitor, even if it is on sale.


Dang, yeah, this is the opposite of what I had in mind

I was thinking, like, a couple hundred dollar Kindle the size of a big iPad I can plug into a laptop for text-editing out and about. Hell, for my purposes I'd love an integrated keyboard.

Basically a second, super-lightweight laptop form-factor I can just plug into my chonky Macbook Pro and set on top of it in high-light environments when all I need to do is edit text.

Honestly not a compelling business case now that I write it out, but I just wanna code under a tree lol


I think we're getting pretty close to this. The Remarkable 2 tablet is $300, but can't take video input and software support for non-notetaking is near non-existent. There's even a keyboard available. Boox and Hisense are also making e-ink tablets/phones for reasonable prices.


A friend bought it & I had a chance to see it in action.

It is nice for some very specific use cases. (They're in the publishing/typesetting business. It's… idk, really depends on your usage patterns.)

Other than that, yeah, the technology just isn't there yet.


If that existed as a drop-in screen replacement for the Framework laptop, with a high-refresh-rate color Gallery 3 panel, then I'd buy it at that price point in a heartbeat.

I can't replace my desktop monitor with eink because I occasionally play video games. I can't use a 2nd monitor because I live in a small apartment.

I can't replace my laptop screen with greyscale because I need syntax highlighting for programming.


Maybe the $100 nano-texture screen will give you the visibility you want. Not the low power of a epaper screen though.

Hmm, emacs on an epaper screen might be great if it had all the display update optimization and "slow modem mode" that Emacs had back in the TECO days. (The SUPDUP network protocol even implemented that at the client end and interacted with Emacs directly!)


QD-oled reduces it by like 25% I think? But maybe that will never be in laptops, I'm not sure.


QD-OLED is an engineering improvement, i.e. combining existing researched technology to improve the result product. I wasn't able to find a good source on what exactly it improves in efficiency, but it's not a fundamental improvement in OLED electrical→optical energy conversion (if my understanding is correct.)

In general, OLED screens seem to have an efficiency around 20-30%. Some research departments seem to be trying to bump that up [https://www.nature.com/articles/s41467-018-05671-x], which I'd be more hopeful about…

…but, honestly, at some point you just hit the limits of physics. It seems internal scattering is already a major problem; maybe someone can invent pixel-sized microlasers and that'd help? More than 50-60% seems like a pipe dream at this point…

…unless we can change to a technology that fundamentally doesn't emit light, i.e. e-paper and the likes. Or just LCD displays without a backlight, using ambient light instead.


Is the iPad Pro not yet on OLED? All of Samsung's flagship tablets have had OLED screens for well over a decade now. It eliminates the need for backlighting, has superior contrast, and is pleasant to use in low-light conditions.


The iPad that came out today finally made the switch. iPhones made the switch around 2016. It does seem odd how long it took for the iPad to switch, but Samsung definitely switched too early: my Galaxy Tab 2 suffered from screen burn in that I was never able to recover from.


LineageOS has an elegant solution for OLED burn-in: imperceptibly shift persistent UI elements by a few pixels over time.
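
Not necessarily how LineageOS implements it, but the idea is simple enough to sketch (hypothetical helper, with made-up radius and period, purely to illustrate the concept):

    import math, time

    # Drift static UI elements around a tiny circle over time so no single
    # OLED subpixel is lit continuously in the exact same spot.
    def burn_in_offset(radius_px: int = 2, period_s: float = 600.0) -> tuple[int, int]:
        phase = 2 * math.pi * (time.time() % period_s) / period_s
        return round(radius_px * math.cos(phase)), round(radius_px * math.sin(phase))

    dx, dy = burn_in_offset()
    # draw the status bar (or any persistent element) at (x + dx, y + dy)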


I'm not sure how OLED and backlit LCD compare power-wise exactly, but OLED screens still need to put off a lot of light, they just do it directly instead of with a backlight.


that's still amazing, to me.

I don't expect an M4 macbook to last any longer than an M2 macbook of otherwise similar specs; they will spend that extra power budget on things other than the battery life specification.


Thanks. That makes sense.


Comparing the tech specs for the outgoing and new iPad Pro models, that potential is very much not real.

Old: 28.65 Wh (11") / 40.88 Wh (13"), up to 10 hours of surfing the web on Wi-Fi or watching video.

New: 31.29 Wh (11") / 38.99 Wh (13"), up to 10 hours of surfing the web on Wi-Fi or watching video.


A more efficient CPU can't improve that spec because those workloads use almost no CPU time and the display dominates the energy consumption.
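
You can back that out from the spec-sheet numbers above (rough arithmetic, ignoring conversion losses; the figures are just the ones quoted in this thread):

    battery_wh = 38.99   # new 13" iPad Pro capacity, from the specs above
    rated_hours = 10     # "up to 10 hours" of web browsing or video

    avg_system_watts = battery_wh / rated_hours   # ~3.9 W for the whole device
    # If the SoC only draws a fraction of a watt during web/video playback,
    # even halving its consumption barely moves this number; the display dominates.
    print(avg_system_watts)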


Unfortunately Apple only ever thinks about battery life in terms of web surfing and video playback, so we don't get official battery-life figures for anything else. Perhaps you can get more battery life out of your iPad Pro web surfing by using dark mode, since OLEDs should use less power than IPS displays with darker content.


Yeah double the PPW does not mean double the battery, because unless you're pegging the CPU/SOC it's likely only a small fraction of the power consumption of a light-use or idle device, especially for an SOC which originates in mobile devices.

Doing basic web navigation with some music in the background, my old M1 Pro has short bursts at ~5W (for the entire SoC) when navigating around, a pair of watts for mild webapps (e.g. checking various channels in discord), and typing into this here textbox it's sitting happy at under half a watt, with the P-cores essentially sitting idle and the E cores at under 50% utilisation.

With a 100Wh battery that would be a "potential" of 150 hours or so. Except nobody would ever sell it for that, because between the display and radios the laptop's actually pulling 10~11W.
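
For anyone who wants to redo that arithmetic, it's trivial (the numbers below are just the ones from this comment; on a Mac you can eyeball SoC power yourself with sudo powermetrics):

    battery_wh = 100        # roughly a big MacBook Pro pack
    soc_light_use_w = 0.67  # the ~2/3 W light-use SoC draw mentioned above
    whole_laptop_w = 10.5   # display, radios, and everything else included

    print(battery_wh / soc_light_use_w)  # ~150 h "potential" if only the SoC mattered
    print(battery_wh / whole_laptop_w)   # ~9.5 h, which is closer to what you actually get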


On my M1 air, I find for casual use of about an hour or so a day, I can literally go close to a couple weeks without needing to recharge. Which to me is pretty awesome. Mostly use my personal desktop when not on my work laptop (docked m3 pro).


So this could be a bit helpful for heavier duty usage while on battery.


Ok, but is it twice as fast during those 10 hours, leading to 20 hours of effective websurfing? ;)


Isn't this weird: a new chip consumes half the power, but the battery life is the same?


No, they have a "battery budget". If the CPU power draw goes down, that means the budget goes up and you can spend it on other things, like a nicer display or some other feature.

When you say "up to 10 hours" most people will think "oh nice that's an entire day" and be fine with it. It's what they're used to.

Turning that into 12 hours might be possible but are the tradeoffs worth it? Will enough people buy the device because of the +2 hour battery life? Can you market that effectively? Or will putting in a nicer fancy display cause more people to buy it?

We'll never get significant battery life improvements because of this, sadly.


The OLED likely adds a fair bit of draw; they're generally somewhat more power-hungry than LCDs these days, assuming like-for-like brightness. Realistically, this will be the case until MicroLEDs are available for non-completely-silly money.


This surprises me. I thought the big power downside of LCD displays is that they use filtering to turn unwanted color channels into waste heat.

Knowing nothing else about the technology, I assumed that would make OLED displays more efficient.


OLED will use less power for a screen of black, and LCD will use less for a screen of white. Now take the average of whatever content is on your screen, and for you it may be better or it may be worse.

White background document editing, etc., will be worse, and this is rather common.
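
A toy model (the constants are completely made up, purely to illustrate why content matters for OLED but not for LCD):

    # Hypothetical panel power model -- invented coefficients, not measurements.
    def lcd_watts(avg_pixel_level: float) -> float:
        return 4.0                             # backlight always on, content-independent

    def oled_watts(avg_pixel_level: float) -> float:
        return 0.5 + 7.0 * avg_pixel_level     # emission scales with content brightness

    for apl, label in [(0.1, "dark terminal"), (0.8, "white document")]:
        print(label, lcd_watts(apl), oled_watts(apl))
    # dark terminal: OLED ~1.2 W vs LCD 4 W; white document: OLED ~6.1 W vs LCD 4 W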


Can’t beat the thermodynamics of exciton recombination.

https://pubs.acs.org/doi/10.1021/acsami.9b10823


It's not weird when you consider that browsing the web or watching videos has the CPU idle or near enough, so 95% of the power draw is from the display and radios.


this


Wait a bit. M2 wasn't as good as the hype was.


That's because M2 was on the same TSMC process generation as M1. TSMC is the real hero here. M4 is the same generation as M3, which is why Apple's marketing here is comparing M4 vs M2 instead of M3.


Actually, M4 is reportedly on a more cost-efficient TSMC N3E node, where Apple was apparently the only customer on the more expensive TSMC N3B node; I'd expect Apple to move away from M3 to M4 very quickly for all their products.

https://www.trendforce.com/news/2024/05/06/news-apple-m4-inc....


Yeah and M2 was on N5P vs M1's N5, but it was still N5. M4 is still N3.


Saying TSMC is a hero ignores the thousands of suppliers that improved everything required for TSMC to operate. TSMC is the biggest, so they get the most experience on all the new toys the world's engineers and scientists are building.


It's almost as if every part of the stack -- from the uArch that Apple designs down to the insane machinery from ASML, to the fully finished SoC delivered by TSMC -- is vitally important to creating a successful product.

But people like to assign credit solely to certain spaces if it suits their narrative (lately, Apple isn't actually all that special at designing their chips, it's all solely the process advantage)


Saying TSMC's success is due to their suppliers ignores the fact that all of their competitors failed to keep up despite having access to the same suppliers. TSMC couldn't do it without ASML, but Intel and Samsung failed to do it even with ASML.

In contrast, when Apple's CPU and GPU competitors get access to TSMC's new processes after Apple's exclusivity period expires, they achieve similar levels of performance (except for Qualcomm because they don't target the high end of CPU performance, but AMD does).


TSMC being the biggest lets them experiment at 10x the rate. It turns out they had the right business model, one that Intel didn't notice was there; it just requires dramatically lower margins, higher volumes, and far lower-paid engineers.


I thought M3 and M4 were different processes though. Higher yield for the latter or such.


And why are other PC vendors not latching on to the hero?


Apple often buys their entire capacity (of a process) for quite a while.


Apple pays TSMC for exclusivity on new processes for a period of time.


2x efficiency vs a 2 year old chip is more or less in line with expectations (Koomey's law). [1]

[1] https://en.wikipedia.org/wiki/Koomey%27s_law


Is the CPU/GPU really dominating power consumption that much?


Nah, GP is off their rocker. For the workloads in question the SOC's power draw is a rounding error, low single-digit percent.


They don't mention which metric is 50% higher.

However, we have more CPU cores, a newer core design, and a newer process node which would all contribute to improving multicore CPU performance.

Also, Apple is conservative on clock speeds, but those do tend to get bumped up when there is a new process node as well.


That doesn't seem to be reflected in the battery life of these; they have the exact same battery life. Does that mean it's not entirely accurate? Since they don't indicate the battery capacity in their specs, it's hard to confirm.


I haven't paid too much attention today, but what I did see with the iPad Pro was that they're using an OLED display (maybe even some kind of double layer OLED for increased brightness if I'm understanding the marketing jargon?).

I believe that OLED is much more power hungry than the previous display type (LED backlit LCD of some type?). I could be wrong, but in TV land that's the case...

Could explain, at least partly, why run time isn't greatly increased.


They made the battery on the 13" 5% smaller than the previous generation. They also write that they tested the device with auto-brightness disabled and brightness set at 50%. Not sure who the brightness slider works on the new iPads since the iPhones don't get max brightness unless auto-brightness is enabled. So 50% might be 1000/2=500 nits on the M4 iPad Pro and 600/2=300 nits on the M2 iPad Pro, or they might both be about 300 nits.


Also the thousands of suppliers that have improved the equipment and materials that feed into the TSMC fabs.


They mention just M2 and M4 - curious, how does M3 fit into that?

I.e. would it sit between, or closer to M2 or M4?


> I wonder how much of that is Apple engineering and how much is TSMC improving their 3nm process.

I think Apple's design choices had a huge impact on the M1's performance but from there on out I think it's mostly due to TSMC.


It is almost certainly half as much power in the RMS sense, not absolute.


Considering the cost difference, would that still make the M4 better? Or are whatever savings in power offset by the price?


Breathtaking


> And compared with the latest PC chip in a thin and light laptop, M4 can deliver the same performance using just a fourth of the power

It can deliver the same performance as itself at just a fourth of the power than it's using? That's incredible!



