Intel 48 core press release (intel.com)
28 points by nearestneighbor on Dec 3, 2009 | 28 comments


In 2003, Intel was saying they'd have 10GHz processors by the end of the decade.

http://www.design-reuse.com/news/4850/intel-building-blocks-...


Here's an announcement from a different company for a 64-core processor from 2007. http://arstechnica.com/hardware/news/2007/08/MIT-startup-rai...


Tilera's chip isn't really comparable though: it's really just a big DSP, with an extremely limited memory access model more reminiscent of the Cell SPUs than an ordinary CPU. This, combined with a custom instruction set and the lack of decent caches and vector units, lets you get away with many more cores on much less silicon using much less power--albeit at a clear cost.


I think Tilera's cores are closer to MIPS than to DSPs. It's CPU-ish enough to run Linux and a LAMP stack. That's enough for most of us.


I'm pretty sure their memory access model permanently disqualifies their chips from being "normal CPUs".

You can run Linux on a TI OMAP DSP too.


OMAP isn't a DSP; it's a dual-core part with an ARM on one side and a DSP on the other. I don't know of any port of Linux to an actual DSP.


The OMAP is not a DSP. It's an ARM coupled to a DSP thingie. Linux runs on the ARM side.


(As I said yesterday) Actually Tilera is better than Cell or this Intel chip because Tilera is cache-coherent SMP and runs regular Linux with pthreads.
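To make concrete what "cache-coherent SMP" and "regular Linux with pthreads" buy you, here's a minimal, illustrative sketch: an ordinary shared-memory C program that splits a sum across however many cores the OS reports. Nothing in it is Tilera- or SCC-specific.

  /* Plain shared-memory pthreads code -- the kind of program a
   * cache-coherent SMP chip runs unmodified. Illustrative only;
   * nothing here is Tilera- or SCC-specific. */
  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  #define N (1 << 22)

  static double data[N];                 /* shared by every thread */

  struct slice { size_t start, end; double partial; };

  static void *sum_slice(void *arg)
  {
      struct slice *s = arg;
      double acc = 0.0;
      for (size_t i = s->start; i < s->end; i++)
          acc += data[i];
      s->partial = acc;
      return NULL;
  }

  int main(void)
  {
      long ncores = sysconf(_SC_NPROCESSORS_ONLN);   /* 2 on a laptop, 48 here */
      if (ncores < 1) ncores = 1;

      for (size_t i = 0; i < N; i++)
          data[i] = 1.0;

      pthread_t *threads = malloc(ncores * sizeof *threads);
      struct slice *slices = malloc(ncores * sizeof *slices);
      size_t chunk = N / ncores;

      for (long t = 0; t < ncores; t++) {
          slices[t].start = t * chunk;
          slices[t].end   = (t == ncores - 1) ? N : (t + 1) * chunk;
          pthread_create(&threads[t], NULL, sum_slice, &slices[t]);
      }

      double total = 0.0;
      for (long t = 0; t < ncores; t++) {
          pthread_join(threads[t], NULL);
          total += slices[t].partial;    /* partials are visible after join */
      }

      printf("cores=%ld sum=%.0f\n", ncores, total);
      free(threads);
      free(slices);
      return 0;
  }

Build with gcc -O2 -pthread; the same binary runs on 2 cores or 48. Cache coherence is what makes this shared-data style possible at all -- without it (as on the Cell SPUs) you'd be copying slices of the array to each core explicitly.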


The difference is that this 48-core chip is actual hardware, in the here and now.


Maybe it's useful for science, but it seems like the moment has passed when most consumers were blocked by CPU power.

My next upgrade will be to an SSD. Making my CPU go a bit faster is pretty useless when it's usually waiting on disk/network anyway.

I'm betting on a move the other way - toward low-power 'internet thin clients' like a netbook with Chrome on it. That'll do for most people.


I don't see it as a common desktop chip in the near future, but for heavy web loads, it seems quite adequate.

Think of what happens when a thousand kids click "send" on their twitter pages at roughly the same time. This seems to be a good processor for that scenario.


The announcement mentions vision, although I personally doubt that this would be driving the market. On the other hand, gamers still want the latest and greatest.


Definitely gamers, but aren't they a pretty small minority these days? (PC gamers).


It's pretty steady, I think. Every time new consoles are released there's talk about the death of PC gaming, but after a couple of years the PC catches up again.

Not to mention the console release cycles are getting longer and one of the biggest past problems with PC gaming (patching games) is largely solved by Steam, Impulse, etc.


People keep saying this but I don't think it's true. If your computing begins and ends inside a browser window then I suppose you don't need much. But for a lot of people processing power still matters. It matters how fast I can compile and run unit tests, it matters how well I can play games, it matters how quickly I can transcode video, it matters how long it takes to process a filter on a large 300dpi image in Photoshop.

And then there is the other side of the browser: all those sites running applications chewing through CPU cycles (facebook, gmail, twitter, amazon, even hacker news). The prospect that 5 or 10 years from now we'll be able to cram the computing power of an entire rack (say 1TB RAM, 100 cores) into a single 1U pizza-box server at a reasonable price has a lot of geeks salivating, including me.

Don't mistake the average case for every case.


  * compiling unit tests
  * PC games
  * transcode video
  * photoshopping
These are exceptions IMHO. I don't think the average user does any of the above.

Of course some will still want beefy CPU power, but I think it'll end up as a niche.


If you think of transcoding video as video chat and photoshopping as automatic photo color correction and indexing when you import your photos, just about everybody does it.

I am very happy with my Atom-based netbook, but, you know, there is no such thing as a computer that's too fast.


>> "transcoding video as video chat and photoshopping as automatic photo color correction"

The former can be done more efficiently in dedicated hardware, which is also bound to be cheap. I suspect that this could apply to the latter as well. Just take the most common operations, such as the ones supported by iPhoto on OS X and support those in hardware.

There may be no such thing as a computer that's too fast, but there are also computers that are too hot, too heavy, too bulky, or useless because they've run out of power.

I'd like a cheap thing with the form factor of the MacBook Air, which can also act as a virtual terminal to my Smart Phone. The amount of computing power on a current smart phone is way more than enough for a majority of work-related computing. Add in synced backup of the smart phone to the cloud, and I think that would be ideal. You'd have none of the disadvantages of cloud computing, plus all of the advantages. You'd also have all of the advantages of the phone/pocket-computer form factor, but none of the disadvantages when you need a better input device.


"I'd like a cheap thing with the form factor of the MacBook Air, which can also act as a virtual terminal to my Smart Phone."

http://en.wikipedia.org/wiki/Palm_Foleo


>> "there is no such thing as a computer that's too fast."

Right, but my point was that these days it's more likely to be the HDD or the network that's the bottleneck most of the time.

My MacBook Pro is absolutely never CPU-bound. It could have half the CPU power and I wouldn't particularly care. Half the CPU power but an SSD would be a massive performance gain.


I absolutely agree about today's mainstream usages.

A future app that's very popular and needs the power would change that. It's the kind of thing that is likely to happen as clever people play with that extra power (consider PARC research labs).


The title of the video is "single-chip cloud computing". At least it's buzzword compliant.


This is referring to the fact that you can programmatically power off most of the cores.
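The SCC's own mechanism is Intel-specific, but as a rough analogy, stock Linux already lets software take cores offline through the CPU-hotplug files in sysfs (root required, kernel built with CONFIG_HOTPLUG_CPU). A sketch, purely for illustration:

  /* Rough analogy only: taking cores offline from software on a stock
   * Linux box via the sysfs CPU-hotplug interface. This is not the SCC's
   * own power-management mechanism. Needs root and CONFIG_HOTPLUG_CPU. */
  #include <stdio.h>

  /* Write "1" (online) or "0" (offline) to a core's hotplug control file. */
  static int set_cpu_online(int cpu, int online)
  {
      char path[64];
      snprintf(path, sizeof path, "/sys/devices/system/cpu/cpu%d/online", cpu);

      FILE *f = fopen(path, "w");
      if (!f) {
          perror(path);
          return -1;
      }
      fprintf(f, "%d\n", online ? 1 : 0);
      fclose(f);
      return 0;
  }

  int main(void)
  {
      /* Take cores 24..47 offline, leaving the first 24 running.
       * (cpu0 usually can't be offlined.) */
      for (int cpu = 24; cpu < 48; cpu++)
          set_cpu_online(cpu, 0);
      return 0;
  }

On the SCC the claim goes further than the OS scheduler idling cores: per the parent comment, most of the cores can be powered off under program control, which is more than this sketch does.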


Dynamically scalable applications are only a small subset of the massive number of things that have fallen under the umbrella of "cloud computing". The definition of cloud computing is sufficiently vague that phrases like "single-chip cloud computing" don't really have a single intuitive meaning.


The "Demo Fact Sheet"[1] is really pie-in-the-sky. My favourites are "Cloud Programming on a Chip: Hadoop Web Search" and "Programming for the 3D Internet: JavaScript Server Farm on a Chip"

[1] http://download.intel.com/pressroom/pdf/rockcreek/Demo_facts...


This is fantastic. I guess I agree that most people don't need the latest and greatest processor - just like with a car. But the faster you can run a simulation or compile code, the better. It's not always just about web browsing and games on a PC.


I am sure games will have a good use for that many cores...

As for browsing, maybe with 48 cores at their disposal Adobe can finally speed up Flash on Linux...

Boom badoom tsss


Flash applets can't be trivially parallelized.



