CPUs: The Next Generation

11 replies
CaryMG
Offline
Joined: Dec 20 2003
Posts: 161

I read somewhere that CPUs are about to hit a wall.

Because of "Moore's Law" -- CPU transistor counts doubling roughly every 18 months -- if CPUs get any smaller, quantum-tunneling problems will kick in.
If they stay the same size but just have more transistors packed onto them, they'll need liquid nitrogen to keep from overheating.
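To put numbers on that doubling, here's a quick Python sketch. The 18-month period is just the figure quoted above, and the starting count of 1 million transistors is an arbitrary illustration, not any particular chip.

```python
# Quick sketch of the compounding described above. The 18-month doubling
# period is the figure quoted in the post; the starting count is arbitrary.

def transistors(start, months, doubling_period=18):
    """Transistor count after `months`, doubling every `doubling_period` months."""
    return start * 2 ** (months / doubling_period)

# Starting from 1 million transistors, 15 years (180 months) of doubling:
print(f"{transistors(1_000_000, 180):,.0f} transistors")
# -> 1,024,000,000 transistors (2^10 = 1024x the starting count)
```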

My questions....
1] Why can't they just be made bigger?
Would an extra square centimeter/millimeter or 2 be that big of a deal?

2] I always read how Gallium Arsenide was supposed to replace silicon. What happened?
3] What's next? BioCPUs? HoloCPUs? NanoCPUs?

Later!
Cool Mac Cool Mac Cool Mac

__________________

"The Only Thing Necessary For Evil To Triumph Is For Good Men To Do Nothing." -- Sir Edmund Burke

moros
Offline
Joined: Jan 21 2005
Posts: 65
Answer to 1...

They have got bigger -- line a 6502 or 8088 up against a G3 and you'll see what I mean.

Then again, I s'pose it could just be extra connectors for things like 32/64-bit address, data, and control buses. Does anyone know if this is the case?

Joel

moosemanmoo
Offline
Joined: Aug 17 2004
Posts: 686
Current data suggests that CPUs...

Current data suggests that CPUs made of silicon could go down to around 35nm and still work reasonably well. That's only a few years away, though. The problem with making a bigger CPU is that there are a lot of timing and latency problems involved with an enormous die -- copper wiring only works well over a limited distance. As for gallium arsenide, I've heard that it was extremely expensive to work with. That, coupled with the fact that it's poisonous in the first place, doesn't give it a lot of momentum.

Two proposed solutions to the upcoming silicon wall are doped artificial-diamond transistors and carbon nanotubes. Artificially grown diamonds doped with certain chemicals can be made into transistors with a much higher heat tolerance and a smaller theoretical size than silicon allows. This method is possible with today's technology, and copper wiring would most likely be replaced with nanotube wiring. Nanotube transistors can be made as small as nanotubes themselves can be made, coupled together with the required layers. The first transistor-like nanotube construction was made only recently, and the technology just isn't here yet for a mass-produced product. The way CPUs are designed is changing, and in the future parallelism and efficiency will be extremely important.
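To see why die size runs into timing problems, here's a rough Python sketch. The assumption that on-chip signals travel at about half the speed of light is an optimistic ballpark; real RC-limited copper wires are considerably slower, which only makes the problem worse.

```python
# Back-of-the-envelope: a signal crossing a big die eats a meaningful
# fraction of a clock cycle. Signal speed is assumed at ~half the speed
# of light -- an optimistic ballpark, not a measured figure.

C = 3.0e8                    # speed of light, m/s
SIGNAL_SPEED = 0.5 * C       # assumed on-chip propagation speed, m/s

def cycles_to_cross(die_mm, clock_ghz):
    """Clock cycles for a signal to traverse the die edge-to-edge."""
    crossing_time = (die_mm / 1000.0) / SIGNAL_SPEED   # seconds
    return crossing_time * clock_ghz * 1e9             # cycles

for die in (10, 20, 40):     # die edge length in mm
    print(f"{die} mm die at 3 GHz: {cycles_to_cross(die, 3.0):.2f} cycles")
```

At 3 GHz even a 40 mm die costs the better part of a cycle just for propagation, before any transistor does useful work -- hence the timing and latency headaches with enormous dies.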

__________________

Join the chat on irc.freenode.com, channel #applefritter.

FunnymanSE30
Offline
Joined: Aug 22 2005
Posts: 403
Cell processors

IBM is developing, and has a working prototype of, the "Cell" processor -- that's what's going into the PlayStation 3. It runs at speeds of up to around 4.2 GHz right now.

__________________

iMac Summer 2001: 700mhz, 256mb RAM, 80 Gb, OS 10.4 [main desktop computer]
Powerbook Pismo 400mhz G3 running 10.4, 30gb HD, 640mb of ram[Main Laptop Computer]

Jon
Offline
Joined: Dec 20 2003
Posts: 2804
And don't fall into the Intel

And don't fall into the Intel marketing trap of MHz-MHz-MHz. The Pentium M can perform as well as or better than a P4 running at much higher clock rates, while using 1/5 the power. Efficient CPUs were mostly ignored in favor of high clock rates that could be used as marketing tools. Intel learned a hard lesson with the NetBurst arch and is now backtracking, going down the road it should have been on all along. The P4 never looked that great to me, and now I'm being shown that my gut feelings were right. Intel missed the 4GHz mark and has abandoned the P4 line. It had been projected to go to 10GHz, remember? The main problem is: why? Who needs 10GHz when you have to lower the efficiency of the CPU just to get it to go that fast? "What's GHz got to do, got to do with it?"
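A quick Python sketch of the point: throughput is clock rate times instructions per clock, not clock rate alone. The IPC and wattage figures below are made-up illustrations, not measured numbers for any real chip.

```python
# Illustration of why clock speed alone doesn't determine performance.
# IPC and power figures are invented for the sake of the example.

def throughput(clock_ghz, ipc):
    """Instructions retired per second, in billions (Ginstr/s)."""
    return clock_ghz * ipc

chips = {
    "deep-pipeline chip (P4-style)":      {"clock_ghz": 3.4, "ipc": 0.9, "watts": 100},
    "efficient chip (Pentium M-style)":   {"clock_ghz": 1.7, "ipc": 1.8, "watts": 25},
}

for name, c in chips.items():
    perf = throughput(c["clock_ghz"], c["ipc"])
    print(f"{name}: {perf:.2f} Ginstr/s, "
          f"{perf / c['watts']:.3f} Ginstr/s per watt")
```

With these (illustrative) numbers the two chips retire the same instructions per second, but the lower-clocked one does it on a quarter of the power -- which is exactly the marketing-versus-efficiency gap being described.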

__________________

I am not in this world to live up to other people's expectations, nor do I feel that the world must live up to mine. - Fritz Perls

Dr. Webster
Offline
Joined: Dec 19 2003
Posts: 1687
Re: And don't fall into the Intel

Jon wrote:

It had been projected to go to 10GHz, remember? The main problem is: why? Who needs 10GHz when you have to lower the efficiency of the CPU just to get it to go that fast? "What's GHz got to do, got to do with it?"

There was an article on Slashdot a few months back about some dude who got a P4 overclocked to something like 7GHz using liquid nitrogen. It was a big story because he didn't just overclock it, jump into the CMOS setup, and snap a pic of it reading "7GHz" -- he was actually able to boot into Windows and run benchmarks.

__________________

Applefritter Admin

CaryMG
Offline
Joined: Dec 20 2003
Posts: 161
Thanks, moosemanmoo !!

moosemanmoo wrote:

Current data suggests that CPUs made of silicon could go down to around 35nm and still work reasonably well. That's only a few years away, though. The problem with making a bigger CPU is that there are a lot of timing and latency problems involved with an enormous die -- copper wiring only works well over a limited distance. As for gallium arsenide, I've heard that it was extremely expensive to work with. That, coupled with the fact that it's poisonous in the first place, doesn't give it a lot of momentum.

Two proposed solutions to the upcoming silicon wall are doped artificial-diamond transistors and carbon nanotubes. Artificially grown diamonds doped with certain chemicals can be made into transistors with a much higher heat tolerance and a smaller theoretical size than silicon allows. This method is possible with today's technology, and copper wiring would most likely be replaced with nanotube wiring. Nanotube transistors can be made as small as nanotubes themselves can be made, coupled together with the required layers. The first transistor-like nanotube construction was made only recently, and the technology just isn't here yet for a mass-produced product. The way CPUs are designed is changing, and in the future parallelism and efficiency will be extremely important.

That answers my question!

Thanks again!!

Later!
Cool Mac Cool Mac Cool Mac

__________________

"The Only Thing Necessary For Evil To Triumph Is For Good Men To Do Nothing." -- Sir Edmund Burke

Offline
Joined: Jan 20 2005
Posts: 700
Booting into Windows at 7GHz

Booting into Windows at 7GHz is nuts... but I bet some dual-core AMD stuff would do what that one did... imagine if you overclocked that?

__________________

-Justin
(dead) PB100, Q650, PB Wallstreet, beige G3, Imac snow 600, Ibook g4, SE30, (dead) IIsi, TiBook 667VGA, PBg4 1.5, MDD dual 1.25ghz, 1.8ghz g5 tower.
OS7.5 to 10.3.9. I LOVE CLASSIC MACS!

CaryMG
Offline
Joined: Dec 20 2003
Posts: 161
Optical CPU Update

Here's what I thought I knew about optical CPUs > "Optical CPUs"

Cool Mac Cool Mac Cool Mac

__________________

"The Only Thing Necessary For Evil To Triumph Is For Good Men To Do Nothing." -- Sir Edmund Burke

The Czar
Offline
Joined: Dec 20 2003
Posts: 287
Re: And don't fall into the Intel

Jon wrote:

And don't fall into the Intel marketing trap of MHz-MHz-MHz. The Pentium M can perform as well as or better than a P4 running at much higher clock rates, while using 1/5 the power. Efficient CPUs were mostly ignored in favor of high clock rates that could be used as marketing tools. Intel learned a hard lesson with the NetBurst arch and is now backtracking, going down the road it should have been on all along. The P4 never looked that great to me, and now I'm being shown that my gut feelings were right. Intel missed the 4GHz mark and has abandoned the P4 line. It had been projected to go to 10GHz, remember? The main problem is: why? Who needs 10GHz when you have to lower the efficiency of the CPU just to get it to go that fast? "What's GHz got to do, got to do with it?"

This is similar to what happened in the automotive industry during the last 50 years. The big three American auto manufacturers were hell-bent on making their cars go faster. Their answer? Increase engine displacement and put bigger carburetors on the engines. This remained the status quo until the 1970s, when the Japanese and European automakers really started making inroads into the American market, and as such, their innovations started to attract the attention of the Americans. Technology such as multi-valve systems, fuel injection, weight-reduction techniques, etc. became alternatives to traditional thinking.

IMHO, this is what needs to happen in the computer market: Software needs to be refined to require fewer and fewer processor cycles. Processors need to move more and more bits per cycle. I think, most importantly, people need to realize that there is a limit to what computers can do.

People don't expect to drive 1,000 mph in their Honda Civic; why should they expect the equivalent from their PC?

Cheers,

The Czar

__________________

iBook 14" 1.33Ghz/768MB/60GB/10.4.8
Quicksilver 2x1.6Ghz/1536MB/600GB/10.4.8 Server

coius
Offline
Joined: Aug 25 2004
Posts: 1975
Multiple-Core CPUs

The real big leap isn't going to be in single-core CPU speed, but in how each core in the CPU benchmarks. Theoretically, you could have 4-6 cores on a CPU in a cube-like chip, with a connector on each side -- kinda like a Borg cube. Who knows, these could actually be light-based, as someone suggested. We might even see fibre-optic boards in the future, or a whole new type of CPU altogether. My prediction is that pretty soon there will be multiple CPUs on one chip running at lower clock speeds but achieving higher benchmarks. Also, within the next 7 years I predict maybe 128-bit CPUs using cell-based processing. Then it's up to the OS to make them usable.
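Whether several slower cores actually beat one fast core depends on how much of the work can run in parallel -- Amdahl's law. A quick Python sketch, with illustrative workload fractions:

```python
# Amdahl's law: speedup from n cores is limited by the fraction p of the
# work that can run in parallel. The p values below are illustrative.

def amdahl_speedup(p, n):
    """Overall speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (1, 2, 4, 6):
    print(f"{cores} cores: "
          f"p=0.50 -> {amdahl_speedup(0.50, cores):.2f}x, "
          f"p=0.95 -> {amdahl_speedup(0.95, cores):.2f}x")
```

With only half the work parallel, six cores can never even double performance; with 95% parallel, they get close to linear speedup -- which is why the OS and software matter as much as the core count.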

__________________

See my PB540c 33Mhz serve a website! http://yui-ikari.coius.info/

coius
Offline
Joined: Aug 25 2004
Posts: 1975
actually...

I read a while ago that there are processors that actually "learn" as data is processed, to make the most of each CPU clock cycle. These generally involve a standard doped pathway that evolves additional pathways as data is processed, in order to gain efficiency. In effect, it's sort of like a brain: it starts off as a simple processor and "learns" the best way to calculate the data, reaching out through the die and making more connections. It repeats this until it's at maximum potential, and then settles in to process the data.

I have seen the article on several sites, the first being /. (Slashdot) and then some other places. It's really cool, has a lot of potential, and I would love to see it implemented.

For instance, starting out with video it would give average performance, but maybe a minute into the calculation it would have made the best pathways and would be far above standard for video en/decoding. But when it switches data type (say, photo), it starts over, reverts to the standard die, and builds from there. This would actually eliminate the need for special instruction sets for media, as the CPU would develop the best way to move the data through the chip.

That, I believe, is the way CPUs will be 20 years from now. I've also heard of "organic" data processing too, but I doubt that will take off in even 100 years. Probably because some "bacteria rights" organization will step in Tongue

__________________

See my PB540c 33Mhz serve a website! http://yui-ikari.coius.info/