How will the Megahertz Wars be remembered?

Hawaii Cruiser's picture
Offline
Last seen: 7 years 3 weeks ago
Joined: Jan 20 2005 - 16:03
Posts: 1433
How will the Megahertz Wars be remembered?

A couple of nights ago I watched the whole hour-and-a-half QuickTime streaming video of the Apple developer conference presentation held in early August (there's a link right there in the splash window when you open QuickTime online). During the presentation, Steve Jobs proudly announced that the G5s have now been replaced by the Intel Mac Pros and that Apple's age of the Power Mac is officially over. So what really happened during those 12 years of the PowerPC processor? What was all that talk about real speed versus megahertz speed? I remember watching another one of those QuickTime conference videos years ago where someone (was it Jobs? I can't remember) went on and on about how the PowerPC was actually so much faster than the others because its pipeline was so concise and clean while the others were stumbling over so much garbage in and garbage out, or something to that effect. Sounded pretty convincing at the time. Was it all bogus? Or did Apple consistently have genuinely faster machines even though their megahertz ratings were consistently lower than the competition's?

It was also interesting to see Jobs do the selling of the new Mac Pros. He compared them to Dells(!), and the comparison came across very much as like versus like, with Apple winning out mostly on price. I saw the new form of comparison as actually rather sad, like we were suddenly all at Kmart together. With the switch to Intel, Apple said it had reached the point where it had to throw in the towel on the Megahertz Wars, but in truth, was Apple actually losing the war of speed all along?

Jon
Jon's picture
Offline
Last seen: 12 years 10 months ago
Joined: Dec 20 2003 - 10:38
Posts: 2804
Any computer will run slowly

Any computer will run slowly, with a sufficiently crappy compiler. One real issue with speed is that to maximize it you must use a compiler that outputs optimized code, taking advantage of a particular CPU's strengths and minimizing its weaknesses. That's one reason why programming demigod Donald Knuth still thinks that teaching machine language and knowing the computer architecture is important (see the "Why have a machine language?" section):

Expressing basic methods like algorithms for sorting and searching in machine language makes it possible to carry out meaningful studies of the effects of cache and RAM size and other hardware characteristics (memory speed, pipelining, multiple issue, lookaside buffers, the size of cache blocks, etc.) when comparing different schemes.

I still don't know the full truth of the MHz War, but if both sides were using comparably good compilers, the MHz might actually matter.
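To make that concrete, here's a trivial illustration of my own (not from Knuth, and the flags and speedups are only examples; exact numbers vary by compiler and CPU): the same C source built without and with optimization can differ severalfold in runtime, purely because of what the compiler emits.

```c
/* sum.c -- toy demo: the compiler, not just the CPU, sets the pace.
 *
 * Try (flags are examples; use whatever your compiler supports):
 *   gcc -O0 sum.c -o sum_slow
 *   gcc -O2 -mcpu=7450 sum.c -o sum_fast    (e.g. tuned for a G4)
 * then time both binaries. The -O2 build typically wins handily.
 */
#include <stdio.h>

int main(void) {
    double sum = 0.0;
    long i;
    /* Partial harmonic series: enough work that the loop dominates,
     * and the printf below keeps the optimizer from deleting it. */
    for (i = 1; i <= 100000000L; i++)
        sum += 1.0 / (double)i;
    printf("sum = %f\n", sum);
    return 0;
}
```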

moosemanmoo's picture
Offline
Last seen: 9 years 3 months ago
Joined: Aug 17 2004 - 15:24
Posts: 686
Apple was definitely publishing some inflated numbers

Apple was definitely publishing some inflated numbers during the whole MHz wars. I remember that Apple advertised the dual 1.42GHz G4 as benchmarking up to 22 GFLOPS, and the closest I could get to that number with the AltiVec Fractal benchmark was 14 or 17 GFLOPS (which is still fast). Intel's chips have become much faster, too. The Core 2 architecture is significantly faster than the Pentium 4 architecture, and SMP is a great thing to have.

If the G4 had never come out, I think Apple would either still be limping along on PowerPC processors or would be out of business. IBM was able to push the G3 to 900MHz, while Motorola ran into the 500MHz wall. I think the AltiVec performance was what really saved them.
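For anyone curious what those GFLOPS claims rested on: a single AltiVec fused multiply-add works on four floats at once, i.e. 8 flops per instruction, so 2 CPUs × 1.42GHz × 8 flops per clock ≈ 22.7 GFLOPS, which is presumably where a "22 GFLOPS" marketing number comes from. A minimal sketch using GCC's AltiVec C intrinsics (my own illustration, not the actual Fractal benchmark):

```c
/* altivec_fma.c -- one vec_madd = 4 multiplies + 4 adds in one go.
 * Build on a G4/G5 with something like: gcc -maltivec -mabi=altivec
 */
#include <altivec.h>
#include <stdio.h>

int main(void) {
    vector float a = (vector float){1.0f, 2.0f, 3.0f, 4.0f};
    vector float b = (vector float){2.0f, 2.0f, 2.0f, 2.0f};
    vector float c = (vector float){0.5f, 0.5f, 0.5f, 0.5f};

    /* d = a*b + c, element-wise, in a single AltiVec instruction */
    vector float d = vec_madd(a, b, c);

    float out[4] __attribute__((aligned(16)));
    vec_st(d, 0, out);               /* store vector to aligned memory */
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```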

Jon
Jon's picture
Offline
Last seen: 12 years 10 months ago
Joined: Dec 20 2003 - 10:38
Posts: 2804
If Intel hadn't dropped NetBurst

If Intel hadn't dropped NetBurst (P4), then AMD would have taken even more of a lead than it did with the Opteron. The MHz Wars are one reason Intel went with NetBurst in the first place: they could publish the multi-GHz numbers, even if the CPU was really slower. Know why the Tualatin P3s were dropped? One reason is that unless all code was recompiled and optimized for the P4, the P4 really wasn't faster, and in some cases it was slower. Luckily Intel didn't fully shelve that design, and now we get the Core architecture, with a more promising future.

mmphosis's picture
Offline
Last seen: 2 days 12 hours ago
Joined: Aug 18 2005 - 16:26
Posts: 433
Apple was losing the war on "marketing" all along

Hmmm, perhaps future operating systems, compilers, and hand assembly will put some speed into aging PowerPC Macs. Yellow Dog Linux, for example.

Eudimorphodon's picture
Offline
Last seen: 3 months 3 weeks ago
Joined: Dec 21 2003 - 14:14
Posts: 1207
Re: How will the Megahertz Wars be remembered?

So what really happened during those 12 years of the PowerPC processor? What was all that talk about real speed versus megahertz speed? I remember watching another one of those QuickTime conference videos years ago where someone (was it Jobs? I can't remember) went on and on about how the PowerPC was actually so much faster than the others because its pipeline was so concise and clean while the others were stumbling over so much garbage in and garbage out, or something to that effect. Sounded pretty convincing at the time. Was it all bogus?

Yeah, sort of.

Flash back to the early days of the PowerPC, and you'll find yourself positively overwhelmed with discussions of RISC vs. CISC architecture, all of it heavily slanted towards the idea that RISC is "cleaner" and fundamentally superior to CISC. Which is horse hockey from a number of angles, but it sounded good. (Despite the fact that, arguably, the PowerPC doesn't really qualify as a true RISC architecture.) The other point harped on was that x86 designs paid a huge penalty by retaining hardware compatibility with legacy 16 bit code, and that this somehow complicated their architecture so much that it was impossible for them to ever be efficient. (And therefore impossible for them to ever be "fast", even though the two aren't necessarily the same thing.)

Anyway. There was some truth to these arguments back in 1995. The original Pentium design had to sacrifice a fairly large chunk of its silicon real estate on some very fancy decoding and instruction-reordering hardware in order to be a "superscalar" (multiple instructions per clock) processor while still handling variable-length, non-aligned machine code instructions. The PowerPC was able to offer similar if not better performance per clock with substantially fewer transistors and lower power consumption, and of course looked more "scalable" by comparison. When Intel introduced the Pentium Pro, it actually reinforced some of the accusations of the PowerPC crowd when it was found to be *slower* than its predecessor in many benchmarks. Clearly x86 was doomed.
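To illustrate the decoding problem with a toy example of my own (the machine-code bytes below are genuine encodings; the "front end" commentary is a deliberately simplified sketch, not real decoder design): x86 instructions come in many lengths, so the decoder can't even find instruction boundaries without partially decoding the stream, while every PowerPC instruction is exactly 4 bytes.

```c
/* decode_demo.c -- why variable-length x86 decode costs silicon. */
#include <stdio.h>

int main(void) {
    /* x86 (32-bit mode): three instructions, three different lengths */
    unsigned char x86[] = {
        0x40,                          /* inc eax        (1 byte)  */
        0x83, 0xC0, 0x01,              /* add eax, 1     (3 bytes) */
        0xB8, 0x01, 0x00, 0x00, 0x00   /* mov eax, 1     (5 bytes) */
    };
    /* PowerPC: every instruction is exactly 4 bytes */
    unsigned char ppc[] = {
        0x38, 0x63, 0x00, 0x01         /* addi r3, r3, 1 (4 bytes) */
    };

    /* A PPC front end can grab instructions at fixed offsets 0, 4, 8...
     * in parallel. An x86 front end must decode instruction k before it
     * knows where instruction k+1 even starts -- a serial dependency the
     * Pentium paid for with extra decode hardware. */
    printf("x86 stream: %u bytes, 3 instructions, lengths 1/3/5\n",
           (unsigned)sizeof x86);
    printf("ppc stream: %u bytes, 1 instruction, length always 4\n",
           (unsigned)sizeof ppc);
    return 0;
}
```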

In truth, the advantage of PowerPC was largely illusory. In designing the Pentium Pro, Intel decided to sacrifice some performance with 16 bit code in order to optimize its 32 bit performance, which was actually quite good. However, at the time most people were still running 16 bit DOS and Windows 3.1 programs, so it looked bad. It still ran "legacy code" a lot faster than a Macintosh did, though, since the Mac had to use a software 68040 emulator to run the bulk of the programs on the market. (Including most of the OS, sadly enough.) As time went on, more and more software was targeted at the 32 bit Intel ISA, which is at a much smaller efficiency disadvantage compared to native PowerPC software, and so a comparatively smaller and smaller percentage of silicon real estate was dedicated to legacy code. If you compare a modern PowerPC CPU to a comparable x86 CPU, such as the AMD Opteron vs. the PPC 970 (G5), the transistor counts are almost identical, and the chips provide similar performance. Ironically, the "transistor bloat" in the G5 is largely due to having to maintain compatibility with, and provide good performance for, what is for the G5 "legacy" binary code.

You can actually make a good argument that the "RISC philosophy" is fundamentally incompatible with the consumer computer market. For "RISC" to work at its best, you need to be able to recompile optimized object code for every new processor design that comes along. If you're in the position of having to make a new processor run "legacy" object code, even if it's just for last year's model, you'll find yourself resorting to the same brute-force on-chip performance enhancers that x86 designs have to use, such as instruction reordering and out-of-order execution. The consumer market consists of people who want to run off-the-shelf software *in compiled, object-code format* fast, and further, they want their new computer to run all the old software they already own faster than their old computer did. For that you need a CISC-y design, by definition. At this point no x86 design on the market actually runs x86 instructions in its core. They all translate them into "micro-ops" which the hardware actually executes, basically an emulator in hardware. You can almost think of the x86 ISA as having evolved into a "pseudo-machine language", akin to P-code or compiled Java bytecode. It's the gold standard for portable software, and to be successful in the consumer market you need to be able to run it fast.
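Here's a rough sketch of that "emulator in hardware" idea, with a made-up instruction format of my own (nothing here is a real x86 decoder): a single CISC-style add-to-memory instruction gets cracked into three RISC-like micro-ops that the core actually executes.

```c
/* uop_crack.c -- toy model of cracking a CISC op into micro-ops.
 * Invented encoding, for illustration only. */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind kind;
    int dst;    /* register number or memory address, depending on kind */
    int src;
} uop;

/* Crack "add [mem], reg" (one CISC instruction) into three micro-ops:
 * load the memory operand into a temporary, add, store it back. */
static int crack_add_mem_reg(int mem_addr, int reg, uop out[3]) {
    enum { TMP = 99 };                          /* hidden temp register */
    out[0] = (uop){ UOP_LOAD,  TMP, mem_addr }; /* tmp   <- [mem]       */
    out[1] = (uop){ UOP_ADD,   TMP, reg      }; /* tmp   <- tmp + reg   */
    out[2] = (uop){ UOP_STORE, mem_addr, TMP }; /* [mem] <- tmp         */
    return 3;
}

int main(void) {
    static const char *name[] = { "LOAD", "ADD", "STORE" };
    uop uops[3];
    int n = crack_add_mem_reg(0x1000, 3, uops);
    for (int i = 0; i < n; i++)
        printf("uop %d: %-5s dst=%d src=%d\n",
               i, name[uops[i].kind], uops[i].dst, uops[i].src);
    return 0;
}
```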

Or did Apple consistently have genuinely faster machines even though their megahertz ratings were consistently lower than the competition's?

Apple could always find a benchmark their machines ran faster on. (Even if it was just a lame set of AltiVec-accelerated Photoshop benchmarks.) That's really what it boils down to. In the early days, meaning the PowerPC 601 through 604 era, PowerPC could generally best Intel comfortably clock-for-clock, at least in floating point performance. Integer performance, however, was basically a wash, and that's what matters more for most consumer software.

You might find this interesting reading. It contains references to SPEC benchmark results over the early PPC era, and also points out some of the contributing factors to Apple losing the "performance crown". (A major one being the mediocre motherboard chipset and memory bus designs.)

So how is it that Intel (and AMD) managed to creep up on and then exceed PowerPC in performance? There's an old saw about economies of scale: "Everyone who buys a Chevy now could be driving a Cadillac for the same price, if only they'd all agree on which Cadillac to buy." The x86 ISA is the Chevy of CPU instruction sets, but the massive demand to run it faster has produced inexpensive Cadillac hardware to do it. If PowerPC had ever achieved true mass-market status (if, for instance, IBM and Apple had actually gotten along and released a real, inexpensive, widely available and licensed alternative to Windows, rather than Apple hoarding its IP and gouging for hardware margins), PowerPC would be the cheap Cadillac. As it is, it's the Lincoln Continental of ISAs: nice, cushy, *expensive*, and a bit slow off the line compared to its less elegant but now *much* cheaper competition.

It was also interesting to see Jobs do the selling of the new Mac Pros. He compared them to Dells(!), and the comparison came across very much as like versus like, with Apple winning out mostly on price. I saw the new form of comparison as actually rather sad, like we were suddenly all at Kmart together. With the switch to Intel, Apple said it had reached the point where it had to throw in the towel on the Megahertz Wars, but in truth, was Apple actually losing the war of speed all along?

"Losing". Well, maybe not quite, but they were definately barking up the wrong tree. There's nothing fundimentally wrong with PowerPC, and if history had played out differently and it'd been adopted by the industry as a whole it probably could be faster then x86 is now. But in that alternate history Dell would also be selling PowerPC-based Precision workstations for Steve Jobs to compare the Power Macintosh G6 to. We'd still all be together, but I suppose you could pretend you're at Target instead of Kmart. Either way, well... you don't get to be special anymore.

I suppose that's what hurts the Macintosh "true believers" the most. "Thinking Different" just didn't translate to being "better".

--Peace
