Snow Leopard

Macinjosh
Snow Leopard

I... think this went in the right forum. Apologies if not.

So Steve says, "Hey, the new OS X is Snow Leopard. Preview to follow..."

Do we get to find out if any of the other speculation about it is true, like it possibly being free/low-cost, a stabilization of Leopard, et cetera?

It's my understanding that the devs are under NDA, so they won't be able to say a thing about what they see. If that's the case, I can totally respect it, but it would sure be nice to get some, y'know, Mac-related information during a WWDC.

Anyone who can confirm? Or add additional info? (Don't add info that you shouldn't; that's not what I'm trying to start here.)

-- Macinjosh

Macinjosh
Press Release

http://www.apple.com/ca/press/2008_06/snow_leopard.html

Edit: Which is dead, at the mo...

http://www.macworld.com/article/133839/2008/06/snowleopard.html as of right now, contains the information that was in the here-one-second, gone-the-next press release.

Jon
Being a PPC-only Mac owner

Being a PPC-only Mac owner, I'm very interested to know for sure whether the Intel-only rumors for 10.6 are true or not. My G4 mini isn't REALLY old; it's a final-generation G4 (1.5GHz), and after the jump to an 867MHz minimum for 10.5, I wonder if they will raise it to 1.25GHz to cover the original minis, go G5-only, or drop PPC entirely. I'd think all those high-end G5 tower owners would be awful sore if they miss out this round. I'm not putting too much hope into G4 support, though.

Macinjosh
PPCs

I think they should cough up some answers, and that's probably the number one reason why. If you are going to dump an architecture, say so. I'm on the other side of the fence, but if I were on the PPC side I'd want to know sooner rather than later.

Then again, everything the rumor mill has spit out has matched up pretty much perfectly so far.

I want to know what it's going to cost. That's what will have me either all right with it, or incredibly displeased.

Not to drag my other topic(s) into it, but I'm about to spend ~$25 to fix a $1000 problem... I'm going to use this instead of my built-in AirPort Extreme:

http://www.newegg.com/Product/Product.aspx?Item=N82E16833340004

Do I wait until 10.6 for that to get fixed? Do I replace my router? For what, when everything else in the house works flawlessly with it?

-- Macinjosh

moosemanmoo
I think Apple wants to keep 10.6 under wraps

I think Apple wants to keep 10.6 under wraps for now because they don't want to get people's hopes up (remember the whiny Mac fans throwing a fit over WWDC's demonstration of only ten new things, and complaining about the lack of mind-reading and marriage counselling in the OS all the way to October 2007) and because they don't want to spill the beans on new hardware features like multitouch.
I wouldn't mind if it's a low-cost upgrade with most of the changes under the hood, but isn't that what point releases are for? I think Apple has reached the point where they have a hard time thinking of more goodies to pack in. It's easier to sell upgrades based on things like Time Machine and Dashboard than on ZFS support and everything going 64-bit.

Macinjosh
Server
Eudimorphodon
PPC Macs...

Being a PPC-only Mac owner, I'm very interested to know for sure whether the Intel-only rumors for 10.6 are true or not. My G4 mini isn't REALLY old; it's a final-generation G4 (1.5GHz), and after the jump to an 867MHz minimum for 10.5, I wonder if they will raise it to 1.25GHz to cover the original minis, go G5-only, or drop PPC entirely. I'd think all those high-end G5 tower owners would be awful sore if they miss out this round. I'm not putting too much hope into G4 support, though.

Pure speculation, of course, but I don't think Apple would have a problem with cutting off *all* PPCs next year. (Assuming a mid-2009 intro for their next OS.) After all, it was barely two years between when they sold their last new 68040 machine and the introduction of OS 8.5.

The introductory paragraph on the "Snow Leopard" page on Apple's website contains this semi-ominous sentence: "Snow Leopard dramatically reduces the footprint of Mac OS X, making it even more efficient for users, and giving them back valuable hard drive space for their music and photos." A *really easy* way to cut down on the bloat of OS X would be to ditch "Universal Binaries" and go Intel-only...

--Peace

resman
If I were Steve..

Well, this would be my take on Snow Leopard. Previously I'd heard rumors of PE-format binaries and virtual machine technology being integrated into the kernel. Something like LLVM would allow JIT compilation to Intel, PPC, or ARM depending on your hardware: same binary, different CPU. That makes sense when you are trying to support so many CPU architectures. With a little creativity, you can extend this to heterogeneous CPUs in one machine, i.e. GPUs. Thus, OpenCL. Apple would only have to implement the PPC translator once to support it in future releases.
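
For the sake of argument, here's a rough sketch of how that could look (purely illustrative: the file name, the function, and the use of the LLVM 2.x command-line tools are my assumptions, not anything Apple has announced). The vendor compiles once to a target-neutral bitcode, and the CPU-specific step happens on the user's machine at install or load time.

/* scale.c -- a plain C routine that, under the scheme described above,
 * would ship once as target-neutral bitcode instead of as a fat binary.
 *
 * Hypothetical flow with the LLVM 2.x tools (illustrative only):
 *   llvm-gcc -O2 -emit-llvm -c scale.c -o scale.bc   (vendor: source -> bitcode)
 *   llc -march=x86   scale.bc -o scale_x86.s         (lowered on an Intel Mac)
 *   llc -march=ppc32 scale.bc -o scale_ppc.s         (lowered on a PowerPC Mac)
 *   llc -march=arm   scale.bc -o scale_arm.s         (lowered on an ARM device)
 * One distributed artifact; the CPU-specific work is deferred to the
 * user's machine.
 */
void scale(float *v, float k, int n)
{
    for (int i = 0; i < n; i++)
        v[i] *= k;
}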

Apple has more experience than anyone when it comes to switching CPU architectures. I would think Steve has had enough of the 6502->MC68000->PPC->Intel+ARM transitions (some he did twice - NeXT and Apple). Going forward I would use VM technology to allow greater freedom of CPU choices.

My $.02, and probably what it's worth.
Dave...

Eudimorphodon
Re: If I were Steve..

... With a little creativity, you can extend this to heterogeneous CPUs in one machine, i.e. GPUs.

All indications seem to be that OpenCL is *only* for GPUs; i.e., it seems to be a proposed standard for implementing GPGPU techniques. The end goal seems to be using spare GPU cycles inside a specialized math/vector/whatever library, *not* for general CPU code.

If your goal is to *improve* general performance, the last thing you're going to do is switch from compiled machine code to a JIT virtual machine for everything. ;^b

--Peace

Jon
I made a point about an LLVM transition

I made a point about an LLVM transition long ago, when the first Intel Macs debuted. If Apple could get Intel to dump the x86 decoder from the chip and run code from some sort of LLVM JIT, they could reduce the transistor count and power consumption. Plus, as you mention, they could support multiple CPU architectures with the same VM code.

resman
Re: If I were Steve..

... With a little creativity, you can extend this to heterogeneous CPUs in one machine, i.e. GPUs.

All indications seem to be that OpenCL is *only* for GPUs; i.e., it seems to be a proposed standard for implementing GPGPU techniques. The end goal seems to be using spare GPU cycles inside a specialized math/vector/whatever library, *not* for general CPU code.

If your goal is to *improve* general performance, the last thing you're going to do is switch from compiled machine code to a JIT virtual machine for everything. ;^b

--Peace

If your goal is to support many heterogeneous CPUs, the last thing you want to do is compile directly to every possible hardware combination. There is an interesting trade-off between an intermediate code representation and machine-dependent code. Even improving general performance with a JIT VM is easily possible if done properly. This is the direction of future development. Don't look at Java as the proper way to do it.

OpenCL seems to be targeted at GPUs (and probably other Cell-type processors). Have you looked at how programmable GPUs are implemented? They have many similarities, yet are designed by competing companies. If you were to develop a consistent framework across these GPUs, an intermediate code representation would make the most sense. My point is that a JIT compiler, perhaps not the same one used for general-purpose code but a similar concept applied to GPUs, could be implemented.
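
To make the GPU side concrete, here is the kind of data-parallel routine such a framework targets, written as plain C (the function and its framework mapping are my own illustration, not anything from an OpenCL draft):

/* saxpy.c -- the classic data-parallel kernel: y[i] = a*x[i] + y[i].
 * Written as ordinary C here. A GPGPU framework of the kind being
 * discussed would drop the loop and launch one instance of the loop
 * body per element across the GPU's execution units, with the
 * intermediate representation letting each vendor's driver decide how
 * to schedule it on its own hardware.
 */
void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}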

Dave...

moosemanmoo
Intel's engineers have said before

Intel's engineers have said before that the portion of the chip dedicated to translating x86 into the chip's native micro-ops is so small, so efficient, and of such importance that they had the chance to dispense with it altogether for the new Atom processors and decided against it. Getting rid of x86 would get rid of Boot Camp as well, which has been a huge point of leverage for people buying Macs for the first time. I think Apple did a good job with Rosetta, and we won't see another architecture change until at least the next milestone (OS 11?).

Eudimorphodon
Re: If I were Steve..

If your goal is to support many heterogeneous CPUs, the last thing you want to do is compile directly to every possible hardware combination.

Since when is Apple interested in supporting many heterogeneous CPUs? History shows that the reason they've had to support the same code on multiple platforms (with hacks like Fat/Universal binaries) is that they've constantly picked the *wrong horse* in the CPU architecture stakes and had to switch mid-stream, not because there's some fantastic benefit to be had in mixing and matching CPUs across your product lines.

Nothing is replacing x86 anytime soon. Like it or hate it, for general-purpose consumer computing it's *won*. So don't look for Apple to be switching out the CPU in your MacBook soon, or ever again, really. Yes, Apple runs ARM in the iPhone, and the iPhone runs a specialized version of the OS X code base/API, *and* it's possible Apple might decide to go to some other CPU in the iPhone in the future. So what? Windows CE/Mobile is a specialized version of the Windows code base/API and has run on multiple CPU architectures (ARM, MIPS, SuperH, and x86) forever. Those devices don't run the same *binary software* as their desktop counterparts and have no real reason to. The huge differences in resource constraints, input/output systems, screen sizes, etc., are more than enough justification for providing specialized application versions for each platform, which means multiple compile cycles anyway. So you might as well benefit from running native code for your processor of choice, rather than crippling your entire product line's performance in the name of some nebulous code-portability advantage.

The idea of a pseudo-code "portable" environment for personal computers (UCSD P-System, Smalltalk, Java, etc.) keeps coming up, and for general use it just never sticks. The perceived (and real) performance disadvantages compared to code compiled for the "bare metal" of the target hardware platform have just been too onerous for most segments of the market to accept. The closest thing I can think of to what you're describing that's ever really sold in "significant" numbers is the IBM AS/400 series of mini/small mainframe computers, which was designed by IBM specifically to address customers who might need to run the same (highly custom and hugely expensive) application for 20 years with no change. (IBM's other big mainframes include similar emulation/virtualization features as well, but the AS/400/System i stands out for being built entirely around the concept.) In that rarefied segment of the market it's worth sacrificing raw performance for future scalability, but... it just isn't going to happen on your PowerBook. Nope.

Apple's in the business of selling you a *new copy* of iLife every two years, not putting themselves at a huge performance disadvantage compared to Microsoft just to gain a virtual platform that lets you run (badly) the same iPhoto '09 binary on your Powerbook and iPhone. (And presumably on your 2038 MacProDoublePlusGoodAwesomeBook, which finally switched from x86 to the UltraSparc XXIV architecture shortly after Taco Bell won the Restaurant Wars.)

--Peace

resman
Re: If I were Steve..

Since when is Apple interested in supporting many heterogeneous CPUs?

I dunno, G4 G5 x86-32 x86-64 ARM? Of course it's quite possible to support these architectures with fat binaries. And ARM is iPhone/iTouch-only for now. However, Apple says that 10.6 is supposed to set the foundation for the next wave of advancement. Even if you say x86 has won, you are still left with a large combination of x86 extensions to contend with, the latest being SSE4. Not only that, but the 64-bit extensions came from AMD, not Intel, and fairly recently.

If you look at the issue from the compilation-toolchain side, instead of as a JIT VM, perhaps I can make a stronger argument (perhaps not). The traditional VMs have tried to create a complete environment. As I mentioned earlier, don't look at Java. Instead, follow the compiler toolchain from source, intermediate representation, high-level optimization, target CPU representation, low-level optimization, and linking. Every CPU architecture OS X supports uses the GNU toolchain to build its applications. If you removed the target CPU representation from the executable build process and used an intermediate code representation that retained the higher level semantics of the code while being compact and easily translatable, you could theoretically have the best of both worlds. By doing the target CPU translation in the executable loader, there should be no difference between code compiled for the CPU directly vs the intermediate representation. In fact, if the intermediate representation kept the semantics of vectorized algorithms, your brand new Penryn with SSE4 would run your video compressor software using the new instructions without you having to wait for your application vendor to update their code.
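
To illustrate that last point with a concrete (and entirely hypothetical) example: a loop like the one below carries obvious vector semantics, and if the shipped intermediate form preserved them, the code generator on the user's machine could emit SSE4 on a Penryn, plain SSE2 on an older Core, or AltiVec on a G4/G5, all from the same shipped artifact.

/* clamp_add.c -- a trivially vectorizable loop. Under the scheme sketched
 * above, the vendor would ship this in an intermediate form that keeps the
 * "do this to every element" semantics, and the code generator on the
 * user's machine would pick the widest vector unit it finds (SSE4, SSE2,
 * AltiVec, ...) when lowering it at install or load time.
 */
void clamp_add(float *dst, const float *a, const float *b, float limit, int n)
{
    for (int i = 0; i < n; i++) {
        float s = a[i] + b[i];
        dst[i] = (s > limit) ? limit : s;
    }
}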

What if Steve wants to build that ever-rumored Mac tablet using a multi-core ARM chip? It would fall somewhere between an iTouch and a MacBook in terms of resource constraints. It would be nice to run off-the-shelf Mac software, though.

Now I'm not saying that any of this will come to pass and I have not seen 10.6, so of course, I am presenting my view of the future. There are no technical reasons it couldn't be done, and many reasons it makes sense from a marketing standpoint. But Steve didn't ask me. :)

Dave...

Eudimorphodon
Re: If I were Steve..

I dunno, G4 G5

Deprecated. Haven't been sold in two years.

x86-32 x86-64

They're going to have to stick with fat-ness for this, and probably will for a long time, although this will only apply to applications *which use 64-bit code*. 64-bit OS X runs 32-bit binaries seamlessly, and unless you need more than 4GB of RAM for one process there are advantages (code memory footprint, cache loading, etc.) to remaining 32-bit.
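
A tiny illustration of the footprint point (nothing Apple-specific, just the standard C data models): pointers and longs double in size under the 64-bit LP64 model, so pointer-heavy code really does carry a larger memory and cache footprint.

/* lp64.c -- prints the sizes that change between the 32-bit (ILP32) and
 * 64-bit (LP64) data models OS X uses. Pointer-heavy structures grow
 * accordingly, which is the cache/footprint argument for staying 32-bit
 * when a process doesn't need a >4GB address space.
 */
#include <stdio.h>

int main(void)
{
    printf("sizeof(int)    = %zu\n", sizeof(int));    /* 4 on both         */
    printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 -> 8 under LP64 */
    printf("sizeof(void *) = %zu\n", sizeof(void *)); /* 4 -> 8 under LP64 */
    return 0;
}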

ARM?

Special purpose architecture, runs its own apps, doesn't count.

*snip*... If you removed the target CPU representation from the executable build process and used an intermediate code representation that retained the higher level semantics of the code while being compact and easily translatable, you could theoretically have the best of both worlds. By doing the target CPU translation in the executable loader, there should be no difference between code compiled for the CPU directly vs the intermediate representation.

"half-compiling" source code into some sort of universal "intermediate representation" is a huge step backwards if efficiency is your goal. The different CPU architectures in play here all have vastly different ideas as to what "efficiently written" code is. How is this "intermediate representation" going to deal with, say, the completely different ways you use optimize register vs. load-store memory operations on PowerPC vs. x86? You might as well just encrypt the C code and run gcc at execution time and slam a fresh new binary into a cache. Of course, that initial 30 minute application load time will tend to put people off...

In fact, if the intermediate representation kept the semantics of vectorized algorithms, your brand new Penryn with SSE4 would run your video compressor software using the new instructions without you having to wait for your application vendor to update their code.

The way you deal with this is you define OS-level libraries for things like floating point (like, say, the Accelerate framework in OS X), and have your application link to them rather than compiling the code inline. It restricts you to data types supported by the library, which might be a bummer, but it lets (in theory) Apple tune up a "best of breed" generic code library that's going to beat the pants off a JIT engine, while still letting the developer "go native" if he *really* wants to. (At the cost of perhaps having suboptimal code when the next SSE comes out.)
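
For instance, a minimal sketch of that approach using the real Accelerate framework (the surrounding program is my own example): the application calls the OS library and never has to care which SSE revision, or AltiVec unit, the machine actually has.

/* vadd.c -- add two float arrays via Apple's Accelerate framework rather
 * than hand-written SSE/AltiVec. The framework dispatches to the best
 * vector code for the CPU it finds itself on, so the application binary
 * never has to care which SSE revision (or AltiVec) is present.
 *
 * Build on OS X with:  gcc vadd.c -framework Accelerate -o vadd
 */
#include <stdio.h>
#include <Accelerate/Accelerate.h>

int main(void)
{
    float a[4] = {1, 2, 3, 4};
    float b[4] = {10, 20, 30, 40};
    float c[4];

    /* c[i] = a[i] + b[i]; strides of 1, 4 elements */
    vDSP_vadd(a, 1, b, 1, c, 1, 4);

    for (int i = 0; i < 4; i++)
        printf("%g\n", c[i]);
    return 0;
}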

What if Steve wants to build that ever-rumored Mac tablet using a multi-core ARM chip? It would fall somewhere between an iTouch and a MacBook in terms of resource constraints. It would be nice to run off-the-shelf Mac software, though.

The magic of ARM is in code compactness and low absolute power draw. If you start talking about MIPS per watt, it's not that great, in that for a given level of CPU grunt its power requirements start overlapping with x86 options like AMD Geode, VIA Nano/C7/Eden/whatever, Intel Atom, etc. If you need to run "desktop applications" at low power, running native binary code on one of those is going to beat the pants off dynamic recompilation on ARM.

Now I'm not saying that any of this will come to pass and I have not seen 10.6, so of course, I am presenting my view of the future. There are no technical reasons it couldn't be done, and many reasons it makes sense from a marketing standpoint.

On top of the *significant* technical challenges the best marketing/financial reason there could possibly be is diametrically opposed to the idea: Why sell a customer something *once* when you can sell it to him *twice*? Or three times? Or force him to upgrade every year and a half? Personally, I think *that* is more how Mr. Steve Jobs thinks of these things. ;^)

--Peace

resman
Re: If I were Steve..

G4 G5

Deprecated. Haven't been sold in two years.

Not sold != not supported. I haven't heard definitively whether PPC will be supported or not. Have you?

x86-32 x86-64

They're going to have to stick with fat-ness for this, and probably will for a long time, although this will only apply to applications *which use 64-bit code*. 64-bit OS X runs 32-bit binaries seamlessly, and unless you need more than 4GB of RAM for one process there are advantages (code memory footprint, cache loading, etc.) to remaining 32-bit.

Actually, the data points to a nice performance pop running with 64-bit binaries, due to reduced register pressure.

ARM?

Special purpose architecture, runs its own apps, doesn't count.

Um, it runs OS X, Safari, and Mail, has an SDK, etc. Last time I checked, ARM was a general-purpose architecture. How does that not count?

"half-compiling" source code into some sort of universal "intermediate representation" is a huge step backwards if efficiency is your goal. The different CPU architectures in play here all have vastly different ideas as to what "efficiently written" code is. How is this "intermediate representation" going to deal with, say, the completely different ways you use optimize register vs. load-store memory operations on PowerPC vs. x86?

Well, that's exactly what the GNU compiler does internally. Seems to work OK. Perhaps not as efficient as a vendor-supplied compiler, but this is how most OS X applications are built.

You might as well just encrypt the C code and run gcc at execution time and slam a fresh new binary into a cache. Of course, that initial 30 minute application load time will tend to put people off...


Nothing like a little exaggeration. :)

The magic of ARM is in code compactness and low absolute power draw. If you start talking about MIPS per watt, it's not that great, in that for a given level of CPU grunt its power requirements start overlapping with x86 options like AMD Geode, VIA Nano/C7/Eden/whatever, Intel Atom, etc. If you need to run "desktop applications" at low power, running native binary code on one of those is going to beat the pants off dynamic recompilation on ARM.

ARM code is only compact with the Thumb extension. However, I don't see the low-power x86 options having a significant performance-per-watt advantage over ARM when applications are compiled natively for each. We're also not talking about recompilation à la Rosetta.


On top of the *significant* technical challenges the best marketing/financial reason there could possibly be is diametrically opposed to the idea: Why sell a customer something *once* when you can sell it to him *twice*? Or three times? Or force him to upgrade every year and a half? Personally, I think *that* is more how Mr. Steve Jobs thinks of these things. ;^)

--Peace

I think we have different definitions of *significant*. We agree that Steve's goal is to sell you many types of gadgets. I believe you can look to the introduction of the iPhone as the major issue driving 10.6. OS development resources allocated to Leopard had to be redirected to the iPhone. Apple is a large company, but having lots of devices in need of an OS and accompanying applications is a huge resource drain. If there were one OS and application framework for all these devices, it would streamline development. To what extent it affects outside developers remains to be seen.

If you look at Windows/Windows CE/Embedded Windows, you see different OSs targeted at different device categories and CPUs. Maybe that makes sense for MS, but I think it would overwhelm Apple.

I am definitely curious to see what 10.6 and future Apple products bring.

Dave...

Eudimorphodon
Re: If I were Steve..


Not sold != not supported. I haven't heard definitively whether PPC will be supported or not. Have you?

"Not a going concern". Whether PPC is "supported" in 10.6 or not (rumor says "no", but rumors can be wrong) its presence is scarcely a justification for backwardly rearchitecting the entire code model so all "current" machines run equally badly.

Actually, the data points to a nice performance pop running with 64-bit binaries, due to reduced register pressure.

Apple's thoughts on 64 bitness. See Myth #5

Um, it runs OS X, Safari, and Mail, has an SDK, etc. Last time I checked, ARM was a general-purpose architecture. How does that not count?

It runs its own versions of those apps, not the desktop version. That's why it doesn't count, in the same sense PPC still "counts".

Well, that's exactly what the GNU compiler does internally. Seems to work OK. Perhaps not as efficient as a vendor-supplied compiler, but this is how most OS X applications are built.

Unfortunately, here I beg to differ. With *normal* GCC there are four *discrete* stages from high-level code to binary: preprocessing, compiling, assembly, and linking. Only the preprocessing stage, which is basically trivial syntax checking and parsing, produces something you could carry to a machine of another architecture. The next stage, "compilation", produces *assembly language for the target CPU*. It's the actual compilation where all the (computationally) expensive translation and optimization happens.
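
For reference, here's how those four stages look from the command line with a trivial source file (standard gcc flags, nothing Apple-specific):

/* hello.c -- used to show the four discrete GCC stages:
 *   gcc -E hello.c -o hello.i   (preprocess: still portable C)
 *   gcc -S hello.i -o hello.s   (compile: assembly for the target CPU)
 *   gcc -c hello.s -o hello.o   (assemble: object code)
 *   gcc hello.o -o hello        (link: final executable)
 */
#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
    return 0;
}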

Yes, using a GCC front end on top of LLVM you can end up with a stream of "language-independent low-level instructions" to be JIT-compiled, interpreted, or statically recompiled instead of assembly code, but, well, like it or not, that's what Java does. Or Rosetta, for that matter. (Just in that case the input stream is PowerPC binary instead of "generic" pseudocode.) What makes you think this is automagically going to be so much faster across the board at runtime than a Java VM? Putting a different name on something doesn't change what it is. At the *very least*, to improve much on a JIT Java machine, the OS would have to do extensive runtime profiling of the executable and *without human intervention* figure out how to optimize future runs. That would mean gathering massive quantities of data, sorting through it, and caching "optimized" code fragments for future use. So much for trying to speed up OS X and cut down on disk usage...

There's also likely a side issue here with LLVM, which is the question of whether commercial software vendors would even allow their applications to be distributed in the form of a "meta language" that could be readily converted back into C. If commercial software vendors were willing to stomach the idea of distributing their wares in a pliable form, then this whole discussion becomes irrelevant anyway. Just invoke the Xcode framework and compile Photoshop during the install process. There you go: optimized binary, no LLVM or runtime translation required.

Oh, I know, let's just slap a draconian DRM system into the OS core. That'll make it all work better...


I think we have different definitions of *significant*. We agree that Steve's goal is to sell you many types of gadgets. I believe you can look to the introduction of the iPhone as the major issue driving 10.6. OS development resources allocated to Leopard had to be redirected to the iPhone. Apple is a large company, but having lots of devices in need of an OS and accompanying applications is a huge resource drain. If there were one OS and application framework for all these devices, it would streamline development. To what extent it affects outside developers remains to be seen.

There is already one "OS and application framework" for all these devices. It's at the API level. Sharing binary code between them buys you essentially *nothing*, and creates a whole slew of new problems. It's really silly to think that pushing runtime compilation into customers' hands is going to cut down on how many QA engineers you'll need to run a new product through before it goes out into the wild.

If you look at Windows/Windows CE/Embedded Windows, you see different OSs targeted at different device categories and CPUs. Maybe that makes sense for MS, but I think it would overwhelm Apple.

Overwhelm Apple? OS X on iPhone is already != to OS X on Macintosh, the same way Windows Mobile != Windows XP or Vista. The reason programmers are attracted to Windows Mobile (or to the iPhone) is that they can share *high-level* programming code and techniques with their desktop applications, thanks to similar APIs in the respective families of operating environments.

This idea of running the same program binaries across all of Apple's product line in some sort of magic translator is a serious red herring and at cross-purposes to their stated goal of greater efficiency and smaller OS footprint.

Of course, I could be wrong too, and 10.6 may come with your very own Magic Pixie in every box which can turn your morning Cheerios into Gold with a wiggle of her nose and make your SE/30 able to view YouTube videos (in full color, oddly enough) thanks to its magic LLVM. We'll just see. ;^)

--Peace

resman
Re: If I were Steve..


Actually, the data points to a nice performance pop running with 64-bit binaries, due to reduced register pressure.

Apple's thoughts on 64 bitness. See Myth #5

Did you read Myth #5? We were talking about x86 here. Thanks for making my point.


*snip*
Well, that's exactly what the GNU compiler does internally. Seems to work OK. Perhaps not as efficient as a vendor-supplied compiler, but this is how most OS X applications are built.

Unfortunately, here I beg to differ. With *normal* GCC there are four *discrete* stages from high-level code to binary: preprocessing, compiling, assembly, and linking. Only the preprocessing stage, which is basically trivial syntax checking and parsing, produces something you could carry to a machine of another architecture. The next stage, "compilation", produces *assembly language for the target CPU*. It's the actual compilation where all the (computationally) expensive translation and optimization happens.


Honestly, a little research goes a long way. Sheesh.

http://en.wikibooks.org/wiki/GNU_C_Compiler_Internals/GNU_C_Compiler_Architecture_4_1


Yes, using a GCC front end on top of LLVM you can end up with a stream of "language-independent low-level instructions" to be JIT-compiled, interpreted, or statically recompiled instead of assembly code, but, well, like it or not, that's what Java does. Or Rosetta, for that matter. (Just in that case the input stream is PowerPC binary instead of "generic" pseudocode.) What makes you think this is automagically going to be so much faster across the board at runtime than a Java VM? Putting a different name on something doesn't change what it is.

Wow. That shows a very shallow understanding of the process. Have you ever even looked at Java bytecode? Do you understand the difference between a stack-based machine and a register-based machine? Code translation certainly isn't a new science, but some ways of implementing it are better than others.
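
To spell out the distinction being argued over (my own illustration, not from either poster): the same two-line function takes very different shapes in a stack-based bytecode like Java's and in a register-style IR like LLVM's.

/* add.c -- a trivial function used only to contrast the two encodings.
 *
 * Stack-based (what javac emits for the equivalent static Java method):
 *     iload_0      (push a)
 *     iload_1      (push b)
 *     iadd         (pop both, push the sum)
 *     ireturn      (return the top of the stack)
 *
 * Register-based (roughly what an LLVM-style IR looks like):
 *     %sum = add i32 %a, %b
 *     ret i32 %sum
 *
 * A register-style IR keeps values in named virtual registers, which maps
 * more directly onto real CPUs and gives an optimizer more to work with
 * than an operand stack does.
 */
int add(int a, int b)
{
    return a + b;
}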

There's also likely a side issue here with LLVM, which is the question of whether commercial software vendors would even allow their applications to be distributed in the form of a "meta language" that could be readily converted back into C. If commercial software vendors were willing to stomach the idea of distributing their wares in a pliable form, then this whole discussion becomes irrelevant anyway. Just invoke the Xcode framework and compile Photoshop during the install process. There you go: optimized binary, no LLVM or runtime translation required.

Oh, I know, let's just slap a draconian DRM system into the OS core. That'll make it all work better...


This is why we have copyright laws. They apply to binary translations of code. It isn't terribly hard to dissect machine code in binary form, even for x86.
*snip*
Of course, I could be wrong too, and 10.6 may come with your very own Magic Pixie in every box which can turn your morning Cheerios into Gold with a wiggle of her nose and make your SE/30 able to view YouTube videos (in full color, oddly enough) thanks to its magic LLVM. We'll just see. ;^)

--Peace

Let me guess, you're 15 years old? I don't know if 10.6 will have any form of intermediate code representation, but your arguments are based on wrong information and preconceived notions. There's not much room for advancement with that kind of thinking.

Dave...

Eudimorphodon
Re: If I were Steve..


Let me guess, you're 15 years old? I don't know if 10.6 will have any form of intermediate code representation, but your arguments are based on wrong information and preconceived notions. There's not much room for advancement with that kind of thinking.

Oooh, nice. I'm glad you went with the plain insult rather than invoking Hitler.

Edit: Removed the technical arguments; thought better of it. Arguing the technical merits, or lack thereof, just doesn't make any sense anymore. Believe what you want. A conversation this useless deserves to be over.

--Peace

seth_381
I've read the rumors on many sites

and if Apple wants to end support for PPC, I think they would lose most of their user base. I mean, Intel Macs have only been on the market for two years! So there are still many PPC machines out there that are very capable; for example, the Power Mac G5s had dual or quad processors and 64-bit technology. If they are cut out now, some people might be very pissed off.

catmistake
Re: If I were Steve..

Wow... that was a pretty interesting back and forth until the insult. Actually, it started out pretty lame and got more interesting, then took that left turn.

Eudi, you never seemed to mind before when I hijacked a thread... so I guess I'll take that for granted, but one thing came up that just didn't sound right to me:

OS X on iPhone is already != to OS X on Macintosh, the same way Windows Mobile != Windows XP or Vista.

You know I'm no expert, but I always looked at Windows Mobile as a completely different OS from the flagship stuff, as in barely, if at all, related. But I've been commando-line on iPhone for the better part of a year, and except for lots of stuff missing (and the interface being completely different), iPhone OS appears to be so close to Mac OS that I'd call the iPhone a palmtop Mac. Apple of course is pushing it to a totally different space, but the devs I'm cheering on seem to be treating it like a sort of embedded Mac.

Can you help me understand what you are saying here? Is this a blunt statement to make a point? Or do you really see iPhone OS as radically different from Mac OS X?

The closest thing I could say to that is "OS X on iPhone is already != to OS X on Macintosh" in the same way that fluxbuntu on Thinkpad != Kubuntu on PowerBook.

resman
An apology

First, I want to offer an apology to Eudimorphodon. I will admit to getting unnecessarily incensed over exaggerated hyperbole in the technical back-and-forth. I really didn't mean it as a personal attack, but as a way to rein in the discussion and focus on facts. My bad. A little time always adds needed perspective. Sorry.

Second, Windows CE/Windows Mobile/Embedded Windows are different codebases from Vista and XP. OS X 10.6 certainly looks to be incorporating lessons learned from shoehorning OS X into the iPhone; they didn't write a new embedded OS from scratch.

Third,
http://www.macrumors.com/2008/06/15/snow-leopards-grand-central-and-opencl-details/
It will be interesting to see how they define the bytecode to provide enough abstraction to run across the different GPUs, yet still get the performance required to make it worthwhile.

Dave...

Reverend Darkness
To PPC, or not to PPC? That is the question...

I have it on pretty decent authority that PPC support is being dropped in v10.6. That will make Snow Leopard a snow job for a lot of users.

resman
This looks interesting

A little light on details, but it looks promising:

http://www.appleinsider.com/articles/08/06/20/apples_other_open_secret_the_llvm_complier.html

Dave...

GEOS
Okay, I really don't have anything to add

Okay, I really don't have anything to add here 'cuz I'm not really as educated on software as all of you, but I just wanted to say this has been the most interesting thread I've read online in a VERY long time. Probably the most interesting thing I've read on Applefritter 2.0.
