I was recently watching Action Retro on YouTube and saw he had a new IIe case and keyboard that he put a GS motherboard in. It got me wondering whether it would be possible to build a new Apple IIe. The case and keyboard can basically be covered with new products being made, as can the power supply and expansion cards, but what about the motherboard? I see there is a new II+ motherboard for sale, but why no IIe motherboard? Is it much more challenging to make? Would it be possible to add enough upgrades to a II+ to essentially make it a IIe? Looking for advice or thoughts.
The problem with trying to clone the //e motherboard is the two big custom chips... the IOU and MMU. Those were VLSI ASICs made for Apple. They haven't been made in years, and there are no stocks of new chips available anywhere. They aren't fully documented as to what they do and how they work internally. It might be possible to reverse engineer them and make a replacement using CPLD or FPGA technology, but it would be a fairly major project. And there just isn't much of a market for it. The //e was made in massive quantities for over 10 years. Apple literally made several million of them. They are still widely available used, and they are fairly cheap. The ][ and ][+ were actually made in far smaller quantities than the //e, so there are fewer of them on the used market. And because all the chips on them were generic, they are much easier to clone. Clones of the ][+ were very common back in the day, and even though some of the chips aren't as easy to get these days, it still isn't a terribly difficult thing to do.
Anyway, I'd love to see someone make replacements for the IOU and MMU... I've got a //e board that is missing one of those. But I don't expect it to happen any time soon. And I've got another spare //e board as well as 4 working //e units... so it really isn't a critical thing for me to get that one board working.
John McMaster can decap a chip and image the metal layer for about $200. That would be a first step in getting them cloned. I'm sure within the Apple II community $400 (or more; I suspect the more money raised, the higher he prioritizes a chip in his backlog) could easily be raised to get both the IOU and MMU imaged.
Of course someone would have to provide the chips as well. And then someone else would have to read out the image. After that, someone could in theory design an FPGA or some other kind of replacement. As I understand it, most of these early-80s chips only need the metal layer image and wouldn't require additional delayering.
At least that's how I understand the process. The Adam community recently had him do the NCR 8338D used for memory and IO in the Adam. And this is all what I gathered from looking over some messages about that process.
So have the Adam people actually been able to create a CPLD/FPGA replacement for that chip? Seems like having an image of the chip would help, but it would still be a huge amount of work to make replacements for chips as complex as the MMU and IOU. It might also be easier to focus on other chips like the IWM and SWIM, which are used in more machines/cards like Macs. As I said before, due to the easy and cheap availability of //e and //c units, there really probably won't be a lot of demand for these chips, and probably even less for new motherboards.
No, nothing has come out of it as far as I am aware other than the die image. This all just happened last month. I'm not sure if they even found someone to read out the image yet.
Even if the chips are still somewhat readily available, it would be nice to not part out systems. And it would be a good idea to at least get the images made so the info is there while the chips are not super scarce and there is a resource available to do the decap and imaging. Then later on the focus could be on reproducing them.
As a retired IC designer, please allow me to point out a few fallacies in the above:
1. In the general case, it is NOT feasible to duplicate an IC by just looking at the topmost metal layer. This was possible only in a very few cases of early so-called "gate arrays" where all the logic was defined by just one metal mask. The Ferranti ULA used in the ZX-81 is one example. But even then you would need access to the (internal) documentation of the gate array manufacturer to know the circuitry beneath the metal layer. In the case of the ULAs, the internal circuits were available because they (Ferranti) thought that designers would use pencil and paper over their "master slice" footprint to actually design and draw the metal layer by hand. Ridiculous! I don't think that anyone succeeded with that. You had to plunk down the "spare change" for the CAD tools to have any chance of succeeding with your ULA design on the first spin.
2. In all other cases you need to analyze all layers. For the simple technologies of the 1970s and the early 1980s this can be done by hobbyists on a budget. Greg James and the Silverman brothers (Barry and Brian) have demonstrated this with the 6502 reverse engineering project which led to the "visual 6502". This thing works. I can vouch for it as I use the netlist which came out of this project in my own cycle-exact Apple-1 emulator. Their reverse engineering of the 6502 using improvised tools and techniques is absolutely outstanding and possibly the finest achievement by hobbyists in the field of reverse engineering I have ever seen in my 40+ years of professional career. It is not for everyone, though. The chemicals needed for the etching processes are dangerous and the fumes, if inhaled, can kill you. These guys knew what they were doing and took the proper precautions. Otherwise they could not have written their papers, because they would have been dead.
3. So far nobody has used the GDS which came out of this project to produce new NMOS 6502s. Why? Well, the process does not exist anymore. The semiconductor industry has moved on by a lot, in the most mind-boggling way. The old wafer fabs have all been dismantled. The only chance I see to re-create a process used for these ancient ICs is to use one of the few "toy" wafer fabs they have at some universities. Provided these are still functional, you could re-create an old NMOS or CMOS process using cheap student slave labor, aka "PhD students". HarHarHar. The world is evil, isn't it?
4. To recreate the process, you need a mask set with process control monitors. There are still some companies in Silicon Valley which can make you a primitive reticle set for a primitive process for maybe $10,000 to $20,000. A true bargain considering the cost of a reticle set in a leading-edge deep-submicron process, which can run into the high six figures. This is why they fire IC designers who need too many spins to make their designs work. My designs always worked without major mask edits, and most went into production after only a few (planned) centering tweaks in a (cheaper) top-level metal mask. But this came at a price... everybody hated me because my designs always took too long and never met the (impossible) milestones the manager scum insisted on to "look good" themselves and collect their "performance bonus" to spend on trophy wives and the golf club. Parasites. Unproductive. Scum. My greatest successes came under a "good" manager (rare!) who trusted me, went to the "money well" again and again, and took the beatings from his superiors. After 3 years (1 year was the original project plan) the company launched my product which came out of this work, and its performance blew the competition out of the water (key performance figures were 10x better than anything the competition had). So I've shown you the two sides of the coin. The trick is to work for the "good" managers who trust you and share your vision. Avoid the grifters who are in that position only for the money. Oh, and the "good" manager I mentioned is now CTO at a large and famous U.S.-based semiconductor company. Not only is he bright and skilled himself, he can smell who the winners are and who the losers are.
5. So, assuming you have recreated the process and you have paid for the reticle set, and your student slave laborers toil in the (not so clean) cleanroom to make you a few wafers. How many good die will come out? For this you need to get your wafers to a tester. OK, there are service providers who have them. But you need a "load board", the "probe insert", and the test program. To develop the test program you need time on the tester to the tune of maybe $1000 per hour. The "probe insert" which carries the fine needles to probe the circuits on the wafer will set you back a few $1000 each. You can't make them yourself. And you need more than one.
6. Now, if you think you are smart and test the die in the final package to avoid the tester (wafer sort) costs, here is a little secret: you will pay $1 per pin on each package to get your die mounted in a side-brazed ceramic package. This is the only way. Mass production with plastic encapsulation happens in Asia, and as a hobbyist you won't order the volume of ICs that these service providers like CARSEM Malaysia would need to even talk to you for a minute. The only viable route is to use a Silicon Valley based service provider who mounts your die for $1 per pin. The whole semiconductor industry uses them... even the big outfits have closed their departments for building up prototypes, because even at $1 per pin, these service providers are cheaper. OK, here is the math: you let them build up 25 ICs of 40 pins each. $1000 charge on your credit card. How many will work when there was no wafer sort (the dreaded "black dot" for bad die)? If made by student slave labor in a university not-so-clean cleanroom? I can't tell, but if you get one that works you will be lucky.
So these are the economics of making ICs even if we assume your own work to reverse engineer them and to make a new CAD database from which a GDS can be exported is free. Oh, and a Cadence Composer and Virtuoso seat can be rented, too, by the hour, but it's not cheap. And with the ancient open source CAD tools like MAGIC you still need to rent at least 10 hours on Cadence to get the DRC errors out.
I know all this because as a young man, I struck riches by writing and selling CAD software for PLDs. Then I had the not-so-bright idea to design my own full custom ICs in 1.2 um CMOS. This was in the early 1990s and already trailing-edge process technology. A few quarter-millions spent here and there later, I had functional prototypes nobody was interested in. So I ruefully became an employee in the semiconductor industry and proceeded to design great ICs on the dime of those pesky capitalistic exploiters of the working class. Now I've been retired for half a decade and am still itching to design, produce, and market my own ICs. But I know it's not viable. With making ICs you can blow through the millions, if not tens of millions, of US$ quickly before you have any product you could produce (only in a foundry!) and sell. If you want your own wafer fab you must be able to write checks for billions of US$. Oh, and buying a decrepit and obsolete fab - you cannot make competitive products with it. All too many attempts to do this have failed. And these failed entrepreneurs were not idiots. They had the know-how and the skills. But they ran out of money...
It's "high tech = high stake" after all.
So forget it. It's a dream. As long as you can make a 'capitalistic exploiter' pay for your great IC designs, you being the employee, it's viable. But those won't bankroll the making of clones of ancient ICs from the 1980s.
Comments invited !
Love the extremely detailed insight! But I personally wasn't envisioning anyone tooling up to make new clone ICs using these die images. Rather designing some kind of replacement using an FPGA (where feasible).
I believe UncleBernie is spot on with his take and experience. The TL;DR is it's a complex situation at best, and a costly nightmare at worst, to produce an IC. The best we will ever get, I believe, is an HDL emulation. Which is one of the projects I am working on. Perhaps in the future 3D printers will advance to the micron level and be able to print dies. Sadly, that isn't any time in the foreseeable future.
As nick3092 mentioned, there are some in the community who have experience with such projects. I can't vouch for the costs quoted from my experiences. However, John and I met a few months back to discuss and review this project, and I provided him with a few sample ICs. I had also sent a lab an IC to be decapped and pics produced (the die image nick spoke of in his post) as a preliminary review of the project, scope, and feasibility. The dies are single-metal depletion-load NMOS, non-planar (meaning layers are not level) process from initial inspection. It's not a small project, nor as simple as I had hoped. So it's going to take a bit of time and energy and cost.
The good news is reproduction modules seem feasible, and once a schematic is produced from the dies, the coding and layout part of the project can commence. Hard to say a time frame, however. But it looks promising at this point in the project that reproductions of both ASICs can be produced. I already have a schematic for the MMU and a proto module. I'm doing some reviews now as time permits around other projects, and then coding and testing will commence. Then cloning the motherboard and BOM will be pretty simple, as it was with the II Plus project. I'll post the usual updates on the new RM blog and RM Wiki sites as the project gets more under way and I have something concrete to show and report on.
I can assure you that this is being done as we speak!
A comment on Uncle Bernie's post #6 above:
$10,000 can buy you much more than a reticle set for "a primitive process", although I guess it depends on what you consider primitive these days. The ChipIgnite product from SkyWater Semi is $10,000 and gets you 100 diced, tested, and packaged ASICs on their 130nm process with the Open PDK.
Of course there's a catch: this is a multi-project shuttle, so you are constrained in die area and I/O placement: the die area is 10 sqmm. But this is certainly more than enough for cloning early 1980s ASICs. Would hobbyists be willing to pay $100 each, plus overheads, for an IOU or MMU?
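To make the per-chip economics concrete, here is a minimal cost sketch. The $10,000 shuttle fee and 100 delivered parts come from the ChipIgnite figure quoted above; the overhead and yield numbers are made-up assumptions for illustration only.

```python
# Rough per-chip cost model for a multi-project shuttle run.
# The $10,000 / 100-part figures are from the ChipIgnite quote above;
# the overhead and yield values below are hypothetical.

def per_chip_cost(shuttle_fee, parts, overhead=0.0, yield_fraction=1.0):
    """Cost per *working* chip: fixed shuttle fee plus any extra
    fixed overhead (test boards, shipping, EDA time), divided by
    the number of parts expected to actually work."""
    working = parts * yield_fraction
    return (shuttle_fee + overhead) / working

# Best case: all 100 parts work, no overhead -> $100 each.
print(per_chip_cost(10_000, 100))               # 100.0

# With $2,000 of assumed overhead and an assumed 80% yield:
print(per_chip_cost(10_000, 100, 2_000, 0.8))   # 150.0
```

The point of the second line: the quoted $100-per-chip floor rises quickly once real-world overheads and less-than-perfect yield are factored in.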
Hard to justify $100 for an MMU or IOU given you can still easily buy complete working //e for under $300... Prices of //e are going up, albeit slowly... 10 or 20 years from now? Maybe it would make sense, but then it probably won't be $100 to make a chip either... Hard to say what demand would be like.
In post #10, Robespierre wrote:
"The ChipIgnite product from SkyWater Semi is $10,000 and gets you 100 diced, tested, and packaged ASICs on their 130nm process with the Open PDK.
Of course there's a catch: this is a multi-project shuttle"
Uncle Bernie answers:
I can't comment on the specific "ChipIgnite" product as I didn't know of it, but the general issues I see with this are these:
- it's a multi-project shuttle. Which means a shared reticle with lots and lots of projects of other parties on it. This of course brings the "mask costs" down from six figures to a per-participant price that is viable. Now, theoretically, if it's done right, you could use a shared reticle set to make production wafers, but alas, there are limits to this and you would still end up with third-party circuitry on your wafers. Which is a huuuge legal issue. This is why larger semiconductor companies do company-wide shared reticles, so they own all the IP on the reticle set. The first production wafers for any new product are also produced with this reticle set. Only when customers (hopefully) order gazillions of one product may the company bean counters decide to buy a reticle set for it.
- you can't put the masks for an ancient NMOS depletion load process into a 130nm CMOS process. You would need to redesign the whole circuitry and all of the layout. And then you would find out that there is a small speck of circuitry surrounded by a large pad ring and the chip area is mostly empty. This makes no sense.
- modern CMOS processes do not easily support +5V supply voltages. Some modular processes allow extra options for what they call "high voltage" transistors and then you could do +5V I/O (or more). But this is costly and requires more process steps and more masks. Which typically is NOT offered on multi project shuttles. I've also found that these "high voltage" transistors may be quirky and may have flaws because they don't have the proper pocket or halo implants you would find in older processes which were specifically developed for the higher voltages. This is not ineptitude of the device designers of the Taiwanese foundry. There is a thermal budget for the whole process flow which cannot be exceeded. Each process step nibbles away from the thermal budget. So they try to get away with the minimum added steps to make these "extra" devices and this means they are not as refined as in the older 1980s or 1990s process technologies. "Leading edge process" does not mean you get better devices. To the contrary. For digital it's not that bad but for analog work it's a major problem leading to very complex and elaborate circuit designs which cost a lot of development time, just to get the same performance for certain analog blocks as 20 years ago, which maybe needed one tenth of the design effort. Managers typically don't understand that. "We gave you the latest and greatest process technology and CAD tools and you incompetent clowns don't deliver designs that work and are on time and on budget. You should all be fired." --- After that a lot of Principal Engineers resigned and walked out of this company over the next few months. So, I've been there, done that. This was more than 20 years ago. So the problem with "leading edge" technology did already exist back then and it only got worse.
The semiconductor industry as a whole struggles with all of this. Nowadays it is incredibly hard to define a product that has a useful function set and a good use of the silicon real estate offered by the leading edge process and packaging technology available. Back in the day when integration density was much lower, you could only make limited functions per die and the combinations and permutations happened at the board level. Nowadays "Mr. Market" demands a perfect, cost efficient, system-on-chip solution that is just right for the application. Any additional feature or peripheral NOT needed by the customer is a waste of money and brings the figure-of-merit down.
For a semiconductor company it is difficult to define and launch products which will be ready for sale in 2 or 3 years and then sell in sufficient quantities to get the invested money back before they are obsoleted by further technical progress. This has turned into a huge problem / challenge and may be the reason why a) more and more smaller semiconductor companies get hoovered up and absorbed / assimilated into the Borg-style semiconductor behemoths, and b) really large corporations who plan to produce hundreds of millions of devices of one type (e.g. Apple) have started to build their own in-house full custom IC design departments. They use silicon foundries, of course. I also think that several other industries currently suffering under semiconductor supply chain issues may reverse course and start up their own semiconductor capabilities again. For instance, FORD (the car and truck maker) once had a large semiconductor division. To avoid being blackmailed / extorted by the large semiconductor outfits. But, of course, if you can't make each and every IC you need in house, then some vulnerabilities remain.
The semiconductor landscape has shifted enormously in the past 50 years. Even as an insider, I can't recognize it anymore. As far as I'm concerned, I'm glad to be retired and out of there. And I've sold all the semiconductor stocks I had collected over the years from my various employers. But that's me --- don't take this as investment advice; do your own research and make your own decisions.
I think regardless where you invest your money nowadays, you will lose. The question is only how much you will lose.
But back to the problem of making replacement ICs for our ancient hardware. Outside the MIC, this is infeasible / not viable. But what is viable is to use one of the more modern programmable logic devices - but not too modern, they must be able to work in a +5V/TTL environment - and design a functional equivalent of the unobtainable ancient IC, on a small daughtercard that plugs into its socket.
A more radical solution of course is to re-invent the whole machine in Verilog or VHDL and then synthesize it into one FPGA. This has been done with some home computers like the Atari 8-bit, but as far as I have looked into these projects there are small details which are lacking because of poorly understood quirks in the original full custom ICs designed back in the 1970s. It will take a long time to investigate and fully understand these "dirt effects" and then implement them in the RTL. But it's doable.
As far as the Apple II is concerned, I was working on a GAL-based re-implementation of it when I was bitten by the Apple-1 bug. So far I have not recovered from this infection ;-)
- Uncle Bernie
UncleBernie: A more radical solution of course is to re-invent the whole machine in Verilog or VHDL and then synthesize it into one FPGA.
This is all way above my head, but I've been watching the MiSTer project, hoping someone will do, or get close to doing exactly this.
Near the end of the misterfpga.org thread, AmintaMister posts: OK, tonight I've tested some games... The problem is that games that load, work, but there are many games that load with regular core and not with yours, eg Transylvania 1985, The Oregon Trail (hangs on first picture), "Birth Of The Phoenix (1981)(Phoenix Software) & Global Thermonuclear War (19xx)(-) & Hitchhiker's Guide To The Galaxy, The (1984)(Infocom)" disk or others that have problem, like Transylvania 1984 (loads, you can play but you get a long series of "I don't understand" from the parser without touching a key (in regular core it works flawlessly)...
Again, they are a long way off from full emulation, but this is far ahead of "poorly understood quirks in the original full custom ICs designed back in the 1970s". And that's specifically Apple IIs, not just computers like the Atari 8-bit computers.
The FPGA development in the MiSTer project is very exciting! Especially given the problems UncleBernie details, yes, re-invent the whole machine! This work has already begun!
In post #13, cuvtixo wrote:
The problem is that games that load, work, but there are many games that load with regular core and not with yours ..."
Uncle Bernie's take on that is:
The Apple II does not give the software running on it any feedback about what happens on the screen (unlike the Atari 8-bit or the Commodore C64), and the graphics modes of the Apple II are simple, easy to understand, and essentially foolproof once you figure out how to get the color burst phase right.
The various "soft switches" in the Apple II also have a very simple and well documented functionality.
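As a minimal sketch of how simple those soft switches are: any 6502 access (read or write) to one of the documented $C050-$C057 addresses flips a video mode flag; there is no data to decode at all. The class name and flag names below are my own, but the addresses and their meanings are the documented video switches.

```python
# Minimal sketch of the Apple II video soft switches: any access
# (read or write) to $C050-$C057 toggles one mode flag. The addresses
# and meanings are the documented ones; the class is illustrative.

class VideoSoftSwitches:
    def __init__(self):
        self.text = True      # TEXT vs GRAPHICS
        self.mixed = False    # full screen vs mixed text/graphics
        self.page2 = False    # display page 1 vs page 2
        self.hires = False    # lo-res vs hi-res graphics

    def access(self, addr):
        """Model any 6502 bus cycle touching $C050-$C057."""
        if   addr == 0xC050: self.text = False   # GRAPHICS on
        elif addr == 0xC051: self.text = True    # TEXT on
        elif addr == 0xC052: self.mixed = False  # full screen
        elif addr == 0xC053: self.mixed = True   # mixed mode
        elif addr == 0xC054: self.page2 = False  # page 1
        elif addr == 0xC055: self.page2 = True   # page 2
        elif addr == 0xC056: self.hires = False  # lo-res
        elif addr == 0xC057: self.hires = True   # hi-res

sw = VideoSoftSwitches()
sw.access(0xC050)   # an LDA $C050 in 6502 code would do the same
sw.access(0xC057)
print(sw.text, sw.hires)  # False True
```

Because the switches carry no data and have no readable state of their own (on the original II/II+), an HDL re-implementation of this part really is as straightforward as the post says.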
The only device in the Apple II system (as far as games are concerned) which is very tricky is the Disk II controller card and the physical floppy disk drive. The latter is incredibly hard to emulate. Basic functionality is easy to emulate but when it comes to all the side and dirt effects that were exploited by the various copy protections of back in the day it gets complicated and tedious.
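To illustrate the "basic functionality is easy" part: the Disk II writes address fields with the simple "4-and-4" scheme, where each data byte is split into its odd and even bits and interleaved with 1s, so every byte on disk has the high bit set and never contains two adjacent zero bits (constraints the hardware imposes on the bit stream). A sketch, with hypothetical function names:

```python
# Sketch of the Disk II "4-and-4" encoding used for address fields.
# Function names are illustrative; the bit manipulation is the
# standard scheme: odd bits in one disk byte, even bits in the
# other, each interleaved with 1s via OR 0xAA.

def encode_44(byte):
    """One data byte -> two disk bytes (odd-bits byte first)."""
    return ((byte >> 1) | 0xAA, byte | 0xAA)

def decode_44(first, second):
    """Invert encode_44."""
    return ((first << 1) | 1) & second & 0xFF

for value in (0x00, 0x3C, 0xC3, 0xFF):
    a, b = encode_44(value)
    assert decode_44(a, b) == value
    # Both disk bytes satisfy the hardware's bit-stream constraints:
    assert a & 0x80 and b & 0x80                      # high bit set
    assert "00" not in f"{a:08b}" and "00" not in f"{b:08b}"
```

The hard part the post refers to is everything beyond this: the denser 6-and-2 nibblizing of sector data, and especially the timing-level quirks of the state machine and drive that copy protections exploited.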
If you go back to the TTL only Apple II with no IOU or MMU, then you have ALL the functionality openly spread out in front of you if you take the fold-out schematic page of the original Apple II manual.
The only dubious "full custom chip" is the 6502 itself. Which has its own quirks that some games and especially some copy protections do exploit to obfuscate the code. To the best of my knowledge (which is limited as my time on the internet is limited) there is no Verilog or VHDL core available which is a 100% perfect emulation of the original 6502 processor as built in a NMOS depletion load technology.
I saw one project which took the netlist of the "visual 6502" project and IIRC they had written a tool which automatically converted this netlist into Verilog code. The final trick needed is to manually add the charge storage and charge sharing effects on the dynamic nodes.
This is not trivial because you need to understand how this type of 1970's style two phase dynamic logic worked, which "modern" designers typically don't understand to the extent needed to make an emulation work.
So some of the "unofficial" instructions of the NMOS 6502 may not work the same way as in the physical silicon, and this alone can crash some games which used them.
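A concrete example of such an unofficial instruction is LAX ($A7 in zero-page form), which loads both A and X with the same value in one step; it is one of the stable, well-documented ones. The toy decoder below (class and structure are my own, and it ignores flags and most of the instruction set) shows why an emulator that only implements the official opcode table breaks software that relies on it:

```python
# Sketch: the unofficial NMOS 6502 opcode LAX zero page ($A7),
# which loads A and X together. An emulator decoding only the
# official table would reject $A7, so software using it misbehaves.
# This toy CPU ignores status flags and implements just two opcodes.

class Tiny6502:
    def __init__(self, memory):
        self.mem = memory
        self.a = self.x = 0
        self.pc = 0

    def step(self):
        opcode = self.mem[self.pc]
        if opcode == 0xA5:              # official: LDA zero page
            self.a = self.mem[self.mem[self.pc + 1]]
            self.pc += 2
        elif opcode == 0xA7:            # unofficial: LAX zero page
            value = self.mem[self.mem[self.pc + 1]]
            self.a = value              # A and X both loaded,
            self.x = value              # as on the real silicon
            self.pc += 2
        else:
            raise NotImplementedError(f"opcode ${opcode:02X}")

mem = [0] * 256
mem[0x10] = 0x42          # data byte in zero page
mem[0:2] = [0xA7, 0x10]   # LAX $10
cpu = Tiny6502(mem)
cpu.step()
print(hex(cpu.a), hex(cpu.x))  # 0x42 0x42
```

The genuinely hard cases are not stable opcodes like LAX but the "unstable" ones whose results depend on the analog charge effects Uncle Bernie describes above.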
So, to succeed with a Verilog or VHDL implementation of the Apple II, you first need to get your 6502 emulation up to snuff. It is doable, but requires a lot of work. There are some torture tests out there (6502 machine code) to exercise a 6502 emulator and they are quite good to expose deviations from the original silicon.
After you got your 6502 emulation up to snuff, you have to get your Disk II emulation up to snuff and I'd think this is more work and it's more difficult than all the rest of the Apple II together.
But it can be done. I am sure. I could do it but I have no time for it. The Apple II probably is the 8-bit era machine which has the best chance to get a 100% perfect implementation in an FPGA before all the other projects for the Ataris or the C64 get there.
And as a bonus: when the great work is done and the RTL has gotten the "gold seal" of approval and is complete with automated test benches (and is well written up to the standards in the semiconductor industry), you can generate the GDS for a full custom implementation in any CMOS process technology within a few days of work. Assuming you have the professional CAD tools and the design kit for the process. All you need is some wealthy guy (or some crowdfunding effort) and you can produce Apple II-on-one-chip.
This is how the various "Atari Flashback" game consoles came into being.
So going the FPGA route is no waste of time. It's the path to success!