DiskII & Smartport Logic Analyzer protocol decoder


Hello All, 

I have been developing two protocol decoders for sigrok / PulseView:

- Smartport

- DiskII

They can analyze each bit, byte, and drive sequence, and log everything, including nibblized and denibblized data, to a log file, with the corresponding tags shown in the GUI.

[image: sigrok protocol decoder]

I have published my code on GitHub and I hope it will help other people on this forum.
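
For readers who have not written a sigrok decoder before, the basic shape is a Python class that libsigrokdecode loads. Below is a minimal sketch assuming the standard api_version 3 decoder interface; the channel and annotation names are made up for illustration and are not the actual DiskII / Smartport decoders published on GitHub:

# Minimal sigrok protocol decoder sketch (libsigrokdecode, api_version 3).
# Names below are illustrative only, not the real DiskII/Smartport decoders.
import sigrokdecode as srd

class Decoder(srd.Decoder):
    api_version = 3
    id = 'diskii_sketch'
    name = 'DiskII sketch'
    longname = 'Disk II read-pulse sketch'
    desc = 'Illustrative skeleton, not the published decoder.'
    license = 'gplv2+'
    inputs = ['logic']
    outputs = []
    tags = ['Retro computing']
    channels = (
        {'id': 'rddata', 'name': 'RDDATA', 'desc': 'Read data pulses from the drive'},
    )
    annotations = (
        ('pulse', 'Pulse interval'),
    )
    annotation_rows = (
        ('pulses', 'Pulses', (0,)),
    )

    def __init__(self):
        self.reset()

    def reset(self):
        self.prev = None

    def start(self):
        self.out_ann = self.register(srd.OUTPUT_ANN)

    def decode(self):
        while True:
            self.wait({0: 'r'})          # wait for a rising edge on RDDATA
            if self.prev is not None:
                # Annotate the interval between two flux transitions; a real
                # decoder classifies these intervals into bit cells and nibbles.
                self.put(self.prev, self.samplenum, self.out_ann,
                         [0, ['pulse', 'p']])
            self.prev = self.samplenum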

Vincent 

 

 

This was a very smart move ...

... so in other words, you have built a "communications channel analyzer" for "SmartPort", which did not exist before.

This is how your work leaves the hobbyist realm and becomes "professional grade".

The only feature you now need to add (presumably in your Floppy Emu first) is injection of errors into the channel.

And once you have injected an error (i.e. some timing jitter tailored to provoke synchronizer failure on the receiver side) you can see if and how the 'SmartPort' routines on the Apple II side cope with it. Do they recognize the error? Do they do a proper retry? And if so, how often do they retry if the error persists?
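
As a thought experiment in software, here is a crude toy model (Python) of what one late transition does to a bit-cell receiver. It assumes idealized 4 microsecond bit cells with a transition in the middle of every '1' cell; it is not a model of the Floppy Emu or of the Apple's actual receiver hardware, just an illustration of why a little jitter near a cell boundary can push a bit into the wrong cell:

CELL_US = 4.0   # nominal bit-cell length, about 4 microseconds

def transitions_from_bits(bits):
    # One flux transition at the centre of every '1' cell; '0' cells have none.
    return [(i + 0.5) * CELL_US for i, b in enumerate(bits) if b]

def bits_from_transitions(times, ncells):
    # Receiver model: whatever cell a transition lands in becomes a '1'.
    out = [0] * ncells
    for t in times:
        cell = int(t // CELL_US)
        if 0 <= cell < ncells:
            out[cell] = 1
    return out

sent = [1, 0, 1, 1, 0, 1]           # arbitrary example bit pattern
times = transitions_from_bits(sent)
times[2] += 2.2                     # push the third transition 2.2 us late (jitter)
received = bits_from_transitions(times, len(sent))
print(sent, received)               # the late transition lands in the next cell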

(Of course, do not overdo the errors ... one bad bit should be enough)

The same kind of test could be done when the Apple sends data to your Floppy Emu, but to inject this kind of error you probably need extra hardware. This could be simple: one one-shot to set the position of the disturbance, another to set its duration, and a mux to select between the original signal and the disturbed one, which could be made wiggly (in time).

Then you can see what your Floppy Emu does if its synchronizer flip-flops go metastable. It should detect the bit error and react in a robust way ... alas, we have no documentation on how Apple (the corporation) intended such errors to be handled on the "Smart Peripheral" side. This should be part of the protocol specification. I did not find such a document.

If you want to look "over the fence" into a different "garden" to see how a proper communications protocol may be specified, look at the Serial Bus Protocol for the Atari 8-bit computers. This protocol was published in the Atari 400/800 Technical Reference Manual, including timing, timeouts, retries, etc. Of course the "SmartPort" is a different animal, but you could get an idea of how such a communications protocol may be specified properly rather than in a haphazard way (which is my impression of 'SmartPort'; unless somebody shows me a proper 'SmartPort' protocol specification from Apple, I will keep thinking it's just a kludge which never worked well enough, so they abandoned it).
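
To illustrate what "specified properly" means in practice (an explicit timeout, a bounded retry count, and a defined checksum), here is a small sketch in Python. The frame layout, ACK/NAK values, timeout and retry limit are all made-up placeholders, not SmartPort's or Atari's actual numbers, and the 'channel' object is a hypothetical interface with write() and read() methods:

MAX_RETRIES = 3        # hypothetical retry limit; a real spec would pin this down
TIMEOUT_S = 0.25       # hypothetical per-frame timeout
ACK, NAK = 0x06, 0x15  # illustrative ACK/NAK codes, not taken from any real spec

def checksum(payload):
    # Placeholder 8-bit XOR checksum, standing in for whatever the spec mandates.
    c = 0
    for b in payload:
        c ^= b
    return c

def send_frame(channel, payload):
    # Send a frame and wait for ACK/NAK, retrying a bounded number of times.
    frame = bytes(payload) + bytes([checksum(payload)])
    for attempt in range(MAX_RETRIES):
        channel.write(frame)
        reply = channel.read(1, timeout=TIMEOUT_S)   # empty/None on timeout
        if reply and reply[0] == ACK:
            return True        # receiver verified the checksum and accepted
        # NAK or timeout: the receiver rejected the frame (or never saw it), retry.
    return False               # retries exhausted, report a hard error upward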

There is an unavoidable problem with all serial communications channels relying on asynchronous clocks: the inevitable occasional synchronizer metastability leading to bit errors. The protocol must make sure that these bit errors can be detected reliably enough that bogus data is never accepted as true, and it must issue a retry, telling the sender to re-send the data frame which was found to be impaired. This hinges on checksums. And I do believe that both the 8-bit EOR "checksum" used by Apple in the DISK II system (possibly inherited from their cassette interface checksum) and Atari's "add and re-add the carry" checksum are questionable because they are weak. The 16-bit CRC used in most floppy disk controllers of the time is stronger, but still not absolutely failsafe; the designers had to trade strength against hardware expense and chose a 16-bit CRC for a reason. Both Apple and Atari chose much weaker checksums which dispensed with the CRC hardware completely. However, the Atari floppy disk system still used CRC on the disk itself (they used LSI FDC chips made by Western Digital); only their serial protocol had the weaker checksum. Apple used the weaker checksum throughout, even on the floppy disk itself.
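
To see the weakness concretely, here is a small Python comparison between a running 8-bit XOR checksum and a bit-serial CRC-16/CCITT (polynomial 0x1021, as used by the WD floppy controller family). The example data bytes are arbitrary; the point is that flipping the same bit position in two different bytes slips straight past the XOR checksum but not past the CRC:

def eor_checksum(data):
    # Running 8-bit XOR, the same idea as the Disk II sector checksum.
    c = 0
    for b in data:
        c ^= b
    return c

def crc16_ccitt(data, crc=0xFFFF):
    # Bit-serial CRC-16/CCITT, polynomial 0x1021, initial value 0xFFFF.
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

good = bytes([0xD5, 0xAA, 0x96, 0x01, 0x02, 0x03])
bad = bytearray(good)
bad[3] ^= 0x10      # flip bit 4 in one byte ...
bad[4] ^= 0x10      # ... and the same bit position in the next byte

print(eor_checksum(good) == eor_checksum(bad))   # True  -> error goes undetected
print(crc16_ccitt(good) == crc16_ccitt(bad))     # False -> error detected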

Draw your own conclusions about the reliability of either system.

But we can't change it ... we can just try to work around the issues and, of course, keep plenty of backups, at least for critical data you don't want to lose. None of these legacy storage systems had the error correction capabilities which are now ubiquitous, thanks to the digital transistors in ICs (which are needed to implement ECC) costing almost nothing anymore. Today, ECC has gotten so powerful that they can actually sell large flash memory devices whose storage cells are not 100% reliable at keeping the bit(s). They just fix toppled bits on the fly when you use such a solid state mass storage device (i.e. a USB stick). This is totally transparent to the user, who is clueless about what is going on. But when the stick is not used for prolonged periods of time, the ongoing bit rot (which is everywhere all over the flash memory array) may become uncorrectable by ECC, and then your data is toast.

Oh, and don't get me going on the PRML method they use in HDDs to push bit density. If you knew how that "magic" works, you would not want to entrust your data to a single HDD, ever, but use RAID arrays, which are the backbone of "cloud storage". But then the greedy corporations who provide cloud storage skim, inspect, analyze and steal your data unless you use strong encryption on your side. They actually "train" their "AI" with stolen data they were entrusted with by gullible users of their "cloud". They also "train" their "AI" on open source software - this is how the "AI" can be so good at writing code, it's all stolen (except, of course, it's not really "theft" because the user's data is still there, unlike your car being stolen). There are lots of lawsuits going on at the time of this writing about intellectual works having been "stolen" by "AI". Let's see what comes out of it. But this is another topic - I just wanted to give you a warning about the pitfalls of current IT.
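
To make the "fix toppled bits on the fly" idea concrete, here is a toy Python example using the classic Hamming(7,4) code, which corrects any single flipped bit in a 7-bit codeword. Real flash controllers use far stronger codes (BCH, LDPC), so this is only an illustration of the principle:

def hamming74_encode(d):
    # d is a list of 4 data bits; parity bits go at positions 1, 2 and 4 (1-based).
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # Recompute the parities; the syndrome points at the flipped bit (1-based).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the bit the syndrome points at
    return [c[2], c[4], c[5], c[6]]       # return the 4 data bits

code = hamming74_encode([1, 0, 1, 1])
code[5] ^= 1                              # topple one bit in "storage"
print(hamming74_correct(code))            # -> [1, 0, 1, 1], the error was fixed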

 

- Uncle Bernie
