Since the volume of the audio seems to be critical for error-free tape reading, I would like to develop a specific tool for finding the optimal level. After discarding many other ideas, I came up with this candidate:
1) craft a "test" audio signal containing the following pattern of bytes: $FF $FF $FF $FF $FF $FE $00 $01 $02 $03 ... $1F $20, where:
- the $FF bytes are a much shorter version of the header
- $FE is the last section of the header and carries the start bit (LSB = 0)
- $00 to $20 is a fixed 33-byte data block
2) the test audio signal is played into the ACI, repeated over and over in a loop
3) on the Apple-1, a test program scans for at least 3 consecutive $FF bytes (24 long pulses)
4) once the $FF bytes are detected, the scanner waits for the start bit and reads the next 33 bytes just as the normal ACI ROM read routine does
5) after the data block is read, it's compared with the reference, and the index of the first unequal byte is output to the display as a single character (0-9, A, B, C...). So the higher the character, the better the decoding.
6) loop back and read the next pattern (goto step 3)
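To illustrate step 1, here is a minimal host-side sketch in Python that generates the looping test WAV. The timings are placeholders I chose for illustration, not measured ACI values: roughly 250 µs / 500 µs half-cycles for 0/1 bits, square wave, MSB-first bit order. Check these against the real ACI ROM before using it.

```python
import struct
import wave

SAMPLE_RATE = 44100

# Assumed ACI-style half-cycle lengths in microseconds -- placeholder
# values, not measured from the real ROM: '0' ~2 kHz, '1' ~1 kHz.
HALF_CYCLE_US = {0: 250, 1: 500}

def bit_samples(bit, level=0.8):
    """One full cycle per bit: two half-cycles of opposite polarity."""
    n = round(SAMPLE_RATE * HALF_CYCLE_US[bit] / 1_000_000)
    return [level] * n + [-level] * n

def byte_samples(byte):
    """Encode one byte, MSB first (an assumption about the ACI bit order)."""
    out = []
    for i in range(7, -1, -1):
        out += bit_samples((byte >> i) & 1)
    return out

def build_pattern():
    """Short $FF header, $FE sync byte, then the fixed $00..$20 block."""
    pattern = [0xFF] * 5 + [0xFE] + list(range(0x00, 0x21))
    samples = []
    for b in pattern:
        samples += byte_samples(b)
    return samples

def write_wav(path, samples, repeats=100):
    """Write the pattern back to back so it effectively plays in a loop."""
    frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(frames * repeats)
```

A player set to repeat the file, or a large enough `repeats` value, gives the continuous stream described in step 2.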
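Steps 3-5 could be modeled on the host side before committing them to 6502 code. The sketch below works on already-decoded bytes (the real program would work on pulses); the names `scan_stream` and `score_block` are mine, and a full match reports the highest character.

```python
REFERENCE = list(range(0x00, 0x21))  # the fixed $00..$20 data block

# index of first mismatch -> display character (0-9, then A, B, C...)
SCORE_CHARS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWX"

def score_block(received):
    """Step 5: character for the first unequal byte; full match scores best."""
    for i, (got, want) in enumerate(zip(received, REFERENCE)):
        if got != want:
            return SCORE_CHARS[i]
    if len(received) < len(REFERENCE):
        return SCORE_CHARS[len(received)]  # short block counts as a mismatch
    return SCORE_CHARS[len(REFERENCE)]

def scan_stream(stream):
    """Steps 3-4 and 6: hunt for the header, score each block, repeat."""
    scores = []
    i = 0
    while i < len(stream):
        # step 3: look for a run of at least three $FF header bytes
        run = 0
        while i < len(stream) and stream[i] == 0xFF:
            run += 1
            i += 1
        if run >= 3 and i < len(stream) and stream[i] == 0xFE:
            i += 1                              # step 4: consume the sync byte
            block = stream[i:i + len(REFERENCE)]
            i += len(block)
            scores.append(score_block(block))   # step 5: one char per pattern
        else:
            i += 1
    return scores
```

For example, a cleanly decoded pattern `[0xFF]*5 + [0xFE] + list(range(0x21))` scores `"X"`, while corrupting the byte at block index 5 scores `"5"`.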
The point is that the user can adjust the volume on the playback device (PC, smartphone, iPod, etc.) and get immediate on-screen feedback about how well the signal is being decoded.
What do you think of this method? Do you have any suggestions?