Tag Archives: audio

Red Hot Radio

As it turns out, just like in audio engineering, in 2-way radio you can’t just look at the power meter and assume your signal is great. It might actually be unintelligible.

Back in my early days (6 months ago), I noticed that my RF power meter seldom hit 100W on voice. I know that the duty cycle of voice on sideband is significantly less than 100%, but even the peaks weren’t hitting it. Frustrated with apparently not getting my signal out of the region, I turned on the built-in audio compressor, tweaked the compression amount, and got that average power a bit higher to somewhere that looked right.

As I learned recently on the regional AARC 10-Meter Net (Sundays 3pm CT on 28.410MHz USB), my fellow net participants complained that there must be some RF feedback into my mic or something, because my vocal peaks were seriously hot and distorted. They had been complaining for a few weeks, and I had assumed it was insufficient grounding in my car. While discussing it during a net, I mentioned that I had compression turned on; they asked me to turn it off, and the distortion went away.


So, uh, remember that owner’s manual thing, and the part in it that tells the owner how to configure mic gain and compression? Yeah, so if I follow that, look at the ALC (automatic level control) meter instead of the RF power meter, and adjust things so the average and peaks stay within a specified range, then my signal should sound better.

I hooked up my dummy load, went to 10m sideband, spoke gibberish into the mic and tweaked the mic gain and compression amount to a range that makes sense (at least visually). I’ll try an A/B test on the next 10m net to see if it worked.

It’s not the output power that wins friends and gains contacts; it’s the signal quality. You can reach across the country on 10W if your antenna is good, the sky is right, and your signal is clean. Otherwise, you’re splattering your distorted RF energy across the band, you’re burning battery power, and you’re wasting someone else’s time.

Past TI 99/4A, Present Foray

My current project is one of steep learning curves, long memories, and hours of waiting and iterations.

Back in high school, I had a TI 99/4A personal computer. We were too poor for the disk drive modules, so I had to store my programs on cassette tapes. I spent a lot of my time working on programming that thing, from those little games that were published in the BASIC programming manual all the way to the big-idea experimental programs and games that I worked hard to complete (my biggest and most persistent project was a Pac-Man clone). Years of my youth were blown writing BASIC code on that machine, and now that all of my TI equipment is gone into the past, all I have left as a vestige is a short stack of audio cassette tapes and a few handwritten notes.

I decided some time ago that I needed to find a way to transcribe those disintegrating cassettes into sound files, and then transcribe those sound files back into the program code. Call it a sick drive to historically document and preserve the past, but I am intensely interested in seeing just how embarrassingly horrible my code was back then compared to now. BASIC was every programmer’s wet nurse, and she fed me well enough to grow up and see that not every design decision was right.

As a thought exercise, it’s obvious to me that something along these lines is doable: there are several TI 99/4A emulators out there, and many of them can read tape-dump audio files. I’ve tried one of the dump converters (CS1er) and had no luck with my files. So then, this is a perfect opportunity to try my hand at rolling my own solution.

The first step is to play the cassettes. Easy enough, you’d think. But there are many, many problems that make this a non-trivial task. Do you know how hard it is to find a cassette player in this modern age? The Walkman I kept as my constant companion in college is so old that the rubber drive belt has stretched, dried out, and broken. The dual-deck recorder I bought at a pawn shop two years ago, when I first envisioned doing this project, is likewise having belt problems and is producing substandard audio. So this weekend I went to no fewer than four electronics stores before I found a dictation-style portable cassette recorder. Even the salespeople got a chuckle when I asked them if they had cassette players. Yeah, I realize we’re a long way from the ’80s, but anything would help.

The next step is to connect that cassette player to a computer and record. I’m fortunate to have some semi-professional recording gear in my possession. This was a no-brainer. The problem, however, is that the gear records with too much fidelity. See, when the TI outputs sound to be recorded onto tape, it uses a square wave; the cassette recorder tries to record that square wave onto magnetic tape by modulating a magnetic field in the recording head. This changes the shape of the waveform and adds its own “color” to it. Likewise, the magnetic tape also changes the shape by its own response to the head’s magnetic field. And on playback, the playback head changes the shape yet again by its response to the magnetic domains on the tape surface as they pass under the head.

By the time the audio comes back out of the system, the wave shape is so far removed from anything resembling a square wave that any device recording it with high fidelity is going to misinterpret the signal. I had to go lo-fi and use the cheapest player and feed it into my cheapest audio interface with the crappiest settings. This method seems to give good approximations of the original square waves, since the overdriven inputs on the audio interface are clipping the wave tops. However, as I discovered tonight, the recorded files are unusable. Here’s the short story on why:

The TI (like most other data storage systems that use FSK) decodes its signal by looking at the “zero crossings” of the waveform: the moment where the waveform goes from a positive voltage, crosses zero volts, and goes to a negative voltage (and vice-versa). The internal demodulation circuitry counts the amount of time between zero crossings (a phase) and uses that to determine the data bit value. With the TI, a bit of value 0 is one phase change lasting time T, and a bit of value 1 is two phase changes of T/2 each, meaning two changes within time T.
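To make that concrete, here’s a rough sketch in the C I’m relearning of how the gap between two zero crossings might be classified once T is known. The ±25% tolerance is purely my own guess, not anything from the TI documentation.

    /* Classify the gap between two zero crossings, given the nominal
     * bit period T (both measured in samples).  A gap near T is the
     * single phase of a 0 bit; a gap near T/2 is half of a 1 bit, so
     * the caller must pair it with the next half-period gap.  The
     * +/-25% tolerance is an assumption, not the TI spec. */
    int classify_gap(double gap, double T)
    {
        const double tol = 0.25;
        if (gap > T * (1.0 - tol) && gap < T * (1.0 + tol))
            return 0;               /* one full period: a 0 bit      */
        if (gap > (T / 2.0) * (1.0 - tol) && gap < (T / 2.0) * (1.0 + tol))
            return 1;               /* half period: half of a 1 bit  */
        return -1;                  /* spurious or distorted gap     */
    }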

At the beginning of a data file, the TI encodes a few seconds of zeros so the internal demodulator circuitry can get a feel for how long a 0 is. This becomes time T. The timing header is then punctuated by a string of eight 1 bits to tell the circuitry that actual data is about to start, then there are a few bytes to tell how many data record blocks are in the file, and then the remainder of the file is those data records.
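Here’s a rough idea of how a software stand-in for that circuitry might look: average the gaps in the all-zeros leader to establish T, then watch the decoded bit stream for the run of eight 1s. The averaging approach and the function names are my own assumptions about a software implementation, not a description of the TI’s actual circuit.

    #include <stddef.h>

    /* Estimate the bit period T by averaging the zero-crossing gaps
     * in the all-zeros leader.  'gaps' holds the sample counts
     * between consecutive crossings; the leader runs for a few
     * seconds, so averaging the first few hundred gaps should do. */
    double estimate_period(const double *gaps, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += gaps[i];
        return n ? sum / (double)n : 0.0;
    }

    /* Scan decoded bits for the eight consecutive 1s that mark the
     * end of the timing header.  Returns the index just past the
     * marker, or -1 if it never shows up. */
    long find_data_start(const int *bits, size_t n)
    {
        int run = 0;
        for (size_t i = 0; i < n; i++) {
            run = (bits[i] == 1) ? run + 1 : 0;
            if (run == 8)
                return (long)(i + 1);
        }
        return -1;
    }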

Some of the problems with the files I’ve transcribed so far come down to where the waveforms cross zero. Sometimes the signal is so low that you get extra zero crossings inside a zero bit, and sometimes the positive phases are longer in duration than the negative phases. This is an analog signal problem rooted in the AC gain and DC bias of the playback deck. The secret is to record the signal loud enough that the squiggly caps of the waveform get clipped and flattened, but also low enough that the phases themselves are even in length. Both of these are major problems I’m having on this front.

What I might do is record below the clipping point, and then use the audio editor to shift the waveform up or down so the phases cross equally. Not sure yet. But once I get the playback problems hammered out — and I suspect there’s no one-hammer-fits-all approach, since every file was recorded at different times on different tape types with different tape machines and settings — then I’ll have audio files with just enough fidelity to pull data with confidence. I’m crossing my fingers.
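If shifting the waveform by hand in the editor turns out to be too tedious, the crude software version is just subtracting the average sample value from the whole file before hunting for zero crossings. A sketch, assuming the bias is constant across the recording (a proper high-pass filter would be the fancier route):

    #include <stddef.h>

    /* Remove a constant DC bias by subtracting the mean sample value,
     * so the positive and negative phases cross zero more evenly.
     * This whole-file version assumes the bias doesn't drift; a
     * sliding average or high-pass filter would handle drift. */
    void remove_dc_offset(float *samples, size_t n)
    {
        double mean = 0.0;
        for (size_t i = 0; i < n; i++)
            mean += samples[i];
        if (n)
            mean /= (double)n;
        for (size_t i = 0; i < n; i++)
            samples[i] -= (float)mean;
    }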

The next step in this process is programming the software to demodulate these files and output the binary program code. I’m using this as a perfect opportunity to relearn the C programming language, which excels at this kind of low-level data work. So far, I’ve learned how to use external libraries to read audio files, I’ve learned how to scan those files and find all of the zero crossings, and I’ve made rudimentary attempts at decoding the 1’s from the 0’s. It is at this stage that I’m having problems with the good tape dumps that I’ve made so far.
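The reading-and-crossing-finding part looks something like the sketch below, assuming libsndfile for the audio file reading (one common choice, and an assumption on my part) and a mono recording.

    /* Read a mono audio file with libsndfile and print the sample
     * index of every zero crossing.
     * Build with: cc find_zc.c -lsndfile  (file name is illustrative). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sndfile.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s dump.wav\n", argv[0]);
            return 1;
        }

        SF_INFO info = {0};
        SNDFILE *sf = sf_open(argv[1], SFM_READ, &info);
        if (!sf || info.channels != 1) {
            fprintf(stderr, "couldn't open %s as a mono file\n", argv[1]);
            return 1;
        }

        float *buf = malloc(sizeof(float) * (size_t)info.frames);
        sf_count_t n = sf_read_float(sf, buf, info.frames);
        sf_close(sf);

        /* A crossing is any pair of adjacent samples whose signs differ. */
        for (sf_count_t i = 1; i < n; i++)
            if ((buf[i - 1] < 0.0f) != (buf[i] < 0.0f))
                printf("%lld\n", (long long)i);

        free(buf);
        return 0;
    }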

Apparently, most of the usable files have spurious data encoded in them. Remember when I said that 1 bits were two phase changes within time T? Well, if you see a phase change in T/2, you can assume that the next phase will also be in T/2 time. If so, then you have a 1 bit. Well, what if you have 3 consecutive phase changes that are T/2 each followed by one change in time T (a 0 bit)? What is that third T/2 phase change? I have no idea how that gets in there, and have no idea what to do with it. These are more problems to work through. Maybe they’re data markers. Maybe they’re tape playback variations. Maybe the TI I owned had some non-standard method for storing data. Who’s to know? I don’t, at least not at this point.

The final step in this process, assuming I work through all of the other milestones, is to take that decoded stream of ones and zeros and assemble them into data records, check them for consistency, and then decode those into bytes, and then letters, and then the actual source code that my skinny fingers hammered out on that full action keyboard by the light of my color TV oh so many years ago. This early in the game, this step seems like a stretch goal; I have so many milestones to pass in these most rudimentary levels to attain that sort of sophistication. I have to put myself into the mindset of how things were implemented with analog electronics to figure out how best to proceed with deciphering these signals from the past.
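Even this early, the bit-to-byte packing part of that final step is at least easy to sketch. The most-significant-bit-first order below is an assumption on my part; if the decoded bytes come out as garbage, flipping the shift direction is the first thing I’d try.

    #include <stddef.h>

    /* Pack a stream of decoded bits into bytes, MSB first (an
     * assumption; I haven't verified which order the TI writes).
     * Returns the number of whole bytes written to 'out'. */
    size_t bits_to_bytes(const int *bits, size_t nbits, unsigned char *out)
    {
        size_t nbytes = 0;
        for (size_t i = 0; i + 8 <= nbits; i += 8) {
            unsigned char b = 0;
            for (int j = 0; j < 8; j++)
                b = (unsigned char)((b << 1) | (bits[i + j] & 1));
            out[nbytes++] = b;
        }
        return nbytes;
    }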

Maybe then I’ll be able to touch the face of that young, hopeful kid I once was.

Crystal Clear

I suppose the downside to using FLAC as a codec for storing your music is that the file sizes are much, much larger than MP3. Based on my current statistics, each album will average around 340MB on disk, which seems like a lot, but it’s not bad considering a standard CD holds roughly 700MB per disc.

Here’s a sample comparison between MP3 and FLAC using Rush’s album “Presto”. The MP3s were generated with the LAME encoder at 192kbps, 44.1kHz, stereo. The FLACs were generated with the FLAC encoder at its medium compression setting.

  • MP3: 73282 KB (71.5MB)
  • FLAC: 342188 KB (334.2MB)
  • Size ratio, FLAC to MP3: about 4.67x (roughly 467%)

That extra quality comes at a cost. However, with the dropping prices of large hard drives, storage space becomes inconsequential.

The second drawback of using FLACs instead of MP3s is one of hard drive performance. With the smaller MP3 files, the audio player can read in the entire file and cache it in memory instead of hitting the disk constantly for the next data block to decode. FLAC players, unless they’re written to use a larger block of memory to cache the larger file, will have to hit the disk constantly throughout playback. You may run into situations, as I have, where the player will run out of audio data to send to the speakers if you’re doing something that’s creating extra disk activity. Saving files, copying files, anything to do with adding work to the disk may crowd the music player’s file accesses so it has to stand in line to read the data. This can be overcome with faster disks, larger caches, or smarter music players.

All that being said, I’m glad I’m switching to FLAC. I’m actually hearing the music clear as a bell, just as it was mastered to the actual CD. All the little nuances, the sonic fluttering in the background, the tiny little noises in the studio: it’s all there. And since FLAC is a perfect copy of the CD material, it retains the aural phasing and panning between stereo channels, so if the material’s recorded to “come out of the speakers”, then it comes out of the speakers. MP3 processes all this and crunches it down to just the aurally important pieces of the sound, dropping the rest.

It’s good to hear my music again.

Catching FLAC

Last weekend, I began the slow, arduous process of re-ripping my entire CD collection into files easily playable on my computer. This time, instead of ripping into 192kbit MP3 with the LAME codec (like I did last time), I’m ripping them into FLAC. This has important implications.

First and foremost is that FLAC is lossless, meaning no data is thrown away in the transition from CD to the final sound file. MP3 is a lossy codec; it uses tons of statistical mojo to analyze the sound data of the CD and throw away the bits that your ears can’t hear, crunching the file size tremendously. The problem with this method is that you lose the quieter nuances of your music. FLAC’s strength is that it takes the input waveform, chops it up into small, easy-to-compress chunks, and stores them compactly, so the file is smaller than the original uncompressed form, but on playback the audio is a perfect, exact copy of its original form.
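To give a feel for why that round trip can be perfect, here’s a toy sketch of the predict-and-keep-the-difference idea in C. Real FLAC uses higher-order linear prediction plus Rice coding, but even this crude neighbour-difference version shows that nothing gets lost.

    #include <stdio.h>

    /* Toy illustration of lossless predictive coding: store each
     * sample as the difference from the previous one (small,
     * easy-to-compress numbers), then rebuild the original exactly.
     * Real FLAC predicts with higher-order filters and entropy-codes
     * the residual, but the principle is the same. */
    int main(void)
    {
        int samples[8]  = {100, 102, 105, 103, 98, 97, 99, 104};
        int residual[8], rebuilt[8];

        for (int i = 0; i < 8; i++)
            residual[i] = (i == 0) ? samples[i] : samples[i] - samples[i - 1];

        for (int i = 0; i < 8; i++)
            rebuilt[i] = (i == 0) ? residual[i] : rebuilt[i - 1] + residual[i];

        for (int i = 0; i < 8; i++)
            printf("%4d %4d %4d\n", samples[i], residual[i], rebuilt[i]);
        /* The rebuilt column matches the originals exactly. */
        return 0;
    }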

Secondly, since FLAC doesn’t compress the file sizes as well as MP3 (with the obvious quality tradeoffs), the overall space needed to store my music collection has grown tremendously. Instead of storing an entire album in roughly 80 megabytes of space, it now takes an average of 350 megabytes. That’s a large bite to swallow, but with the falling prices of high-capacity hard drives, it’s nothing nowadays. Considering the audio CD format stores around 700 megabytes, that’s not so bad.

I’ve been meaning to do this, because even with my bad ears I can still sometimes hear the strange audio artifacts of MP3 compression (called “sizzle” in the industry) when I’m listening to my stuff. After I ripped my first disc and gave it a listen, I was shocked at the quality difference. There were little pieces of the sound, stuff from the studio, or the audience, or quiet stuff put into the mix, that I never knew were there after listening to the MP3-encoded form for years. The sound came out of my speakers; FLAC preserves in the final file the exact stereo phasing that’s mixed into the CD, and no amount of MP3 bitrate is going to capture that level of nuance. I’m still shocked.

So last weekend, I bought a 1 Terabyte hard disk (that’s roughly 1,000,000 Megabytes), installed it, and started ripping the CDs on my shelf. Within two days, I had ripped the shelf of CDs I’ve acquired since 2007: about 60 discs total. And then I cracked open the 120-pound crate of CDs that I’ve collected since my first disc in 1991. These were packed up at my last place, and I’ve just now gotten around to digging them out. I’m about 1/8th of the way through my entire collection, so I expect this to take a while.

When it’s all said and done, my hope is that I will never have to break out a CD again to get quality audio. The end FLAC files can be used as perfect copies to produce any sort of MP3, OGG, or next-generation compressed audio file for ease of portability. For anything else (like listening at home), I can rely on the FLAC.