Jun 9, 2014

Time-coding audio files

One day you'll need to include real-time UTC timestamps in audio. It's useful when reconstructing events from long, unsupervised surveillance microphone recordings, or when constantly monitoring and logging radio channels.

There's no standard method for doing this with WAV or FLAC files. One method would be to log the start time in the filename and calculate the time based on audio position. However, this is not possible with voice-activated or squelched recorders. It also relies on the accuracy and stability of the ADC clock.

I'll take a look at some ways to include an accurate timestamp directly in the in-band audio.

Least significant bit

Time information can be encoded in the least significant bit (LSB) of the 16-bit PCM samples. This "steganographic" method requires a lossless file format and lossless conversions. The script below truncates all samples of a raw single-channel signed-integer PCM stream to 15 bits and inserts a 20-byte ISO 8601 timestamp in ASCII roughly every second, preceded by a "mark" start bit. The outgoing PCM stream is then sent to SoX for WAV encoding. On playback, the LSB can be zeroed out to get rid of the timestamps; but the WAV can also be played as-is, since at an amplitude of −96 dB the "ticking" will be practically inaudible.
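The original script is written in Perl and is not reproduced here; as a rough illustration, the same idea can be sketched in Python. The function and variable names (`embed`, `timestamp_bits`, `RATE`) and the fixed 44.1 kHz rate are my assumptions, not the original script's.

```python
from datetime import datetime, timezone

RATE = 44100  # assumed sample rate of the incoming PCM stream

def timestamp_bits(ts):
    """Yield a 'mark' start bit, then the ASCII bits of a 20-byte
    ISO 8601 timestamp, least significant bit of each byte first."""
    yield 1
    for byte in ts.encode("ascii"):
        for i in range(8):
            yield (byte >> i) & 1

def embed(samples, rate=RATE, now=None):
    """Truncate 16-bit samples to 15 bits (clear the LSB) and hide a
    fresh timestamp in the LSBs roughly once per second of audio."""
    out = []
    pending = iter(())                  # bits still waiting to be written
    for n, s in enumerate(samples):
        if n % rate == 0:               # timed by an arbitrary sample counter
            stamp = (now or datetime.now(timezone.utc)).strftime(
                "%Y-%m-%dT%H:%M:%SZ")   # exactly 20 bytes
            pending = timestamp_bits(stamp)
        out.append((s & ~1) | next(pending, 0))  # LSB carries the data
    return out
```

In practice the sample stream would arrive on stdin as raw 16-bit PCM and the output would be piped to SoX, as described above; the list-based interface here is just for clarity.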

Note that the start bit of the timestamp marks the moment the sample reached this script, which could differ by hundreds of milliseconds from the actual moment of reception at the microphone. Also, the timestamp does not mark the start of a second; it is timed by an arbitrary sample counter. Alternatively, one could poll the clock and write the timestamps in a continuous manner.

The above script could be modified to interface with my squelch script, by only inserting timestamps when squelch is not active. The resulting audio could then be efficiently encoded as FLAC.

lsb-time-read.pl reads back the timestamps, also printing the sample position of each. Below is a sound sample of a clean signal followed by a timestamped one.
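lsb-time-read.pl is likewise Perl and not included here. A minimal Python sketch of the decoding side might look like the following; it assumes the embedding format described above (a mark start bit followed by 160 timestamp bits in consecutive LSBs), and the function name is mine.

```python
def read_timestamps(samples):
    """Scan sample LSBs for a start bit, then read the next 160 LSBs
    as a 20-byte ASCII timestamp.  Returns (position, text) pairs."""
    found, n = [], 0
    while n < len(samples):
        if samples[n] & 1:
            if n + 161 > len(samples):
                break                   # truncated timestamp at end of stream
            bits = [samples[n + 1 + i] & 1 for i in range(160)]
            text = bytes(
                sum(b << i for i, b in enumerate(bits[j:j + 8]))
                for j in range(0, 160, 8)
            ).decode("ascii", "replace")
            found.append((n, text))     # sample position of the start bit
            n += 161
        else:
            n += 1
    return found
```

This naive scanner relies on the LSBs being zero between timestamps, which the embedding step guarantees.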

Lossy-friendly approach

Lossy compression, by definition, does not retain the numeric values of samples, so they can't be treated as bit fields. Instead, we can use an analog modulation scheme like binary FSK. MP3 and Ogg Vorbis encoders will, at a reasonable bit rate, retain the structure of a sufficiently slow FSK burst. This method will work even if the timestamping phase is followed by an analog conversion.

Using the ultrasonic part of the spectrum comes to mind, but unfortunately such high frequencies are mostly removed by a low-pass filter at the encoder. However, if the recording consists of narrow-band speech, we can use the higher end of the remaining spectrum and filter it out afterwards. In the case of squelched conversation, we could write the timestamp only at the beginning of each transmission; this way it could even sit in the speech frequencies.

fsk-timestamp.pl embeds the timestamps into PCM data; they can be read back using minimodem --rx --mark 11000 --space 13000 --file stamped.wav -q 1200.
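Again, the Perl script itself is not reproduced here. A hedged Python sketch of the tone generation follows, using the mark/space frequencies and baud rate from the minimodem command above; the 48 kHz rate, the 8N1 framing (minimodem's default), and all names are my assumptions.

```python
import math

RATE = 48000                      # assumed sample rate
BAUD = 1200                       # matches the minimodem call above
MARK, SPACE = 11000.0, 13000.0    # mark = 1, space = 0

def fsk_bits(text):
    """8N1 async framing: start bit (space), 8 data bits LSB first,
    stop bit (mark) -- the framing minimodem decodes by default."""
    for byte in text.encode("ascii"):
        yield 0
        for i in range(8):
            yield (byte >> i) & 1
        yield 1

def fsk_burst(text, rate=RATE, amp=0.5):
    """Render text as a phase-continuous binary FSK burst of floats."""
    out, phase = [], 0.0
    spb = int(rate / BAUD)        # 40 samples per bit at 48 kHz
    for bit in fsk_bits(text):
        step = 2.0 * math.pi * (MARK if bit else SPACE) / rate
        for _ in range(spb):
            phase += step
            out.append(amp * math.sin(phase))
    return out
```

The burst would then be mixed into the audio at the chosen moments; keeping the oscillator phase-continuous avoids clicks at bit boundaries, which also helps the burst survive lossy encoding.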

A sound sample follows.


  1. Hi Oona, What if you use the TC as the dither instead of using noise?

  2. Is this supposed to be communications channel audio or is this supposed to be music?

    If the former, consider using a subcarrier with WWVB tones and encoding (100 Hz). There are lots of chips out there that can pick this up and it is usually benign enough that you can filter it out of most audio streams. However, it takes an entire minute to send the time.

    The LSB stealing trick is not new; I have seen it used with T1 carriers. If you ever have to choose T1 framing methods, take a look at the difference between a plain Super Frame and an Extended Super Frame. The ESF frames have 4 kbps channels which steal the LSB of several circuits.

  3. Consider using spread spectrum noise in the background. Your data could be sent as alternating Walsh Codes. A simple Walsh-Hadamard transform would then make it possible to extract the time from a background noise level. This ought to be more resilient.

    1. Thanks, very interesting insight.

    2. Spread spectrum signalling does work well for this.

      We use m-sequences for through-water comms and range measurement, providing time-of-flight resolution of about 0.2 milliseconds, with a signal that can be detected at −6 dB SNR. We often detect on multiple ADC channels with only a single input channel connected, just from the ADC crosstalk!

      It also survives lossy compression well. For testing, I regularly play pre-canned MP3s of test signals from my mobile phone through air into the receive hydrophone. Spread spectrum is very tolerant of channel fading (i.e. what lossy audio codecs do in suppressing unnoticeable frequencies), so long as correct phase is retained (which it is, otherwise music would sound terrible!)
