Audio Sample Collection

I thought I would get smart and set multiple interval timers in an attempt to collect more audio samples in a short period of time. My target was to collect them every 1 millisecond instead of every 3 to 4 milliseconds.

var SAMPLE_DELAY_MS = 1;
var sampleIntervalIds = [];
const SAMPLING_INTERVAL_COUNT = 3;
function startCollectingSamples() {
  for(let i = 0; i < SAMPLING_INTERVAL_COUNT; i++) {
    if(sampleIntervalIds[i]) continue;
    sampleIntervalIds[i] = window.setInterval(collectSample, SAMPLE_DELAY_MS);
  }
}
function stopCollectingSamples() {
  sampleIntervalIds.forEach(window.clearInterval);
  sampleIntervalIds = [];
}

It looked like it was working at 5 milliseconds until I saw that the sample count was zero for some segments. I started highlighting sample counts under 3; collection was unreliable. Then I looked at the performance.now() timer when collecting a sample and found that, most of the time, it hadn’t changed since the last call, so I started dropping the duplicate samples.

function collectSample() {
  const time = performance.now();
  if(frequencyOverTime.length !== 0) {
    if(time === frequencyOverTime[0].time) {
      console.log('duplicate time', time);
      return;
    }
  }
  // do stuff
}

The result was a whole string of console logs, in runs of 1 to 2, with the same time. Yes – sometimes it was called with a different timestamp, but more often than not, it fired in succession at the same time.

If I couldn’t get an interval to fire more often than every 3 milliseconds, maybe I could stagger them to fire every 3, 4, and 5 milliseconds so that they don’t fire at the same time… I used the index as an offset.

function startCollectingSamples() {
  for(let i = 0; i < SAMPLING_INTERVAL_COUNT; i++) {
    if(sampleIntervalIds[i]) continue;
    sampleIntervalIds[i] = window.setInterval(
      collectSample,
      SAMPLE_DELAY_MS + i
    );
  }
}

Unfortunately this did not work either. It seems like the browser just calls all pending interval functions every 3 to 4 milliseconds. It just doesn’t have the precision to call them faster.

I could attempt to delay 1 millisecond before initiating the next interval, but the question is how? window.setTimeout has the same limitation where 3 to 4 milliseconds is the minimum it will actually delay. The browser is not ideal for such a high precision while keeping the thread available for user interaction with the interface.
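One escape hatch worth noting: timers are clamped, but a MessageChannel post queues a macrotask with no enforced minimum delay, so a port that re-posts to itself can tick far faster than setInterval. This is a hypothetical helper, not code from the project, and it comes at a cost, since a tight message loop keeps the main thread much busier than a timer does:

```javascript
// Hypothetical helper: tick as fast as the event loop allows by having a
// MessageChannel port re-post to itself. Unlike setTimeout/setInterval,
// postMessage is not clamped to a ~4 ms minimum delay.
function fastInterval(callback) {
  const { port1, port2 } = new MessageChannel();
  let running = true;
  port1.onmessage = () => {
    if (!running) return;
    callback();
    port2.postMessage(null); // queue the next tick immediately
  };
  port2.postMessage(null); // kick off the loop
  return function stop() {
    running = false;
    port1.close();
    port2.close();
  };
}
```

Used as `const stop = fastInterval(collectSample)`, the duplicate-timestamp check above would still be needed, because ticks can arrive within the same performance.now() reading.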

So… I can’t execute a function any quicker than every 3 to 4 milliseconds. Okay, let’s find another way to work around this issue. What about the sample collection itself? If I make multiple requests to the analyzer during one interval, will the analyzer give me different data as the waveform changes in such a short time – or is the data locked in once the interval function executes? Let’s take a look.

Currently, I’m grabbing 1 sample like so:

const frequencies = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(frequencies);

I made a function that calls getByteFrequencyData 10 times and returns the average value.

function getProcessedFrequencies() {
  const samples = [];
  const count = 10;
  const binCount = analyser.frequencyBinCount;
  const frequencies = new Uint8Array(binCount);
  const processedFrequencies = [];
  for(let i = 0; i < count; i++) {
    analyser.getByteFrequencyData(frequencies);
    samples.push([...frequencies]);
  }
  let mismatch = 0;
  for(let i = 0; i < binCount; i++) {
    const values = samples.map(s => s[i]);
    mismatch += values.every(v => v === values[0]) ? 0 : 1;
    values.sort((a, b) => a - b); // numeric sort; default sort is lexicographic
    const median = values[Math.floor(values.length / 2)];
    processedFrequencies[i] = median;
  }
  if(mismatch !== 0) {
    console.log('changed', mismatch);
  } else {
    console.log('same');
  }
  return processedFrequencies;
}

I first started out with an average, and then decided to take the median of all values instead. My console often said “same” 10 to 80 times in a row, occasionally spitting out that around 2 to 35 values had changed without a signal, and 470 to 490 values had changed when data was being sent. Stability suffered as I increased the number of samples to 100, 1k, and 10k.

Is it worth capturing 80 samples in hopes of getting a change? No. Especially with a median, I’ll just be grabbing the majority value. With an average, the overall weight may shift a little, but it would be insignificant. I could capture only the unique values… but I would have 2 at most out of 80 samples. Everything comes down to timing. The longer a function executes, the more unresponsive the front-end user interface becomes. This may call for a worker process that constantly executes in the background, but an endless loop seems a bit excessive. I just want to execute code any time the analyzer picks up a change in frequencies. This “real-time” analysis is too slow to analyze the data.
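A worker might be less excessive than it sounds: an AudioWorkletProcessor’s process() callback fires for every 128-frame render quantum, off the main thread and free of timer clamping. This is only a sketch of the idea, not code from the project; the processor source is kept in a string so the browser-only part stays visible in one place.

```javascript
// Browser-only sketch (not from the project): an AudioWorkletProcessor's
// process() runs once per 128-frame render quantum, off the main thread,
// unaffected by setTimeout/setInterval clamping.
const workletSource = `
class SampleTap extends AudioWorkletProcessor {
  process(inputs) {
    // currentTime is the audio-clock time of this render quantum.
    this.port.postMessage({ time: currentTime });
    return true; // keep the processor alive
  }
}
registerProcessor('sample-tap', SampleTap);
`;

// How often would that callback fire? 128 frames per quantum:
const msPerQuantum = (128 / 44100) * 1000;
console.log(msPerQuantum.toFixed(2)); // "2.90" ms at a 44.1 kHz sample rate
```

In the page, this would be loaded with `audioContext.audioWorklet.addModule(...)` and wired up with an `AudioWorkletNode`; the main thread would then receive a message roughly every 2.9 ms instead of every 3 to 4 ms, without blocking the UI.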

I decided to play with the interval timers again. Instead of offsetting each one with 1 millisecond, I decided to offset by a fraction of a millisecond.

function startCollectingSamples() {
  for(let i = 0; i < SAMPLING_INTERVAL_COUNT; i++) {
    if(sampleIntervalIds[i]) continue;
    sampleIntervalIds[i] = window.setInterval(
      collectSample,
      SAMPLE_DELAY_MS + (i/SAMPLING_INTERVAL_COUNT)
    );
  }
}

It still didn’t seem to help much. I started playing with the number of extra timers I created. For a 30 ms segment, 1 timer got around 7 unique signals, and 2 timers got around 12. Anything more didn’t have a significant impact. So 2 is the magic number.

1 interval timer
2 interval timers
3 interval timers
10 interval timers

What I do notice is that, regardless of timer count, the signals per segment greatly decrease over time. This may go hand-in-hand with how the data seems to get completely corrupted towards the end of the packet.

As the signal comes through, I’m building up a large array of data. Perhaps I need to optimize what I’m collecting. The other end of it is drawing the graphs. As more data is available, there are more things the charts need to draw. Could it be that this is impacting the sample collection as well? I’m not focused on optimizing for performance with graphics or memory. I was only focused on collecting data faster. Something is keeping me from doing that as more data is collected.

I’ve changed the code so that I’m no longer saving the frequencies captured by the analyzer. I also reduced the channel samples to only store an array of low/high amplitudes. I haven’t seen a change in the failure rate towards the end of my packets.

The next attempt is to stop drawing the channel graph that evaluates all the bits.

This does not make sense. It’s the graph. Turning it off, my data comes through mostly fine – over and over again. Turning on the graph, the end of the data is corrupted over and over again.

Throwing on a timer, it seems that the graph usually takes less than a millisecond to draw, but sometimes gets upwards of 9 milliseconds, and one frame took 21 milliseconds. That’s definitely going to impact my sample collection.

I commented out all of the calls to work with the canvas context. The rendering is still taking 5 to 20 milliseconds. This is not good. My logic is making a negative impact on performance.

Quicker Graph

I’ve changed things around and rewrote the whole rendering logic. I was looping through channels and then looping through each segment. This caused a lot of excessive function calls compared to looping through the segments first, since I’d already have the data each channel needs to reference. Rather than drawing the background for each cell, I’m only drawing a red square if the bit isn’t expected, and drawing a single green rectangle across all channels. That reduces the green background calls to 1 out of 24 when all bits come through as expected. I also changed the segment logic to jump straight from the first rendered segment to the last rendered segment, instead of looping through all possible segments until I find one within view. It made some improvement. My rendering time is down to 1 to 4 milliseconds, with the 3 to 4 millisecond draws at the end of a packet. It’s still causing issues with capturing samples at the end of the packet.

function drawChannelData() {
  const now = performance.now();

  // Do/did we have a stream?
  if(!LAST_STREAM_STARTED) return;

  // will any of the stream appear?
  const packetBitCount = getPacketSizeEncodedBitCount();

  const packetDuration = getPacketDurationMilliseconds();
  const lastStreamEnded = LAST_STREAM_STARTED + packetDuration;
  const graphDuration = SEGMENT_DURATION * MAX_BITS_DISPLAYED_ON_GRAPH;
  const graphEarliest = now - graphDuration;
  // ended too long ago?
  if(lastStreamEnded < graphEarliest) return;

  const channels = getChannels();
  const channelCount = channels.length;

  const canvas = document.getElementById('received-channel-graph');
  
  clearCanvas(canvas);
  const ctx = canvas.getContext('2d');
  const {height, width} = canvas;

  // Loop through visible segments
  const latestSegmentEnded = Math.min(now, lastStreamEnded);
  for(let time = latestSegmentEnded; time > graphEarliest; time -= SEGMENT_DURATION) {
    // too far back?
    if(time < LAST_STREAM_STARTED) break;

    // which segment are we looking at?
    const segmentIndex = Math.floor(((time - LAST_STREAM_STARTED) / SEGMENT_DURATION));

    // when did the segment begin/end
    const segmentStart = LAST_STREAM_STARTED + (segmentIndex * SEGMENT_DURATION);
    const segmentEnd = segmentStart + SEGMENT_DURATION;

    // where is the segment's left x coordinate?
    const leftX = ((now - segmentEnd) / graphDuration) * width;

    // what bits did we receive for the segment?
    const segmentBits = GET_SEGMENT_BITS(LAST_STREAM_STARTED, segmentIndex);

    // draw segment data background
    let expectedBitCount = channelCount;
    if(segmentEnd === lastStreamEnded) {
      expectedBitCount = packetBitCount % channelCount;
    } else if(segmentEnd > lastStreamEnded) {
      continue;
    }
    drawSegmentBackground(
      ctx,
      leftX,
      expectedBitCount,
      channelCount,
      width,
      height
    )

    for(let channelIndex = 0; channelIndex < channelCount; channelIndex++) {
      // get received bit
      const receivedBit = segmentBits[channelIndex];
      // identify expected bit
      const bitIndex = channelIndex + (segmentIndex * channelCount);
      if(bitIndex >= EXPECTED_ENCODED_BITS.length) break;
      const expectedBit = EXPECTED_ENCODED_BITS[bitIndex];

      drawChannelSegmentBackground(
        ctx,
        leftX,
        channelIndex,
        channelCount,
        height,
        width,
        receivedBit,
        expectedBit
      );

      drawChannelSegmentForeground(
        ctx,
        leftX,
        channelIndex,
        channelCount,
        height,
        width,
        receivedBit,
        expectedBit
      );
    }
  }
  drawChannelByteMarkers(ctx, channelCount, width, height);
  drawChannelNumbers(ctx, channelCount, width, height)
  console.log('time', Math.ceil(performance.now() - now));
}

I think maybe the next thing to attack would be the part where I get the bits received for the channel ( GET_SEGMENT_BITS ). Given that my channel bits don’t match what is received towards the end of the packet, I think this is my source of frustration. Since both the bit receiver and the graph use this function, I’m at a loss as to why there is a difference between the two.

I figured out where I went wrong. Here is some code along with a fix:

const sampleEnd = samples[0].time;
const sampleStart = streamStarted + (segmentIndex * SEGMENT_DURATION);
// const sampleStart = samples[samples.length-1].time;
const sampleDuration = sampleEnd - sampleStart;

// not long enough to qualify as a segment
if((sampleDuration / SEGMENT_DURATION) < LAST_SEGMENT_PERCENT) return;

I was trying to identify samples that were too short to qualify as a segment. When you get down to collecting only a handful of samples, the time span between them may be too small. I needed to use the segment’s starting time to determine the range. In fact, I may also need to add 3 milliseconds as a buffer due to the interval restriction.
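A sketch of that qualification check with the buffer folded in. The function and parameter names are mine; the project’s SEGMENT_DURATION and LAST_SEGMENT_PERCENT would be passed as arguments here:

```javascript
// Does a run of samples span enough of a segment to count as that segment?
// `bufferMs` allows for the ~3 ms timer granularity the browser imposes.
function qualifiesAsSegment(segmentStart, lastSampleTime, segmentDuration,
                            lastSegmentPercent, bufferMs = 3) {
  const sampleDuration = (lastSampleTime - segmentStart) + bufferMs;
  return (sampleDuration / segmentDuration) >= lastSegmentPercent;
}

// A 30 ms segment, samples covering 19 ms of it, 70% threshold:
console.log(qualifiesAsSegment(0, 19, 30, 0.7)); // true (19 + 3 >= 21)
```

The key change from the broken version is measuring from the segment’s start rather than from the earliest sample, so a sparse handful of samples can still qualify.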

What ended up happening was that the bit receiver would discard all of the bits and continue receiving more bits – causing everything to be out of sync.

I cleaned up the graph to align the channel numbers back onto the left side and added a darker green between alternating segments, giving the graph the look of a watermelon.

Watermelon Graph

Perhaps I should make the unexpected bits pink, just to add to the theme.

Pink Watermelon Bits

Maybe we need some dark brown seeds in there as well…

Dark Brown Seeds

Well, you can’t make out that it’s a dark brown with the high contrast. I think we have done well enough. Looking back at the past three images, it appears that the first eight channels have problems detecting the correct bits. The radio is actually playing in the background, so that could be part of the problem.

What I would like to do is connect the oscillators directly to the analyser so that I don’t have to listen to the “noise” or worry about background noise affecting my detection. Work in a perfect environment to accomplish the task first, and then test it in the wild.

Well, that was pretty simple. You can see a difference in the frequencies, as everything is much stronger in overall amplitude.

Speakers & Microphone
Oscillators connected to analyzer

Although my microphone is still picking up the radio in the background, everything looks much cleaner. What’s interesting is that the direct connection is chock-full of errors!

Oscillator Analyzer connection with many errors
Microphone has less errors

I expected the exact opposite to occur. Since I’m setting frequencies at a specific time, I’m going to see if I can set the gain as well, and split the channels into two oscillators each instead of swapping frequencies on one oscillator.

Paired oscillators controlled with gain

I’m running into trouble now. I’m trying to stick with one channel and draw out the difference between the direct connection to the analyzer and using the computer’s speakers/microphone. The analyzer completes the signal every time. The microphone does not. I’ve tried many times, and I can’t get the full signal. The signal drops too far before the other oscillator can ramp up its amplitude, and ends up restarting the segment. By the time the segment starts back up, the graph is paused, showing only two segments within the new stream.

UU direct via analyzer
UU via microphone (incomplete)

It seems like using gain with two oscillators needs some extra time between segments, or an additional persistent signal needs to be present to simply say – hey, we are currently sending data. I really don’t want to waste an additional frequency to preserve the connection.

The other thing I notice is the high/low bits in the direct stream. The high bits don’t drop significantly, but the low bits drop very close to the bottom. With the microphone, the high bits drop further, but the low bits do not. Is this a microphone/speaker quality issue, or is there some magic going on with a direct connection?

With two or more channels, I can get a signal that is always on. Here is the microphone signal again. Notice that the “High” frequency for channel zero, in red, is never high. It’s never turned on – yet it’s always present at about 50% amplitude. That’s because its “Low” frequency is probably bleeding over into the frequency bin. The FFT size is 2^9, giving a frequency resolution of 93.75 Hz. Channel zero uses 304 Hz and 1,710 Hz. Those frequencies don’t feel close at all, so the bleed-over across the spectrum is a bit confusing. The channel 1 low frequency (3,116 Hz) is just as far from channel 0 high as channel 0 low is. We can see how the channel 1 low frequency changes from segment 2 to segment 3 – but there is no effect on channel 0 high. If channel 1 low has no effect, then why does channel 0 low have an effect – or does it? And why is channel 0 high showing up at all if its gain is set to zero and it never had a high value?

Channel 0 High in red shows as half amplitude when it was never flipped to be on.
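The bin arithmetic here is worth writing out. A 93.75 Hz resolution at FFT size 2^9 implies a 48 kHz sample rate (93.75 × 512 = 48,000). Assuming that, the bin a frequency lands in is simple division (the `binIndex` helper is my own illustration):

```javascript
const sampleRate = 48000;          // implied by 93.75 Hz resolution at FFT 2^9
const fftSize = 2 ** 9;
const frequencyResolution = sampleRate / fftSize; // 93.75 Hz per bin

// Which bin does a frequency fall into?
const binIndex = (hz) => Math.floor(hz / frequencyResolution);

console.log(binIndex(304));  // 3  (bin covering 281.25 - 375 Hz)
console.log(binIndex(1710)); // 18
```

Spectral leakage is also real: a tone’s energy is never confined to a single bin, and some spills into neighbors, which is one plausible explanation for amplitude showing up where no oscillator is assigned.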

I need to think through this. What exact problem am I trying to solve? What problems do I have?

  • Using a direct connection so that
    • I can test this without background noise
    • I have a perfect signal
    • I can listen to the radio without impacting my work
  • End of signal detected when only using 1 channel
  • Channel frequencies are present when they were never turned on (gain 0)
  • Direct connection has more errors in data transfer than a microphone

I’m discarding gain for now. On one channel, I seem to have the same problem regardless of whether the oscillators are controlled as pairs via gain, or one oscillator just flips its frequency. The introduction of gain nodes doesn’t appear to have made a difference.

Playing around a bit, I got the signal through with just one channel on both the speakers and the analyzer.

One channel via speakers
One channel via analyzer

I’d prefer the analyzer didn’t show as much amplitude for the direct connection. I suppose we are seeing the effect of air decreasing the amplitude as sound travels from the computer speakers to the microphone. But why is there any amplitude at all once that oscillator has already changed its frequency?

This brings me back to the FFT size again. If I drop it from 2^10 to 2^7, the direct connection has a quicker drop off. Or does it?

Direct amplitude drops quick with lower FFT Sizes.

Our amplitude appears to have dropped off fairly deep, but it has high points as well. What is bringing it back up? In fact, if I triple the segment duration, you’ll see more of the amplitude jumping up and down when the frequency isn’t in use, and solid lines when it is.

Frequency jumps up and down when not in use.

I feel like I’m just circling back to the same problems I’ve had in the past without any clear knowledge of what’s really going on. Is this another frequency (304 Hz) bleeding over into the 2,554 Hz range? Is there some kind of harmonic being picked up sometimes?

If I bring smoothing up to 0.5, it brings the amplitude up near the top – but it’s still up and down as well.

Smoothing at 0.5
Smoothing at 0.8

Smoothing seems to average out the amplitude over time. It doesn’t seem to be all that beneficial.

Let’s think this through. Why would a direct connection have worse data than going over the air with speakers and a microphone? The difference is higher amplitudes for both the high and low frequencies of each channel. This would affect how I evaluate whether a segment represents a bit as 1 or 0.

I’m adding up all of the amplitude values for a specific segment’s channel high/low frequencies, and then returning whichever frequency had the highest amplitude overall.

  samples.forEach(({pairs}) => {
    pairs.forEach((amps, channel) => {
      amps.forEach((amp, i) => {
        sums[channel][i] += amp;
      });
    });
  });
  const bitValues = sums.map((amps) => amps[0] > amps[1] ? 0 : 1);
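Here is a self-contained version of that evaluation with made-up sample data. The shapes mirror the snippet above; the sample values themselves are my own illustration:

```javascript
// Each sample holds, per channel, the [low, high] frequency amplitudes (0-255).
const samples = [
  { pairs: [[200, 100], [40, 250]] },
  { pairs: [[180, 120], [60, 255]] },
];

// Accumulate per-channel totals for the low and high frequencies.
const channelCount = samples[0].pairs.length;
const sums = Array.from({ length: channelCount }, () => [0, 0]);

samples.forEach(({ pairs }) => {
  pairs.forEach((amps, channel) => {
    amps.forEach((amp, i) => {
      sums[channel][i] += amp;
    });
  });
});

// Whichever frequency accumulated more amplitude wins the bit.
const bitValues = sums.map((amps) => (amps[0] > amps[1] ? 0 : 1));
console.log(bitValues); // [ 0, 1 ]
```

With both frequencies pinned at 255 for most samples, as described below, these sums end up nearly identical, which is why the evaluation falls apart on the direct connection.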

I’m trying to match up a frequency pair in the graph, but without reducing the channels down to 1 or 2, it’s difficult to make out. I can’t really “see” what those values are. I’m going to wire up a mouse event on the graph so I can hover/click on a channel to highlight it in the mess of channels and write out the amplitudes.

Selected Channel
Selected Channel and Segment

I’ve set up the channel graph so that I can click on a channel/segment to select it. I can also hover over the graph to highlight various frequencies in the line graph. Once I click on one of the cells, I can review the samples captured for that specific channel and segment.

Amplitude for direct to analyzer

What I’m discovering is that most of the audio from the oscillators is at the maximum amplitude for both high and low frequencies. I’m a bit confused by it. In rare cases where the incorrect value is a zero, only a few samples were less than 255.

Expected High, but was Low

So… my code that evaluates these values doesn’t really stand a chance. Somehow the analyzer thinks that both values are at the max. Why? Harmonics? Is the analyzer’s effective sampling rate actually lower than what it says it is? Is the frequency bleeding over into the next range? The only things I have control over are the FFT size and the smoothing time constant.

I’ve added some highlighting to max values, the bit index, the expected/received bit, and the sum of all amplitudes. I’m still at a loss as to how to solve this problem when connected directly to the analyzer.

More selected bit details

Looking at the incorrect data received from the microphone, I’m noticing groups of duplicate values that may be improperly influencing the overall evaluation of the bit. I drop duplicate samples if the timestamp of performance.now() matches the last sample, but from this it looks like the values change less often than the performance.now() timer does. This bit in particular has four sets of duplicate values.

Removing the duplicate values would still cause the bit to be evaluated incorrectly as a high value. The median would still be 204 vs 211, and the sums would be 999 vs 1040.

Looking more at the values direct to an analyzer, I set my segment duration to 300ms to get more samples.

I found that the high bit always remained at 255, while the low bit cycled between 158 and 255. What a relief. At least now I can see that the “off” frequency is cycling while the “on” frequency remains at its maximum the whole way through. The question is: why is the “off” frequency still running? In this case, the high bit won – but just barely. Looking at the median values alone, both the high and low values were at the maximum of 255.

Let’s take another approach to this. I can only collect a sample every 3 milliseconds. We’ve established that. How long does it take to complete a full waveform? It depends on the frequency.

waves = (time × frequency) / 1 second
      = (3 ms × 20 Hz) / 1000 ms
      = 60 / 1000
      = 0.06

Let’s make a table…

Frequency     Waves in 3 milliseconds
20 Hz         0.06
200 Hz        0.6
333 Hz        0.999
1,000 Hz      3
2,000 Hz      6
20,000 Hz     60

Lower frequencies are not good for me. There just isn’t enough time to identify a new frequency in 3 ms if it’s under 333 Hz. And some of our samples occur every 4 or 5 ms. Let’s go at it in reverse and figure out the best frequency based on a minimum time.

hz = 1 second / time
hz = 1000 ms / 5 ms
hz = 200

Milliseconds     Frequency
3                333 Hz
4                250 Hz
5                200 Hz
6                166 Hz
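The arithmetic above generalizes into two small helpers (the names are mine, just for illustration):

```javascript
// How many full cycles of a tone fit inside a sampling window?
const cyclesInWindow = (windowMs, hz) => (windowMs * hz) / 1000;

// The lowest frequency that completes at least one full cycle in the window.
const minFrequencyForWindow = (windowMs) => 1000 / windowMs;

console.log(cyclesInWindow(3, 20));    // 0.06, far too little of a wave to detect
console.log(minFrequencyForWindow(5)); // 200 (Hz)
```

Anything below roughly one full cycle per window can’t be distinguished reliably, which is exactly the cutoff the tables above land on.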

So maybe what we need to do is wait at least X milliseconds after each segment starts before taking samples for a given frequency. Although this only matters for the low frequencies we’re having an issue with. That explains why I’ve had lots of red in the lower channels.

Well… those aren’t lower channels. The spectrum here is 137 Hz to 19,262 Hz, and the failing frequencies are 8,387 Hz to 17,762 Hz at 10 ms, FFT 2^9, with the frequency resolution multiplied by 4. What’s helping the lower channels is that the bits are not changing between them – except the first 8 bits that represent the packet size, and those fail! It’s all 1’s or 0’s across the board, so there isn’t a need to wait for the frequency to change before getting an accurate reading. As for the mid-range, this one is stumping me. All the bits are the same.

So here is a thought. What if the high and low channel frequencies are spread very far apart? At the moment, I’m making a full spectrum of frequencies, but I assign them in successive order. Instead, I could make all high frequencies higher than all low frequencies.

Channel Bit       Sequential    High Frequencies Higher
Channel 0 Low     500 Hz        500 Hz
Channel 0 High    1,000 Hz      1,500 Hz
Channel 1 Low     1,500 Hz      1,000 Hz
Channel 1 High    2,000 Hz      2,000 Hz

function getChannels() {
  var audioContext = getAudioContext();
  const sampleRate = audioContext.sampleRate;
  const fftSize = 2 ** FFT_SIZE_POWER;
  const frequencyResolution = sampleRate / fftSize;
  const channels = [];
  const hzStep = frequencyResolution * FREQUENCY_RESOLUTION_MULTIPLIER;

  const frequencies = [];
  for(let hz = MINIMUM_FREQUENCY; hz < MAXIMUM_FREQUENCY; hz += hzStep) {
    frequencies.push(hz);
  }
  const frequencyCount = frequencies.length;
  const channelCount = Math.floor(frequencyCount/2);

  for(let channelIndex = 0; channelIndex < channelCount; channelIndex++) {
    channels.push([
      frequencies[channelIndex],
      frequencies[channelIndex + channelCount]
    ]);
  }

  return channels;
}

I feel like that’s just made it worse for both the microphone and direct to the analyzer. Let’s bring back what we had.

function getChannels() {
  var audioContext = getAudioContext();
  const sampleRate = audioContext.sampleRate;
  const fftSize = 2 ** FFT_SIZE_POWER;
  const frequencyResolution = sampleRate / fftSize;
  const channels = [];
  const pairStep = frequencyResolution * 2 * FREQUENCY_RESOLUTION_MULTIPLIER;
  for(let hz = MINIMUM_FREQUENCY; hz < MAXIMUM_FREQUENCY; hz+= pairStep) {
    const low = Math.floor(hz);
    const high = Math.floor(hz + frequencyResolution * FREQUENCY_RESOLUTION_MULTIPLIER);
    if(low < MINIMUM_FREQUENCY) continue;
    if(high > MAXIMUM_FREQUENCY) continue;
    channels.push([low, high]);
  }
  return channels;
}

Yes – much better. What just happened there? Our frequencies are closer together. They do much better than when they are spread far apart with other active frequencies between them.

What if I pad the channels? I currently skip every N frequency-resolution bins when assigning my frequencies. What if I also skip N bins after each channel? So for a resolution multiplier of 2, I choose bins 1 & 4 for channel 0, and bins 7 & 10 for channel 1. If I skip 1 extra bin after channel 0, channel 1 would start at bin 8 instead. It would put more space between each pair of channel frequencies. The idea is that it may help with the frequency bleed-over into other channels. Let’s see if it’s an issue by checking whether the transfer improves.
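Here is one possible reading of that padding scheme as code. The `assignChannelBins` helper is my own illustration, not the project’s implementation; it reproduces the bins 1 & 4 and 7 & 10 that a multiplier of 2 gives above:

```javascript
// Assign each channel a [low, high] pair of FFT bins. `multiplier` leaves
// bins between a channel's own low and high frequencies; `padding` leaves
// extra bins between one channel's high and the next channel's low.
function assignChannelBins(channelCount, multiplier, padding, firstBin = 1) {
  const channels = [];
  let low = firstBin;
  for (let i = 0; i < channelCount; i++) {
    const high = low + multiplier + 1;
    channels.push([low, high]);
    low = high + multiplier + 1 + padding;
  }
  return channels;
}

console.log(assignChannelBins(2, 2, 0)); // [ [ 1, 4 ], [ 7, 10 ] ]
console.log(assignChannelBins(2, 2, 1)); // [ [ 1, 4 ], [ 8, 11 ] ]
```

With padding 1, channel 1’s pair shifts one bin further out: neighboring channels move apart while each pair’s internal spacing stays the same.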

I think that helped? I’m getting error rates around 2% on the received bits. Let’s do a test.

There is definitely some improvement. However, is that just due to the padding, or because I had to use a higher frequency range to get to 24 channels? Let’s increase the frequency range and increase the multiplier.

All runs used 30 ms segments, a 304 Hz minimum frequency, and an FFT size of 2^10.

Trial     No Padding   Pad 1        Pad 2        No Padding   Pad 4
          4,890 Hz     6,970 Hz     9,370 Hz     9,370 Hz     13,344 Hz
          Mult. 2      Mult. 2      Mult. 2      Mult. 4      Mult. 2
1         2.9%         2.3%         1.7%         0.8%         0.6%
2         2.2%         2.2%         1.7%         1.6%         0.4%
3         2.7%         2.1%         2%           2.4%         3.2%
4         3.5%         1.3%         1.1%         1.3%         0.8%
5         2.8%         1%           1%           1.9%         0.7%
6         4.1%         1.3%         1.6%         0.8%         0.3%
7         5%           2%           1%           3.2%         0.3%
8         5.5%         1.6%         1.6%         1.1%         1.2%
9         3.9%         2.1%         0.3%         3.9%         0.4%
10        5.2%         1.8%         0.3%         1.4%         0.2%
Average   3.78%        1.77%        1.23%        1.84%        0.81%

The higher frequency ranges do lower the error rates, but the channel padding seems to have more of an impact. It keeps each channel’s two frequencies close together and spaces other channels’ frequencies further away.

I added a new configuration that lets me play with the channel frequency-resolution padding. I was able to get the error rate down to 0.81% on average with the encoded bits. If I took the multiplier down to 1, so that a channel’s high/low frequencies are right next to each other in the frequency bins, I started getting errors across the board around 35%.

The channels need at least 1 frequency bin between them. I even lined their frequencies up so that they were in the middle of their frequency bins to help keep them from bleeding over into each other’s bins. Actually, I just saw something new. I decided to test the edge cases too. I lined the frequencies up to the right edge of their bins.

Frequency Bin Location
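The alignment itself is plain arithmetic. Assuming bin k covers the range [k·res, (k+1)·res), the helpers below (my own names, using the 93.75 Hz resolution from earlier) place a frequency at the bin’s center or right edge:

```javascript
const frequencyResolution = 93.75; // e.g. 48 kHz sample rate / 2^9 FFT

// Bin k covers [k * res, (k + 1) * res); pick where in that span to sit.
const binCenter = (bin) => (bin + 0.5) * frequencyResolution;
const binRightEdge = (bin) => (bin + 1) * frequencyResolution;

console.log(binCenter(3));    // 328.125 (Hz)
console.log(binRightEdge(3)); // 375 (Hz)
```

The right edge of bin k is also the left edge of bin k+1, which makes the result below surprising.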

Suddenly, all of the bits came through just fine with only a multiplier of 1. I’ve now got 32 channels sending data with around a 3% error rate at 1066 baud and an effective baud of 609. That’s blown past my target! I just need to work on getting that 3% down a bit more to improve the odds of error-correcting the whole packet. I can get 28 channels through at 933 baud, but that only has an effective baud of 533. It’s almost at my 550 target, but not quite. I’d prefer a number of channels divisible by 8 to keep things easy to analyze.

Well, it’s getting late. I need to get some sleep. What did we do today?

  • Worked on collecting more samples per second.
  • Highlighted a channel frequency line by clicking or hovering over the channel graph.
  • Clicked on a cell to review its collected samples.
  • Connected oscillators directly to the analyzer to work in silent mode.
  • Identified the minimum duration in milliseconds a frequency needs before it can be fully identified.
  • Discarded duplicate timestamps.
  • Recovered/continued a stream if additional bits come through before the timer runs out.
  • Spaced channel frequencies apart from other channels.
  • Tried using higher frequencies only for high channel bits.
  • Identified that errors at the end of a packet are related to the channel graph.
Data Transfer over Web Audio API part 5
Current state of the application
