Interfacing with TFModel

Delayed update

Apologies for the delayed update; I have been trying a number of techniques to improve the SCF classifier’s performance, which are outlined in this blog post.

Training SCF model

In [2] I trained a multilayer perceptron based on the code from [3], making use of the SCF Python code from previous weeks, training with 2PSK, 4PSK, 8PSK and FSK.

I trained the SCF model with one-hot labels, meaning that for 2PSK I trained the output neurons to be [1,0,0,0], for 4PSK I trained them to be [0,1,0,0], and so on, so that the neuron with the highest output represents the modulation scheme the network believes the input data to be (only one output is ‘hot’).
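As a minimal sketch of this labelling (the class ordering here is just an assumption; the real encoding lives in tensor_scf.py [2]):

import numpy as np

# Assumed class ordering - the actual ordering is defined in tensor_scf.py [2]
CLASSES = ["2PSK", "4PSK", "8PSK", "FSK"]

def one_hot(scheme):
    """Return the one-hot label vector for a modulation scheme."""
    label = np.zeros(len(CLASSES))
    label[CLASSES.index(scheme)] = 1.0
    return label

print(one_hot("2PSK"))  # [1. 0. 0. 0.]
print(one_hot("4PSK"))  # [0. 1. 0. 0.]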

I trained and then saved the SCF ANN model as /tmp/output_graph.pb.
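For reference, exporting a trained TensorFlow (1.x) graph to a .pb file looks roughly like the sketch below; the "output" node name is an assumption, and the real export code is in tensor_scf.py [2].

import tensorflow as tf

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... train the MLP here ...

    # Bake the trained variables into constants ("freeze" the graph);
    # "output" stands in for the real output node name.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output"])

    # Write the frozen graph to /tmp/output_graph.pb
    tf.train.write_graph(frozen, "/tmp", "output_graph.pb", as_text=False)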

After the model has been trained, the script tests it with SCF test data for SNRs ranging from 20dB down to -20dB. It should achieve around 84% accuracy at an SNR of 20dB.

SCF block

You will notice in [1] there is a file entitled tensor_scf_test.grc which contains the following flow graph:

The SCF block takes a vector of 5120 complex inputs, so I make use of the Stream to Vector block prior to the SCF block to create a vector of that length.

I then set the vector length of the TFModel block to 760, to match the output of the SCF block.

tensor_scf_test.grc
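In Python, the equivalent connections would look roughly like this sketch; the inspector block constructors and import name are assumptions (the real blocks are in the dev_amc branch of gr-inspector [1]), only the vector lengths match the flow graph above.

from gnuradio import gr, blocks
import inspector  # gr-inspector (dev_amc branch) - import name assumed

class scf_classifier(gr.top_block):
    def __init__(self, source):
        gr.top_block.__init__(self, "SCF classifier")

        # Group 5120 complex samples into one vector for the SCF block
        to_vec = blocks.stream_to_vector(gr.sizeof_gr_complex, 5120)

        # Hypothetical constructors - see tensor_scf_test.grc [1] for the real blocks
        scf = inspector.scf(5120, 760)
        tfmodel = inspector.tfmodel("/tmp/output_graph.pb", 760)

        self.connect(source, to_vec, scf, tfmodel)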

Maxnet

I tried to implement a Maxnet-style neural network in tensor_scf.py, to attempt to improve beyond 84%, by essentially using one ANN per modulation scheme and then finding which ANN produces the highest value at its output neuron.

For training and testing the Maxnet, instead of feeding the whole 2D SCF into the network, a 1D projection is created using the following code:

import numpy as np

projections = []
for v in scfdata:                        # v is a 2D SCF array
    dat = []
    for z in range(v.shape[0]):          # take the peak of each alpha row
        dat.append(v[z][np.argmax(v[z])])
    projections.append(dat)              # one 1D alpha profile per SCF

The projection can be thought of simply as a shadow of the peaks of the 2D SCF graph along the alpha axis. You can see a number of these 1D graphs at the bottom of this blog post.

This results in a far smaller amount of data that needs to be passed to the neural network.

For the 2PSK ANN in the Maxnet, for instance, it is trained with 2PSK data and told to produce an output of 1, and trained with 4PSK, 8PSK and FSK data and told to produce an output of 0.
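A minimal sketch of the Maxnet decision rule, assuming each per-scheme ANN is wrapped in a hypothetical object with a predict() method returning its single output value:

import numpy as np

CLASSES = ["2PSK", "4PSK", "8PSK", "FSK"]

def maxnet_classify(profile, networks):
    """Run each binary ANN on the 1D alpha profile and pick the strongest.

    `networks` is assumed to be a list of objects with a predict() method,
    one per modulation scheme, in the same order as CLASSES.
    """
    outputs = np.array([net.predict(profile) for net in networks])
    return CLASSES[int(np.argmax(outputs))]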

Training data

It takes around 20 minutes to generate 10 SCFs for each modulation scheme at each SNR, for both a training set and a testing set.

This results in 10 * 4 * 9 * 2 = 720 SCFs in total (10 SCFs for each of the 4 modulation schemes at each of the 9 SNRs in the dataset, with two copies of everything: one set for training and one for testing).

In order to get enough training data it was necessary to leave my laptop on overnight; it utilises all 4 cores of my i7 throughout the generation of data, as I made use of Python’s multiprocessing support.

It was necessary to make use of multiple processes to generate SCFs more efficiently, rather than making use of threads, as the standard Python interpreter has a global interpreter lock (GIL) which prevents high performance threading.
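The generation loop follows the usual multiprocessing.Pool pattern, roughly as sketched below; compute_scf() is a placeholder for the real SCF generation code in [2].

from multiprocessing import Pool
from itertools import product

MODULATIONS = ["2PSK", "4PSK", "8PSK", "FSK"]
SNRS = range(-20, 25, 5)               # 9 SNRs from -20dB to 20dB in 5dB steps

def compute_scf(modulation, snr):
    """Placeholder for the real SCF generation code in tensor_scf.py [2]."""
    raise NotImplementedError

def worker(args):
    modulation, snr = args
    return compute_scf(modulation, snr)

if __name__ == "__main__":
    jobs = list(product(MODULATIONS, SNRS)) * 10   # 10 SCFs per combination
    with Pool(4) as pool:                          # one worker per core, no GIL contention
        scfs = pool.map(worker, jobs)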

You can see in the tensor_scf.py file [2] how I also tested the performance of a single ANN at classifying all modulation schemes.

Papers

I have been re-reading a number of papers to try to determine the issues with my SCF implementation; these are listed below.

I am wondering now if the main issue is related to not using a sliding DFT.

  1. Automatic Modulation Classification and Blind Equalization for Cognitive Radios – Barathram Ramkumar
  2. A New Approach to Signal Classification Using Spectral Correlation and Neural Networks – A. Fehske, J. Gaeddert and J. H. Reed
  3. Automatic modulation classification for cognitive radios using cyclic feature detection – Barathram Ramkumar

Sliding DFT

Based on the excellent tutorial on sliding DFTs here, I created a simple Python implementation below, which I am planning on integrating back into my SCF implementation. The idea is that when a length-N window slides forward by one sample, each DFT bin can be updated as X'[k] = (X[k] - x_old + x_new) * exp(j*2*pi*k/N), rather than recomputing a full FFT.

#!/usr/bin/python3
import math
import numpy as np

# A length-99 window, and the same window advanced by one sample
a = np.arange(1, 100)
b = np.arange(1 + 1, 100 + 1)

afft = np.fft.fft(a)

# Sliding DFT update: remove the sample leaving the window (a[0]),
# add the sample entering it (b[-1]), then rotate each bin by exp(j*2*pi*k/N)
s = [(afft[k] - a[0] + b[len(b) - 1]) * np.exp(1j * 2 * math.pi * k / len(a))
     for k in range(len(a))]

print(s)                 # should match the FFT of b
print(np.fft.fft(a))
print(np.fft.fft(b))

Improving accuracy

Now 84% accuracy at 20 dB isn’t especially good, so I’m working on techniques to substantially improve this performance.

I have been generating SCFs from multiple parts of a signal, creating 1D alpha-profiles for them, and stepping through them in a Linux video player to ensure they remain similar across the signal.

Symbol rate

I found that the symbol rate used by the modulator seems to substantially affect my SCF results. The graph becomes more recognisable at a higher rate of around 20 samples per symbol, looking more similar to those found in the papers.
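As a rough illustration of what “samples per symbol” means here, this is a minimal NumPy sketch of a 2FSK baseband generator (the deviation and bit count are arbitrary assumptions, not the values used for the plots below):

import numpy as np

def gen_2fsk(bits, sps, deviation=0.1):
    """Generate complex-baseband 2FSK with `sps` samples per symbol;
    `deviation` is the frequency offset as a fraction of the sample rate."""
    freqs = np.where(np.repeat(bits, sps) > 0, deviation, -deviation)
    phase = 2 * np.pi * np.cumsum(freqs)      # integrate frequency to get phase
    return np.exp(1j * phase)

bits = np.random.randint(0, 2, 256)
sig_20 = gen_2fsk(bits, sps=20)   # SCF features well resolved
sig_2 = gen_2fsk(bits, sps=2)     # SCF features hard to recognise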

20 samples per symbol, 2FSK

5 samples per symbol, 2FSK

2 samples per symbol, 2FSK

At 2 samples per symbol, it has become unrecognisable.

  1. https://github.com/chrisruk/gr-inspector/tree/dev_amc/examples
  2. https://github.com/chrisruk/scf/blob/master/tensor_scf.py
  3. https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py