Keras, TensorFlow and GNU Radio Blocks

I recently had to switch from TFLearn to Keras (https://keras.io/), as I was having problems with the accuracy from my trained graph files. I found that with Keras you can easily convert the graph to an inference version with the dropouts removed, which means I get the same accuracy as during training.

I am also now making use of TensorFlow Serving, which lets you freeze the graph with a single input and output and then reload it easily. This means you need at least TensorFlow 0.10, due to the imports we use.
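As a rough sketch of the freezing step, here is a minimal example. It uses the TF 1.x-style graph_util API through tf.compat.v1 so it runs on a current install; the exact imports in 0.10-era code differ slightly, and the graph here is a stand-in, not the actual model:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    # A toy graph with one named input and one named output
    x = tf.placeholder(tf.float32, shape=[None, 2], name="input")
    w = tf.Variable(tf.ones([2, 1]), name="weights")
    y = tf.identity(tf.matmul(x, w), name="output")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Bake the variable values into Const nodes so the graph
        # reloads standalone from a single file
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, g.as_graph_def(), ["output"])

# The frozen GraphDef is self-contained: no Variable ops remain
print(any(n.op.startswith("Variable") for n in frozen.node))  # False
```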

There are now two automatic modulation classification blocks I’ve added, which are described below.

FAM model

You can run the FAM model using the flow graph below, which is found in the examples folder of gr-inspector, in the dev_amc branch.

amc_fam.grc
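For context, FAM here is the FFT Accumulation Method (as implemented in gr-specest), which estimates the cyclic spectral-correlation features that distinguish modulation schemes. As a rough illustration of the underlying quantity, not the FAM algorithm itself, here is a toy cyclic autocorrelation estimate in numpy:

```python
import numpy as np

def cyclic_autocorr(x, alpha, tau):
    """Estimate R_x^alpha(tau) = <x[n+tau] x*[n] exp(-j 2 pi alpha n)>."""
    n = np.arange(len(x) - tau)
    return np.mean(x[n + tau] * np.conj(x[n]) * np.exp(-2j * np.pi * alpha * n))

# A BPSK-like signal at 8 samples per symbol shows a peak at the
# symbol-rate cycle frequency alpha = 1/8, which white noise lacks.
rng = np.random.default_rng(0)
sps = 8
symbols = rng.choice([-1.0, 1.0], size=256)
bpsk = np.repeat(symbols, sps)            # rectangular pulse shaping
noise = rng.standard_normal(len(bpsk))

peak_bpsk = abs(cyclic_autocorr(bpsk, 1.0 / sps, sps // 2))
peak_noise = abs(cyclic_autocorr(noise, 1.0 / sps, sps // 2))
print(peak_bpsk > peak_noise)  # True
```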

I have found that the FAM model is very sensitive to the amount of interpolation used during training. For instance, if an interpolation of 2 is used during training and you don't apply any interpolation to the output of the modulator at inference time, the accuracy will be significantly reduced.
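To make the mismatch concrete, here is a crude sketch in numpy; np.repeat is a zero-order hold used purely for illustration, where a real flow graph would use a proper resampler:

```python
import numpy as np

# Interpolating by 2 doubles the samples per symbol, so a model trained at
# one rate sees stretched features at inference time if the rates differ.
sps = 8
symbols = np.array([1.0, -1.0, 1.0, -1.0])
baseband = np.repeat(symbols, sps)        # 8 samples per symbol
interpolated = np.repeat(baseband, 2)     # after 2x interpolation

print(len(baseband) // len(symbols))      # 8
print(len(interpolated) // len(symbols))  # 16
```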

You can create the FAM model and generate the TensorFlow graph file using the code below:

https://github.com/chrisruk/models

With the FAM model, I found I had to prefix the training command with

export CUDA_VISIBLE_DEVICES="";

to prevent TensorFlow from using my GPU, as my GPU does not have enough memory to train the CNN.

You need to install the excellent gr-specest block from https://github.com/kit-cel/gr-specest in order to make use of the TensorFlow FAM model.

CNN model

For the CNN model, a ‘stream to vector’ block is placed before the CNN TensorFlow block, to produce a 128 sample input for the classifier.
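In numpy terms, the 'stream to vector' step is just reshaping the sample stream into fixed-length rows (a sketch, not GNU Radio code):

```python
import numpy as np

# Chop a sample stream into fixed-length vectors (here 128 complex
# samples) matching the classifier's input size.
VLEN = 128
stream = np.arange(512, dtype=np.complex64)   # stand-in for received samples
vectors = stream[: len(stream) // VLEN * VLEN].reshape(-1, VLEN)

print(vectors.shape)  # (4, 128): four classifier inputs of 128 samples each
```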

amc_cnn.grc

Creating the models

You can create the models using the code here.

You need to add two files, named music.wav and music2.wav, to the directory you pull the code into. Both should be
16-bit files with a 44.1 kHz sample rate. These files are used for training and testing respectively with the analog modulation schemes.
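If you don't have suitable audio to hand, a stand-in file in the required format can be generated with the standard-library wave module (a hypothetical helper; any real music file with this format works too):

```python
import math
import struct
import wave

def write_tone(path, seconds=1.0, freq=440.0, rate=44100):
    """Write a mono 16-bit sine tone at the required 44.1 kHz rate."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(rate)     # 44.1 kHz
        n = int(seconds * rate)
        frames = b"".join(
            struct.pack("<h", int(32767 * 0.5 *
                                  math.sin(2 * math.pi * freq * t / rate)))
            for t in range(n))
        w.writeframes(frames)

write_tone("/tmp/music.wav")
```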

Both the FAM and CNN models are trained with both 8 and 16 samples per symbol.

You can see in the data_generate.py file that I used the following modulation schemes for training and testing:

MOD = ["fsk", "qam16", "qam64", "2psk", "4psk", "8psk", "gmsk", "wbfm", "nfm"]
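One plausible way that list maps to one-hot training labels (illustrative; the repository may encode labels differently):

```python
import numpy as np

MOD = ["fsk", "qam16", "qam64", "2psk", "4psk", "8psk", "gmsk", "wbfm", "nfm"]

def one_hot(scheme):
    """Return a one-hot label vector for a modulation scheme name."""
    vec = np.zeros(len(MOD), dtype=np.float32)
    vec[MOD.index(scheme)] = 1.0
    return vec

print(one_hot("qam16"))  # 1.0 in position 1, zeros elsewhere
```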

I recommend running the scripts like so:

export CUDA_VISIBLE_DEVICES=""
./fam_generate.py
./cnn_generate.py

The models will be written to /tmp/fam and /tmp/cnn.

Both should achieve 80%+ accuracy at the highest SNR level.
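A quick way to check that claim per SNR bucket (an illustrative helper with made-up data, not the repository's evaluation code):

```python
import numpy as np

def accuracy_by_snr(y_true, y_pred, snrs):
    """Return classification accuracy for each distinct SNR value."""
    out = {}
    for snr in np.unique(snrs):
        mask = snrs == snr
        out[snr] = np.mean(y_true[mask] == y_pred[mask])
    return out

y_true = np.array([0, 1, 2, 0, 1, 2])
y_pred = np.array([0, 1, 1, 0, 1, 2])
snrs   = np.array([-4, -4, -4, 18, 18, 18])

acc = accuracy_by_snr(y_true, y_pred, snrs)
print(acc[18], acc[-4])  # highest SNR bucket should be the most accurate
```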

More work is needed on tweaking the image augmentation used in the CNN model.
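One direction that may suit complex baseband vectors better than generic image transforms: random phase rotation and circular time shift, both of which leave the modulation class unchanged. A sketch of the idea, not the repository's current augmentation:

```python
import numpy as np

def augment(vec, rng):
    """Apply a random carrier phase and circular timing shift."""
    phase = np.exp(2j * np.pi * rng.random())   # random phase rotation
    shift = int(rng.integers(0, len(vec)))      # random circular shift
    return np.roll(vec * phase, shift)

rng = np.random.default_rng(1)
x = np.exp(2j * np.pi * 0.05 * np.arange(128)).astype(np.complex64)
y = augment(x, rng)

# Shape and sample magnitudes are preserved; only phase/timing change
print(y.shape, np.allclose(np.abs(y), 1.0))
```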

Blocks

You can obtain the FAM and CNN blocks from here.

ToDo

  • Determine why the TensorFlow model size is so large (more than 2GB)
  • Take PMT input with CNN block
  • Work on visualisation coding
