Posts by SV1BDS


    Some more results regarding noise level: 12.3 Hz is achieved with variable noise level. Without noise 9.5 Hz is achieved; with noise at 0.1 of the signal amplitude, 16 Hz.


    I noticed that the 0.1 sec frequency estimates follow a normal distribution (as expected). For every file that I test there are some tens of 0.1 sec samples. The mean error of 25 Hz found until now was computed per 0.1 sec sample. Now I compute the mean estimate for each voice file and then measure its difference from the real frequency. In this way the results are more realistic: they do not depend on the distribution of the error but only on the mean value over all the samples. The new mean error is found to be only 12.3 Hz.
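    The per-file averaging can be sketched in a few lines of plain Python (the per-sample estimates below are made-up numbers around a hypothetical 150 Hz shift):

```python
from statistics import mean

true_freq = 150.0  # hypothetical real frequency shift in Hz
# Made-up per-0.1 sec estimates scattered around the true shift
estimates = [138.0, 161.0, 147.0, 170.0, 129.0, 155.0]

# Old figure: mean of the per-sample absolute errors
per_sample_error = mean(abs(e - true_freq) for e in estimates)

# New figure: average all estimates for the file first, then compare
file_error = abs(mean(estimates) - true_freq)
```

    Because the individual errors are roughly symmetric around zero, they largely cancel in the file mean, which is why the per-file figure can sit well below the per-sample figure.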


    The SSB offset frequency model has been added to a gnuradio embedded Python block. It takes as input 800 samples (0.1 sec) and returns the frequency offset. It will be used in a complete SSB receiver for QO100.
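    As a stand-alone illustration of the block's interface (the 8 kHz rate follows from 800 samples per 0.1 sec; the model call is replaced by a stub here, so the names are only placeholders):

```python
FRAME = 800  # 0.1 sec at 8 kHz

def frames(audio):
    """Yield consecutive 800-sample frames; an incomplete tail is dropped."""
    for i in range(0, len(audio) - FRAME + 1, FRAME):
        yield audio[i:i + FRAME]

def estimate_offset(frame):
    # Stub standing in for the trained model's prediction (Hz)
    return 0.0

audio = [0.0] * 2500  # placeholder audio stream
offsets = [estimate_offset(f) for f in frames(audio)]
```

    Inside a real embedded Python block, the work function would buffer incoming samples the same way and output one offset per full frame.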


    This is a dropbox link.

    A working demo package has been built to demonstrate SSB offset frequency finding. This is not a practical application for everyday usage but a demo that can be used without special hardware. I personally use WSL Ubuntu 22.04.

    There are 3 files: two Python3 programs and the AI model file. The model file is not permitted to be used in a commercial application. To use the programs, the user must make a websdr recording from the QO100 websdr web interface. The system can find frequency shifts from -300 Hz to +300 Hz. In order not to lose audio information the bandwidth can be increased. Before use, be sure that you have run pip3 install numpy librosa tensorflow (if pip3 does not exist, install it as a package). Then you can find the frequency shift by running the prediction program on the recording: python3 <script>.py websdr-recording-xxx.wav, where <script> is the prediction program from the package.

    If all is OK the signed mean frequency will be displayed. Then we can convert the downloaded file by calling the gnuradio python3 converter (be sure that you have gnuradio installed). The syntax is, for example:

    python3 <converter>.py -i websdr-recording-xxx.wav -o outfile.wav -f 177

    Please note that 177 is an example of the frequency shift that the program applies. It must be the same value as the mean value found before, but with opposite sign. The outfile.wav can be played in order to determine if it is on the proper frequency.

    George SV1BDS


    Does the artificial intelligence understand the speech?


    Hi Detlev,

    I come back to your question. In general, AI can understand speech. There exist Python modules that you can feed with voice in various languages and they return the content as text. But these will probably fail to decode if the voice is shifted even by a few Hz. New training for recognising voice that is shifted a few Hz would be required, and the results may be poorer than the existing ones. Of course, once you have text, translation can be implemented and cross-language QSOs become a new reality in ham radio communications. One more idea I had during development of the current solution is a voice-activated squelch that recognises human voice and opens the audio. In general, with AI it seems that new applications can be implemented that were not possible until now.


    For comparison: I used a CNN with 64, 128, 256 neurons and a final layer of 64. 26000 different voices with 824000 training samples of 0.1 sec (22.2 hours). No GPU and no external funding, during vacations!!!


    I want to upload a file of less than 5 Mb that does artificial intelligence SSB frequency beat. The file limit is 1 Mb. Is there any plan to increase the file limit?


    A new model has been implemented, trained with variable levels of noise added to the signals. It requires much more training time (9 hours vs 20 minutes) but gives better results. It is better suited for audio and video. The produced model now has a mean error of 25 Hz and a squared error of 1376. It must be mentioned that the data used for testing performance was not used for training.


    Noise is added during learning (up to 0.1 of the voice amplitude) and the model can now find the frequency shift in a noisy environment. Performance dropped to a mean error of 45 Hz, which is 7.5% of the 600 Hz range. A gnuradio flowchart that will use this is planned.
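    The kind of noise injection described above can be sketched with NumPy (the signal here is a synthetic tone standing in for voice, and the exact scaling is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000                                          # assumed sample rate
x = np.sin(2 * np.pi * 300 * np.arange(800) / fs)  # synthetic 0.1 sec "voice"

noise_level = 0.1                                  # fraction of the signal amplitude
noise = noise_level * np.max(np.abs(x)) * rng.standard_normal(x.size)
noisy = x + noise
```

    Each training example gets a fresh noise realisation, so the model sees the same voice sample under many noise conditions.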


    I use Greek speech to check the error and the performance is the same.

    By the way, the current mean error has dropped to about 32 Hz.

    Hi Detlev,

    In this case it does not understand the speech; it pays attention to predicting the frequency shift. The prediction range is -300 to +300 Hz. For now I have trained it with English voices, and one of the tests will be to try it with other languages. Also, the time given to the model is 0.1 sec, which is too short to understand speech. Generally speaking, it tries to predict what it is trained to predict; in my case, the frequency shift.


    The current result from the artificial intelligence machine learning SSB auto tune is that with a 0.1 sec speech sample, a mean error of 40 Hz is achieved. Possible applications, besides auto-tune SSB receiving, include automatic SSB voice recording, for example on the satellite QO100.


    I have become involved with artificial intelligence and am trying to write software. One idea I have is to use AI to automatically center an SSB receiver. Python code using tensorflow has been written that tries to do machine learning in order to achieve this. For learning, a set of MP3 files from Mozilla was found and an initial learning run was done, proving that it can learn. Sounds it was trained on it can predict with an error of about 10-15 Hz; unknown sounds it currently predicts with an error of about 100 Hz. It seems that the errors come mainly from the difference in pitch between men and women. As the results depend mainly on the samples used for training, the next steps will be to download more training data and try to improve it.
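    For illustration, training pairs with a known shift can be produced by frequency-shifting a recording via its analytic signal, which is the same effect an SSB mistuning has. A NumPy sketch with a synthetic tone (this is not necessarily how the actual training data was made):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                      # 1 second of samples
x = np.sin(2 * np.pi * 440 * t)             # test tone at 440 Hz

# Analytic signal via an FFT-based Hilbert transform
X = np.fft.fft(x)
h = np.zeros(x.size)
h[0] = h[x.size // 2] = 1
h[1:x.size // 2] = 2
analytic = np.fft.ifft(X * h)

shift = 50.0                                # known frequency offset (the label)
shifted = np.real(analytic * np.exp(2j * np.pi * shift * t))

# The spectral peak moves from 440 Hz to 490 Hz (1 Hz bins here)
peak = int(np.argmax(np.abs(np.fft.rfft(shifted))))
```

    Applying many random shifts to each voice file gives labelled (audio, offset) pairs for regression training.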

    GNUradio python code for spread spectrum communication using USRP

    Here is a sample GNUradio Python code for spread spectrum communication using USRP:

    #!/usr/bin/env python
    # Sketch of a DSSS transmit/receive flowgraph (example only)
    import numpy as np
    from gnuradio import gr, blocks, digital, uhd
    from gnuradio import filter as gr_filter
    from gnuradio.filter import firdes

    class my_top_block(gr.top_block):
        def __init__(self):
            gr.top_block.__init__(self)
            samp_rate = 1e6
            # Set up the USRP sink (TX) and source (RX)
            self.u = uhd.usrp_sink("", uhd.stream_args(cpu_format="fc32", channels=[0]))
            self.u2 = uhd.usrp_source("", uhd.stream_args(cpu_format="fc32", channels=[0]))
            self.u.set_samp_rate(samp_rate)
            self.u2.set_samp_rate(samp_rate)
            # Generate a random bit stream to be transmitted
            bits = np.random.randint(0, 2, 100).tolist()
            self.src = blocks.vector_source_b(bits, repeat=True)
            # BPSK modulation: map bits {0,1} to symbols {-1,+1}
            self.mod = digital.chunks_to_symbols_bc([-1.0, 1.0])
            # Direct sequence spreading: 10 random +/-1 chips per symbol
            chips = (2 * np.random.randint(0, 2, 10) - 1).astype(float).tolist()
            self.rep = blocks.repeat(gr.sizeof_gr_complex, len(chips))
            self.chip_src = blocks.vector_source_c(chips, repeat=True)
            self.spread = blocks.multiply_cc()
            # Low-pass filter to smooth out the edges of the signal
            taps = firdes.low_pass(1, samp_rate, 100e3, 10e3)
            self.lpf = gr_filter.fir_filter_ccf(1, taps)
            # TX chain: bits -> BPSK -> spread -> filter -> USRP sink
            self.connect(self.src, self.mod, self.rep, (self.spread, 0))
            self.connect(self.chip_src, (self.spread, 1))
            self.connect(self.spread, self.lpf, self.u)
            # RX chain: USRP source -> filter -> sink (despreading not shown)
            self.rx_lpf = gr_filter.fir_filter_ccf(1, taps)
            self.connect(self.u2, self.rx_lpf, blocks.null_sink(gr.sizeof_gr_complex))

    if __name__ == '__main__':
        tb = my_top_block()
        try:
            tb.run()
        except KeyboardInterrupt:
            tb.stop()


    This code generates a random bit stream, applies BPSK modulation to it, and then applies direct sequence spread spectrum using a random spreading code. The resulting spread spectrum signal is then low-pass filtered and transmitted using a USRP sink. The same signal is also received by a USRP source, low-pass filtered, and then processed. Note that this code is just an example and may need to be modified to suit your specific use case.
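    The spreading and despreading idea in the flowgraph can be checked without any hardware; a NumPy-only sketch (parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 8)                # data bits
symbols = 2 * bits - 1                      # BPSK: 0 -> -1, 1 -> +1
chips = 2 * rng.integers(0, 2, 10) - 1      # +/-1 spreading code, 10 chips/bit

# Spread: each symbol is multiplied by the whole chip sequence
tx = np.repeat(symbols, chips.size) * np.tile(chips, symbols.size)

# Despread: multiply by the same code and integrate over each bit period
rx = (tx * np.tile(chips, symbols.size)).reshape(symbols.size, chips.size).sum(axis=1)
recovered = (rx > 0).astype(int)
```

    Since chips squared is all ones, despreading restores each symbol scaled by the spreading gain of 10, and the original bits come back from the sign.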

    Asked chatgpt: GNUradio python code for usb communication using Pluto for qo100

    Here's an example GNU Radio Python code for USB communication with a Pluto SDR for QO-100 satellite operation:

    #!/usr/bin/env python
    from gnuradio import analog, gr, uhd

    class pluto_tx(gr.top_block):
        def __init__(self, freq, samp_rate, gain):
            gr.top_block.__init__(self, "Pluto TX")
            # Note: a real Pluto is usually driven through the gr-iio blocks;
            # the UHD sink is kept here as in the original example.
            self.u = uhd.usrp_sink("", uhd.stream_args(cpu_format="fc32", otw_format="sc16", channels=[0]))
            self.u.set_samp_rate(samp_rate)
            self.u.set_center_freq(freq, 0)
            self.u.set_gain(gain, 0)
            self.u.set_antenna("TX/RX", 0)
            # Continuous 1 kHz tone as the transmit signal
            self.source = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, 1000, 1.0)
            self.connect(self.source, self.u)

    if __name__ == '__main__':
        center_freq = 2400e6  # QO-100 uplink band
        samp_rate = 2.4e6     # Sample rate in Hz
        gain = 40             # Transmit gain in dB
        tb = pluto_tx(center_freq, samp_rate, gain)
        try:
            tb.run()
        except KeyboardInterrupt:
            tb.stop()

    In this code, we define a pluto_tx top block that drives the SDR through a UHD sink. We set the center frequency to 2400 MHz (the QO-100 uplink band), the sample rate to 2.4 MHz, and the gain to 40 dB. A signal source block then generates a continuous 1 kHz tone and is connected to the sink, so a continuous wave signal is transmitted that can be received through QO-100.

    Note that this code is just an example, and you may need to modify it to suit your specific needs.