First Animal/Bird Language Translator Device

Enabling you to talk with nature.

All living beings, be they flora or fauna, have a unique way of communicating and expressing their emotions using various sounds and signals. We humans are blessed with the capability to observe, learn and speak many languages, and we possess an innate curiosity to know more about our ecosystem and the world around us. We wish to understand nature’s feelings and, at the same time, share our own.

This made me think: why not observe, research and collect the sounds of animals and birds, extract the emotions in those sounds, and make a device that translates their language to ours and vice versa? That would be amazing. Thanks to research by scientists, several open-source datasets of nature’s communication sounds are available for anyone to use.

So, by employing these open-source datasets, you can develop and train an ML model that understands the emotions in different animal sounds and classifies them accordingly. You will also be able to deploy the ML model for translating human language into nature’s language and back, enabling efficient communication.

Doesn’t it sound awesome? So, without wasting any time, let’s start our beautiful journey.

Bill of Materials 


Preparing Datasets

The ML model needs to be fed with the correct data regarding sounds and emotions. You can download various open-source datasets of bird and animal sounds online, for instance, the sounds an elephant makes to communicate motion, love, care, anger and so on.

After downloading such animal sounds, compile them into datasets to train the ML model. You can use tools such as TensorFlow, Edge Impulse, SensiML, Teachable Machine and many others for this purpose. Here, I am using Edge Impulse.

Now, open the Raspberry Pi terminal and install the Edge Impulse dependencies using the following commands. Then, in Edge Impulse Studio, create a new project named Fauna Translator.

curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm

After this, connect the Raspberry Pi to the Edge Impulse project by running

edge-impulse-linux

Next, in the terminal, select the project name, after which you are given a URL for feeding in the datasets. At that URL, upload the captured animal sounds and label them appropriately based on different emotions (angry, glad, hungry) or expressions (“let’s go”, “I love you”).
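
If you prefer batch-uploading pre-recorded clips instead of capturing them live, the Edge Impulse uploader CLI can push labelled files into the project. A minimal sketch, assuming the edge-impulse-cli npm package is also installed and the WAV clips are sorted into per-emotion folders (the folder names here are hypothetical):

edge-impulse-uploader --category training --label angry angry/*.wav
edge-impulse-uploader --category training --label hello hello/*.wav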

NOTE: Before proceeding with sound data capture, first mount the AIY Voice Bonnet onto the Raspberry Pi as shown in Fig 1.

Fig 1. Setting the Voice Bonnet
Fig 2. Connecting the Raspberry Pi with the project

Training ML Model

Select the processing and learning blocks for training the ML model. Here we use Spectrogram as the processing block and Keras as the learning block. Using these, extract the audio features that the ML model learns from the datasets. Then test the model and keep refining it until you are satisfied with its accuracy.
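
For intuition, the Keras learning block that Edge Impulse generates for audio is typically a small convolutional network running over the spectrogram features. The sketch below is illustrative only; the frame count, filter sizes and label set are assumptions, not the exact network Edge Impulse produces.

import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, BINS = 50, 13   # assumed spectrogram shape (time frames x frequency bins)
NUM_CLASSES = 4         # assumed labels, e.g. angry, glad, hungry, hello

model = models.Sequential([
    # Spectrogram features arrive flattened; restore the frame layout
    layers.Reshape((FRAMES, BINS), input_shape=(FRAMES * BINS,)),
    layers.Conv1D(8, 3, padding='same', activation='relu'),
    layers.MaxPooling1D(2),
    layers.Conv1D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dropout(0.25),
    # One softmax output per emotion/expression label
    layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()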

Fig 3. Setting processing blocks
Fig 6. Accuracy output during ML model testing

Deploying Model 

To deploy the ML model, go to the Deployment option and select Linux boards. Then install the required libraries and clone the Edge Impulse Linux Python SDK:

sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev

pip3 install edge_impulse_linux -i https://pypi.python.org/simple

git clone https://github.com/edgeimpulse/linux-sdk-python
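
To verify the deployment before writing any code of your own, you can run the audio classification example bundled with the SDK. This assumes you have already downloaded the .eim model file from the Deployment page (the model path below is hypothetical):

python3 linux-sdk-python/examples/audio/classify.py ~/fauna-translator/modelfile.eim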

Coding 

Create a Python file called animal_translate.py and use eSpeak in the code to speak out the translation, so that humans can understand the animal sounds. Add an if condition that checks the confidence of each emotion in the ML model’s output. If the confidence for a particular label is 98% or more, the sound is taken to match that label’s description. For example, if an animal’s sound scores at least 98% for the label “hello”, then the animal is indeed saying “hello”.
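
The listing below is a minimal sketch of such a script. It assumes the Edge Impulse Python SDK’s AudioImpulseRunner and the espeak command-line tool are installed; the label-to-phrase mapping is hypothetical and should match the labels you trained.

import subprocess
import sys

from edge_impulse_linux.audio import AudioImpulseRunner

MODEL_PATH = sys.argv[1]   # path to the downloaded .eim model file
THRESHOLD = 0.98           # the 98% confidence level described above

# Hypothetical mapping from trained labels to spoken English phrases
PHRASES = {
    'hello': 'Hello!',
    'angry': 'I am angry.',
    'hungry': 'I am hungry.',
    'lets_go': "Let's go!",
}

def speak(text):
    # Speak the translation aloud through the espeak TTS tool
    subprocess.run(['espeak', text])

with AudioImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    labels = model_info['model_parameters']['labels']
    print('Loaded model with labels:', labels)

    # classifier() records from the microphone (the SDK is assumed to
    # pick the default device) and yields a result per audio window
    for res, audio in runner.classifier():
        for label, score in res['result']['classification'].items():
            if score >= THRESHOLD:
                print('%s (%.2f)' % (label, score))
                speak(PHRASES.get(label, label))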

Fig 8. Python code
Fig 9. Code checking the output values and translating the sound into human language

Testing 

Download the .eim ML model file, open the terminal and run animal_translate.py, passing the path of the ML model as an argument. Select an animal, say an elephant. Now, whenever the elephant makes a sound, the ML model will capture it in real time and translate what it is saying for you to understand.
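
For example, with the hypothetical model path used earlier:

python3 animal_translate.py ~/fauna-translator/modelfile.eim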

Congrats! You have created the world’s first animal and bird language translator. Now you can understand nature’s language and even talk with it.

Note: If you face an error while downloading, try turning off the antivirus for a few minutes.

Download Code and ML model