Hearing Substitution Using Haptic Feedback


Story

Deaf People Parenting Hearing Children

Ever wonder how difficult it is for deaf people to parent hearing, speaking children, from infancy through the teenage years and beyond? These parents face unique challenges in every phase of parenting.

Newborn babies do not speak; the only way they know how to communicate is to cry. They cry when they are hungry, fussy, or in pain. Now imagine a deaf parent just a short distance from the baby's crib: they hear nothing. The baby's cry goes unheard. The baby might be in real pain or discomfort that needs immediate attention, yet the scream will not be heard. How about a device that listens for the baby's cry and notifies deaf parents in real time via haptic feedback on a wearable device?

Pre-schoolers are different from newborns. They learn new ways to communicate and begin to speak, but the problem for deaf parents remains the same: they cannot hear what their little ones are trying to say. On the positive side, young kids can learn to use a smartphone or tablet and can distinguish different images or icons. Yes, I am talking about Augmentative and Alternative Communication (AAC) for deaf parents, not for hearing-impaired children!

How about an app running on a tablet through which young children can communicate with their deaf parents? The app sends a haptic signal to a wearable device, and the deaf parent learns to recognize the signal pattern.

Older children may prefer to just talk instead of tapping on icons. How about mapping the same actions through voice?

Things used in this project

Hardware components

Arduino Nano 33 BLE Sense × 1 (Arduino.cc)
Pimoroni BME680 Breakout × 1 (Pimoroni). I used the SparkFun BME680, which is very precise.
Li-Ion Battery 1000mAh × 1 (Newark, Adafruit)
TP4056 charging module × 1
5V voltage boost converter × 1 (Amazon)
TP4056 module with 5V boost converter × 1 (Amazon). You can use this instead; it has both the charging module and the 5V booster.
Neosensory Buzz × 1 (Neosensory)
Apple iPad × 1 (Apple), or any iOS device

Software apps and online services

Arduino IDE (Arduino)
Apple Xcode (Apple)

Hand tools and fabrication machines

3D Printer (generic)

Baby Connect

Introducing "Baby Connect", an app running on tablet. App is connected to Neosensory Buzz over the bluetooth and sends haptic feedback via vibration when an icon is tapped on the screen or a word is spoken holding down the on-screen record button.

iOS App running on iPad

The system also comes with a device equipped with a microphone and an environmental sensor to monitor indoor air quality (IAQ), temperature, humidity, and ambient light. The microphone records surrounding sounds and analyzes them with a machine learning model to determine whether the baby is crying.

Device with microphone and BME680 ENV sensor

Rechargeable

The Baby Connect device is powered by a 1000mAh Li-Ion battery with a TP4056 charging module and a 5V booster, which powers the Nano 33 BLE Sense. I tested the device connected to the app over BLE, and the battery lasted for 32 hours, which is pretty good in my opinion.
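As a back-of-the-envelope check, that works out to an average draw of roughly 1000 mAh / 32 h ≈ 31 mA.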

Thinking behind the app

Augmentative and alternative communication (AAC) is a way for a child to communicate when the child does not have the ability to use speech as a primary means of communication. While researching AAC, I realized the same technique can be applied to hearing-impaired parents. I then started thinking about how to translate AAC actions (icons or images) into vibration; I wanted to come up with a coded pattern that would represent the different AAC actions. Recently I watched a movie in which a person stuck in an abandoned basement for months tries to communicate in Morse code via a light bulb. Morse code has only DOT and DASH, yet with a single channel and different patterns it can represent every letter of the English alphabet. I decided to adopt a similar concept to represent AAC actions. With the Neosensory Buzz we actually have 4 channels (4 motors) and can make hundreds of different patterns.

Mapping actions to vibration

After combining the concepts of AAC and Morse code, I mapped each AAC action (such as Like, Thank You, Hungry, etc.) to a visual pattern of DOTs and DASHes. One difference from Morse code is that Morse signals are sequential, such as DOT DOT DOT (S) or DASH DASH DASH (O), whereas the Buzz has 4 motors that can vibrate independently, giving us 4 channels. So "LIKE" is represented by DOT, DOT, DASH, with motors 1, 2, and 3 vibrating simultaneously.

A DOT is represented by a low value of 50 and a DASH by a high value of 255, so the "LIKE" action above becomes the frame [50, 50, 255, 0], repeated 40 times (40 frames).
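As a concrete sketch of that mapping on the app side (Swift; the patterns for actions other than LIKE and the sendVibration helper are illustrative assumptions, not the app's actual code):

import Foundation

// DOT = low value (50), DASH = high value (255), 0 = motor off.
enum Signal: UInt8 {
    case off  = 0
    case dot  = 50
    case dash = 255
}

// Each AAC action maps to one 4-motor frame (Buzz motors 1-4).
// Only LIKE comes from the article; the other patterns are placeholders.
let aacPatterns: [String: [Signal]] = [
    "LIKE":      [.dot,  .dot,  .dash, .off],   // [50, 50, 255, 0]
    "THANK YOU": [.dash, .dot,  .dot,  .off],
    "HUNGRY":    [.dot,  .dash, .dash, .dot]
]

// Repeat the frame 40 times so the pattern lasts long enough to feel.
func frames(for action: String, repeatCount: Int = 40) -> [[UInt8]] {
    guard let frame = aacPatterns[action] else { return [] }
    return Array(repeating: frame.map { $0.rawValue }, count: repeatCount)
}

// sendVibration(frames(for: "LIKE"))   // hypothetical helper that streams the frames to the Buzz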

Similarly, I mapped several other AAC actions to vibration patterns, as shown in the image above.

Train Your Brain

All this mapping may sound complicated, and you must be wondering how your brain will know which vibration pattern means what. But you would be surprised how quickly your brain can learn these patterns.

In the past, it was believed that the human brain does not grow or change after a certain age and behaves like a static organ, but recent research has shown that its neural networks change over time, creating new pathways and pruning old ones. This is called brain plasticity. Researchers studied drivers' brains before and after they took the London taxi-driving test and observed new neural pathways that developed after the test.

Similarly, haptic feedback can be used to deliver visual or audio information to the brain. With time and practice, the brain accepts the haptic feedback and processes it as if it had been received from the retina or the ears, substituting for a lost sense.

It has also been shown that new senses can be developed. For example, during deep sleep we do not usually perceive much besides smell, but we can train the brain to respond to certain haptic feedback even while asleep, effectively creating a new sense.

On the training page, the app presents different cards to the user and vibrates the Buzz. The user has to guess the correct card. If the selection is wrong, the card is highlighted in red; if it is correct, the card is highlighted in green and the next set of cards is shown. With practice over time, the user learns to map each vibration to its icon.
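The training check itself is simple. A minimal Swift sketch (type and callback names are illustrative, not the app's actual code):

import Foundation

struct TrainingRound {
    let cards: [String]    // AAC actions currently shown as cards
    let answer: String     // the action whose pattern was just buzzed
}

func handleSelection(_ selected: String,
                     in round: TrainingRound,
                     highlight: (_ card: String, _ color: String) -> Void,
                     showNextRound: () -> Void) {
    if selected == round.answer {
        highlight(selected, "green")   // correct guess: green, then advance to the next set of cards
        showNextRound()
    } else {
        highlight(selected, "red")     // wrong guess: red, the user can try again
    }
}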

Baby's Activity Monitoring

The Baby Connect device runs Edge Impulse inferencing on the Arduino Nano 33 BLE Sense microcontroller, which has a built-in microphone. The microphone records a 3-second audio sample, and the model classifies the audio into 4 labels: noise, hungry, fussy, and pain.

I collected 5 minutes of noise data from different background sounds such as people talking, a kitchen faucet, chopping vegetables, the AC, a fan, and so on, plus about 3 minutes each of hungry, fussy, and pain.

For the baby cry dataset, I used the https://github.com/gveres/donateacry-corpus repo, which was part of a donation campaign. I also collected audio from YouTube. While analyzing different types of baby cries, I found a pattern that separates hungry from pain: when a baby is in pain, it cries continuously, pausing only to breathe, but when it is hungry you will notice gaps in between. It is very hard, however, to separate hungry from fussy. At present my model can classify noise vs. baby cry with around 85% accuracy, but it is not very good at separating hungry vs. pain vs. fussy. I need a lot more data to make this work better. I will start a "donate" campaign soon to collect hundreds of samples from all over the world and improve the model, and I will also open source the Edge Impulse project so that others can contribute.

When a cry is detected, the BLE Sense sends data to the app via Bluetooth and the app vibrates the Buzz. At the same time it turns all 3 LEDs red in case the parent misses the vibration. The LEDs stay lit until the user presses the "+" or "-" button on the Buzz.
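On the app side, the payload is just the semicolon-separated string the firmware builds with sprintf: temperature;humidity;IAQ;light;battery;label. A minimal Swift sketch of the notification handler (helpers such as buzzCryPattern and setBuzzLEDs are assumptions):

import Foundation

struct DeviceReading {
    let temperature: Int
    let humidity: Int
    let iaq: Int
    let light: Int
    let battery: Int
    let label: Int   // classifier index; 2 is the firmware's default "noise" label
}

func parseReading(_ payload: String) -> DeviceReading? {
    let parts = payload.split(separator: ";").compactMap { Int($0) }
    guard parts.count == 6 else { return nil }
    return DeviceReading(temperature: parts[0], humidity: parts[1], iaq: parts[2],
                         light: parts[3], battery: parts[4], label: parts[5])
}

// In the BLE notification callback:
// if let reading = parseReading(payload), reading.label != 2 {
//     buzzCryPattern()      // vibrate the Buzz with the cry pattern
//     setBuzzLEDs(.red)     // LEDs stay red until "+" or "-" is pressed on the Buzz
// }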

Baby's Environment Monitoring

The Nano 33 BLE Sense is connected to a BME680 sensor, which provides very precise readings for temperature, humidity, and indoor air quality (IAQ); ambient light comes from the board's APDS9960 sensor. The user can press the power button on the Buzz to feel the data through vibration. The sensor reading is mapped to the 0-255 range and sent to the Buzz.
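A simple linear scaling is enough for that mapping; a Swift sketch, assuming illustrative input ranges:

// Scale a sensor reading into the Buzz's 0-255 range.
func scaleToBuzz(_ value: Double, minValue: Double, maxValue: Double) -> UInt8 {
    let clamped = min(max(value, minValue), maxValue)
    return UInt8((clamped - minValue) / (maxValue - minValue) * 255.0)
}

// e.g. an IAQ reading on an assumed 0-500 scale:
// let intensity = scaleToBuzz(iaqReading, minValue: 0, maxValue: 500)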

Program The Buttons

With the latest Neosensory firmware you can now program all 3 buttons and the LEDs on the Buzz. Advanced users can communicate via Morse code using the Buzz buttons. Check out the demo below.

Voice/Speech Translation

For older kids or advanced users, tapping on icons may feel time-consuming. Instead, the user can tap and hold the microphone button in the app and speak the action. The app transcribes the captured voice into text and maps it to the corresponding AAC action.
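A minimal sketch of that last step, assuming Apple's Speech framework for transcription and a simple keyword lookup (the action list and the frames/sendVibration helpers are the illustrative ones from the earlier sketch):

import Foundation
import Speech

// Map a finished transcript onto one of the AAC actions by keyword match.
func action(forTranscript transcript: String) -> String? {
    let normalized = transcript.lowercased()
    let actions = ["like", "thank you", "hungry", "water", "bathroom"]   // assumed action names
    return actions.first { normalized.contains($0) }
}

// In the SFSpeechRecognizer result handler, once the result is final:
// if result.isFinal,
//    let match = action(forTranscript: result.bestTranscription.formattedString) {
//     sendVibration(frames(for: match.uppercased()))   // same pattern a tap on the icon would send
// }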

Speech To Emotion Using AWS Prediction

This is a work in progress. I am integrating AWS Amplify to leverage its Predictions category.

Project Demo

Schematics

baby_connect_schematic_bb_3BdVQikvMt.png

Code

BabyConnectNanoBLE_V3_TinyML.ino - Arduino

 

#include <ArduinoBLE.h>
#include "bsec.h"
#include <Arduino_APDS9960.h>
#include <PDM.h>
#include <baby-cry_inferencing.h>

// If your target is limited in memory remove this macro to save 10K RAM
#define EIDSP_QUANTIZE_FILTERBANK   0
#define EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW 3


#define SERVICE_UUID                  "3D7D1101-BA27-40B2-836C-17505C1044D7"
#define ENV_WRITE_CHAR_UUID           "3D7D1102-BA27-40B2-836C-17505C1044D7"
#define ENV_READ_CHAR_UUID            "3D7D1103-BA27-40B2-836C-17505C1044D7"
#define CLASSIFICATION_CHAR_UUID      "3D7D1104-BA27-40B2-836C-17505C1044D7"


BLEService babyService(SERVICE_UUID);
BLECharacteristic bme680WriteChar(ENV_WRITE_CHAR_UUID, BLERead | BLENotify, "00;00;0000;000;000;0");
BLECharCharacteristic bme680ReadChar(ENV_READ_CHAR_UUID, BLERead | BLENotify);
BLECharCharacteristic classificationChar(CLASSIFICATION_CHAR_UUID, BLERead | BLENotify);

// Create an object of the class Bsec
Bsec iaqSensor;
String output;
void checkIaqSensorStatus(void);

int temperature = 0;
int humid = 0;
int airQuality = 0;

int r = 0, g = 0, b = 0, c = 0;


void enableLowPower() {

  digitalWrite(PIN_ENABLE_SENSORS_3V3, LOW);
  digitalWrite(PIN_ENABLE_I2C_PULLUP, LOW);

}

void disableLowPower() {
  digitalWrite(PIN_ENABLE_SENSORS_3V3, HIGH);
  digitalWrite(PIN_ENABLE_I2C_PULLUP, HIGH);

}

/** Audio buffers, pointers and selectors */
typedef struct {
  signed short *buffers[2];
  unsigned char buf_select;
  unsigned char buf_ready;
  unsigned int buf_count;
  unsigned int n_samples;
} inference_t;

static inference_t inference;
static bool record_ready = false;
static signed short *sampleBuffer;
static bool debug_nn = false; // Set this to true to see e.g. features generated from the raw signal
static int print_results = -(EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW);
int predicted_label = 2;
float prediction_threshold = 0.9;

void setup() {
  Serial.begin(9600);

  pinMode(LED_BUILTIN, OUTPUT);
  digitalWrite(LED_PWR, LOW);

  if (!BLE.begin())
  {
    Serial.println("starting BLE failed!");
    while (1);
  }

  Wire.begin();
  iaqSensor.begin(BME680_I2C_ADDR_SECONDARY, Wire);
  checkIaqSensorStatus();
  bsec_virtual_sensor_t sensorList[10] = {
    BSEC_OUTPUT_RAW_TEMPERATURE,
    BSEC_OUTPUT_RAW_HUMIDITY,
    BSEC_OUTPUT_IAQ

  };

  iaqSensor.updateSubscription(sensorList, 3, BSEC_SAMPLE_RATE_LP);
  checkIaqSensorStatus();

  if (!APDS.begin()) {
    Serial.println("Error initializing APDS9960 sensor.");
  }
  BLE.setLocalName("BabyConnect");
  BLE.setDeviceName("BabyConnect");
  BLE.setAdvertisedService(babyService);
  babyService.addCharacteristic(bme680ReadChar);
  babyService.addCharacteristic(bme680WriteChar);
  babyService.addCharacteristic(classificationChar);

  BLE.addService(babyService);

  BLE.advertise();
  Serial.println("Bluetooth device active, waiting for connections...");

  run_classifier_init();
  if (microphone_inference_start(EI_CLASSIFIER_SLICE_SIZE) == false) {
    ei_printf("ERR: Failed to setup audio sampling\r\n");
    return;
  }
}

void loop()
{
  BLEDevice central = BLE.central();

  if (central)
  {
    Serial.print("Connected to central: ");
    Serial.println(central.address());
    digitalWrite(LED_BUILTIN, HIGH);
    disableLowPower();

    while (central.connected()) {


      if (iaqSensor.run()) { // If new data is available

        temperature = (int) iaqSensor.rawTemperature * 1.8 + 32;
        humid =  iaqSensor.rawHumidity;
        airQuality = iaqSensor.iaq;
      } else {
        checkIaqSensorStatus();
      }

      if (APDS.colorAvailable()) {
        APDS.readColor(r, g, b, c);
      }

      int battery = analogRead(A0);
      int batteryLevel = map(battery, 0, 1023, 0, 100);

      //now run EI inference

      bool m = microphone_inference_record();
      if (!m) {
        ei_printf("ERR: Failed to record audio...\n");
        return;
      }

      signal_t signal;
      signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
      signal.get_data = &microphone_audio_signal_get_data;
      ei_impulse_result_t result = {0};

      EI_IMPULSE_ERROR r = run_classifier_continuous(&signal, &result, debug_nn);
      if (r != EI_IMPULSE_OK) {
        ei_printf("ERR: Failed to run classifier (%d)\n", r);
        return;
      }

      float max_prediction = 0.0;
      
      if (++print_results >= (EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)) {
        // print the predictions
        ei_printf("Predictions ");
        ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
                  result.timing.dsp, result.timing.classification, result.timing.anomaly);
        ei_printf(": \n");
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
          ei_printf("    %s: %.5f\n", result.classification[ix].label, result.classification[ix].value);

          if ( result.classification[ix].value >= max_prediction) {
            max_prediction = result.classification[ix].value;
            predicted_label = ix;
          }

        }
        

        
        
        if(max_prediction < prediction_threshold){
          predicted_label = 2;
        }else{
          ei_printf("Predicted activity = %d with score=%.5f\n", predicted_label, max_prediction);
        }
      }
      //end of EI inference

      char buf[32];   // sized to hold six ';'-separated integer fields plus the terminator
      snprintf(buf, sizeof(buf), "%d;%d;%d;%d;%d;%d", temperature, humid, airQuality, c, batteryLevel, predicted_label);
      Serial.println(buf);
      bme680WriteChar.writeValue(String(buf).c_str());
      //delay(1 * 1000);

    }
  }
  digitalWrite(LED_BUILTIN, LOW);
  enableLowPower();

}

void checkIaqSensorStatus(void)
{
  if (iaqSensor.status != BSEC_OK) {
    if (iaqSensor.status < BSEC_OK) {
      output = "BSEC error code : " + String(iaqSensor.status);
      Serial.println(output);
      while (true) {

      }

    } else {
      output = "BSEC warning code : " + String(iaqSensor.status);
      Serial.println(output);
    }
  }

  if (iaqSensor.bme680Status != BME680_OK) {
    if (iaqSensor.bme680Status < BME680_OK) {
      output = "BME680 error code : " + String(iaqSensor.bme680Status);
      Serial.println(output);
      while (true) {

      }
    } else {
      output = "BME680 warning code : " + String(iaqSensor.bme680Status);
      Serial.println(output);
    }
  }
}

/**
 * @brief      Printf function uses vsnprintf and output using Arduino Serial
 *
 * @param[in]  format     Variable argument list
 */
void ei_printf(const char *format, ...) {
    static char print_buf[1024] = { 0 };

    va_list args;
    va_start(args, format);
    int r = vsnprintf(print_buf, sizeof(print_buf), format, args);
    va_end(args);

    if (r > 0) {
        Serial.write(print_buf);
    }
}

/**
 * @brief      PDM buffer full callback
 *             Get data and call audio thread callback
 */
static void pdm_data_ready_inference_callback(void)
{
    int bytesAvailable = PDM.available();

    // read into the sample buffer
    int bytesRead = PDM.read((char *)&sampleBuffer[0], bytesAvailable);

    if (record_ready == true) {
        for (int i = 0; i < bytesRead >> 1; i++) {
            inference.buffers[inference.buf_select][inference.buf_count++] = sampleBuffer[i];

            if (inference.buf_count >= inference.n_samples) {
                inference.buf_select ^= 1;
                inference.buf_count = 0;
                inference.buf_ready = 1;
            }
        }
    }
}

/**
 * @brief      Init inferencing struct and setup/start PDM
 *
 * @param[in]  n_samples  The n samples
 *
 * @return     { description_of_the_return_value }
 */
static bool microphone_inference_start(uint32_t n_samples)
{
    inference.buffers[0] = (signed short *)malloc(n_samples * sizeof(signed short));

    if (inference.buffers[0] == NULL) {
        return false;
    }

    inference.buffers[1] = (signed short *)malloc(n_samples * sizeof(signed short));

    if (inference.buffers[1] == NULL) {
        free(inference.buffers[0]);
        return false;
    }

    sampleBuffer = (signed short *)malloc((n_samples >> 1) * sizeof(signed short));

    if (sampleBuffer == NULL) {
        free(inference.buffers[0]);
        free(inference.buffers[1]);
        return false;
    }

    inference.buf_select = 0;
    inference.buf_count = 0;
    inference.n_samples = n_samples;
    inference.buf_ready = 0;

    // configure the data receive callback
    PDM.onReceive(&pdm_data_ready_inference_callback);

    // optionally set the gain, defaults to 20
    PDM.setGain(80);

    PDM.setBufferSize((n_samples >> 1) * sizeof(int16_t));

    // initialize PDM with:
    // - one channel (mono mode)
    // - a 16 kHz sample rate
    if (!PDM.begin(1, EI_CLASSIFIER_FREQUENCY)) {
        ei_printf("Failed to start PDM!");
    }

    record_ready = true;

    return true;
}

/**
 * @brief      Wait on new data
 *
 * @return     True when finished
 */
static bool microphone_inference_record(void)
{
    bool ret = true;

    if (inference.buf_ready == 1) {
        ei_printf(
            "Error sample buffer overrun. Decrease the number of slices per model window "
            "(EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)\n");
        ret = false;
    }

    while (inference.buf_ready == 0) {
        delay(1);
    }

    inference.buf_ready = 0;

    return ret;
}

/**
 * Get raw audio signal data
 */
static int microphone_audio_signal_get_data(size_t offset, size_t length, float *out_ptr)
{
    numpy::int16_to_float(&inference.buffers[inference.buf_select ^ 1][offset], out_ptr, length);

    return 0;
}

/**
 * @brief      Stop PDM and release buffers
 */
static void microphone_inference_end(void)
{
    PDM.end();
    free(inference.buffers[0]);
    free(inference.buffers[1]);
    free(sampleBuffer);
}

#if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_MICROPHONE
#error "Invalid model for current sensor."
#endif

Buzz Swift Module

Forked from Chris Bartley's repo, with added commands for the LEDs and buttons.

Swift code

just4give / baby-connect-swift


Credits

Mithun Das

A tech enthusiast who loves creating, sharing, evaluating and learning new technology. Follow me on Twitter @tweetmithund

 

Hackster.io

This content is provided by our content partner Hackster.io, an Avnet developer community for learning, programming, and building hardware. Visit them online for more great content like this.

This article was originally published at Hackster.io. It was added to IoTplaybook or last modified on 07/12/2021.