
Our Research

The ECHOS Platform to Enhance Communication for Nonverbal Children with Autism: A Case Study

ABSTRACT

Current augmentative communication technology has had limited success conveying the needs of some individuals with minimally verbal autism spectrum disorder (mvASD). Primary caregivers report understanding these individuals' non-traditional utterances better than people less familiar with the individual, such as teachers and community members, do. We present an eight-month, multi-phase case study of ECHOS, a translational platform that uses primary caregivers' unique knowledge to enhance communicative and affective exchanges between mvASD individuals and the broader community. Through iterative development and participatory design, we found that physiological sensors were impractical for long-term use, that on-body audio was content-rich and easily accessible, and that a custom in-the-moment labeling app was transformative in obtaining accurate labels from caregivers for machine learning. This paper presents the design methodology, results, and reflections from our case study and provides insights into development with and for the special needs community.

Narain, J.* & Johnson, K.T.*, Picard, R.W., Maes, P. "Zero-Shot Transfer Learning to Enhance Communication for Minimally Verbal Individuals with Autism using Naturalistic Data," NeurIPS Workshop on AI for Social Good, December 2019. (*equal contribution)

Zero-Shot Transfer Learning to Enhance Communication for Minimally Verbal Individuals with Autism using Naturalistic Data

ABSTRACT

We applied zero-shot transfer learning to classify vocalizations from audio recordings of a nonverbal individual with autism. Data were recorded in natural environments using a small wearable camera and sparsely labeled in real time with a custom-built open-source app. We then trained LSTM models on VGGish audio embeddings from the generic AudioSet database for three categories of vocalizations: laughter, negative affect, and self-soothing sounds. We applied these models to the unique audio recordings of a young autistic boy with no spoken words. The models identified laughter and negative affect with 70% and 69% accuracy, respectively, but classification of the self-soothing sounds produced accuracies around chance. This work highlights both the need and the potential for specialized, naturalistic databases and novel computational methods to enhance translational communication technologies for underserved populations.

Narain, J.* & Johnson, K.T.*, Picard, R.W., Maes, P. "Zero-Shot Transfer Learning to Enhance Communication for Minimally Verbal Individuals with Autism using Naturalistic Data," NeurIPS Workshop on AI for Social Good, December 2019. (*equal contribution)

Augmenting Natural Communication in Nonverbal Individuals with Autism

ABSTRACT

Despite technological and usability advances, some individuals with minimally verbal autism (mvASD) still struggle to convey affect and intent using current augmentative communication systems. In contrast, their non-speech vocalizations are often rich in affect and context and accessible in almost any environment. Our system uses primary caregivers' unique knowledge of an individual's vocal sounds to label and train machine learning models in order to build holistic communication technology (see Figure 1).

Narain, J.* & Johnson, K.T.*, Picard, R.W., Maes, P. "Augmenting Natural Communication in Nonverbal Individuals with Autism," International Society for Autism Research, 2020. (*equal contribution)
