
Our vision is motivated by communication.

Over 1 million people in the U.S. are non- or minimally verbal (nv/mv), including but not limited to people with autism, Down syndrome, and other genetic disorders. These individuals experience stress, frustration, and isolation when communicating in a society largely constructed around typical verbal speech. Yet, through non-speech vocalizations, they organically express rich affective and communicative information. Some vocalizations have self-consistent phonetic content (e.g., “ba” to mean “bathroom”), while others vary in tone, pitch, and duration depending on the individual’s emotional or physical state or intended communication. People who know the individual well often have a unique ability to understand these vocalizations.


We present, to our knowledge, the first project studying communicative intent and affect in naturalistic vocalizations of atypical verbal content from nonverbal and minimally verbal individuals.

Our long-term vision is to design a device that can help others better understand and communicate with nonverbal and minimally verbal individuals by training machine learning models using primary caregivers’ unique knowledge of the meaning of an individual’s nonverbal communication.

What is the plan to reach our final goal?


What are we currently working on?

Our current focus is on developing personalized models to classify vocalizations using in-the-moment live labels provided by caregivers via the Commalla labeling app. As part of this work, we are developing scalable methods for collecting and live-labeling naturalistic data, as well as processing methods for preparing that data for machine learning algorithms. We are currently piloting and refining our data collection, machine learning models, and vision with a small number of families through a highly participatory design process.
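To make the idea of a personalized classifier concrete, here is a minimal sketch of one possible approach: a nearest-centroid model trained per individual on caregiver-labeled acoustic feature vectors. The feature choice (pitch and duration), the labels, and all function names are illustrative assumptions for this sketch, not the project's actual pipeline or data.

```python
# Hypothetical sketch (not the project's actual method): a per-individual
# nearest-centroid classifier over caregiver-labeled vocalization features.
import math
from collections import defaultdict

def train_centroids(samples):
    """samples: list of (feature_vector, caregiver_label) pairs.
    Returns one mean feature vector (centroid) per label."""
    sums = defaultdict(lambda: None)
    counts = defaultdict(int)
    for vec, label in samples:
        if sums[label] is None:
            sums[label] = [0.0] * len(vec)
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(centroids, vec):
    """Assign a new vocalization to the label with the closest centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Toy usage with made-up features (mean pitch in Hz, duration in seconds):
model = train_centroids([
    ([220.0, 0.4], "request"),
    ([230.0, 0.5], "request"),
    ([400.0, 1.2], "delight"),
    ([390.0, 1.1], "delight"),
])
print(classify(model, [410.0, 1.0]))  # -> "delight"
```

A real system would replace the toy features with richer acoustic summaries and the simple centroid model with one learned per individual from many live-labeled recordings; the sketch only shows how personalized labels can drive classification.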
