Our long-term vision is to design a device that helps others better understand and communicate with nonverbal and minimally verbal individuals. To do this, we train machine learning models on primary caregivers’ unique knowledge of what an individual’s nonverbal communication means.
What is the plan to reach our final goal?
What are we currently working on?
Our current focus is on developing personalized models that classify vocalizations using in-the-moment live labels provided by caregivers via the Commalla labeling app. As part of this work, we are developing scalable methods for collecting and live-labeling naturalistic data, along with processing methods that prepare the data for machine learning algorithms. We are piloting and refining our data collection, machine learning models, and vision with a small number of families through a highly participatory design process.
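To make the idea of a personalized vocalization classifier concrete, here is a minimal sketch, not our actual pipeline, of how caregiver-labeled audio clips could be turned into an individual model. It assumes short WAV clips exported with labels from a labeling app, MFCC features computed with librosa, and a scikit-learn classifier; the file paths, label names, and function names are illustrative only.

# Minimal sketch (illustrative, not the Commalla pipeline):
# train a per-individual classifier from caregiver-labeled audio clips.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def clip_features(path: str, sr: int = 16000) -> np.ndarray:
    """Summarize one audio clip as the mean and std of its MFCCs."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def train_personalized_model(labeled_clips: list[tuple[str, str]]) -> RandomForestClassifier:
    """Fit one classifier per individual from (wav_path, caregiver_label) pairs."""
    X = np.stack([clip_features(path) for path, _ in labeled_clips])
    y = [label for _, label in labeled_clips]
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    # Cross-validation gives a rough sense of how well the caregiver's labels
    # generalize for this individual before the model is used in practice.
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
    return model.fit(X, y)


# Hypothetical usage; paths and labels would come from a labeling-app export.
# model = train_personalized_model([
#     ("clips/0001.wav", "delight"),
#     ("clips/0002.wav", "frustration"),
# ])

In practice the feature extraction, model family, and label set would all be chosen per individual and refined with the family, which is one reason the work above emphasizes scalable collection and live labeling rather than a fixed model.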