Product design to improve inclusion for neurodiversity & learning challenges
In the summer of 2022, as a high school student, I completed the Engineering Design Lab program at Tufts University. I was part of a team of four, where I served as the mechanical lead. In one week we built a prototype of a device to assist neurodivergent people, such as those with alexithymia, dyslexia, and ADHD. The device helped those with alexithymia (a difficulty deciphering the emotions of others, often found in those with autism) by taking a photo of an individual and, using Google’s Cloud Vision API, displaying the most likely emotion being expressed. The device also incorporated a text reader that converted photographed text into both a dyslexia-friendly font and a computer-generated reading played through the embedded speaker, helping both those with dyslexia and those with ADHD.
The summer program exposed me to robotics and Python programming. Through the course, I became comfortable with small computers (Raspberry Pi) and Internet of Things devices. I also developed my 3D modeling skills and learned laser-cutting techniques.
Design Process
Brainstorming
For the device to accomplish both the emotion recognition and text reader functions, it would need to house:
Raspberry Pi
Battery for the Raspberry Pi
Camera to take photos of faces or text
Screen to show the likeliest emotion expressed or the text in the Dyslexie font
Speaker capable of outputting the spoken text
A way for the user to select which function the device would perform
Wires connecting each component to the Raspberry Pi
Initially, an ergonomic handle was considered to enhance usability and comfort, but it was ultimately discarded because it made the device resemble a firearm too closely.
Original Sketched Designs
Planning a Solution
Main Constraints:
$50 budget
4 days for building and testing
Access to Raspberry Pi, battery, spare electronics, and NOLOP Makerspace
Given the time constraint, simplicity and ease of assembly were prioritized over aesthetics and complexity.
Fig. 1: Initial layout of the parts, accounting for the size and length of the different components and connecting wires. Fig. 4 shows the design from when we only had access to a small screen; we later used our budget to purchase a larger touchscreen, which rendered the two adjacent buttons unnecessary (see the mode-selection sketch below).
Figs. 1–4: Initial part layouts and casing designs.
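The write-up doesn't document how the on-screen mode selection was implemented, so as a purely hypothetical sketch of the idea, assuming a Tkinter interface on the Raspberry Pi's touchscreen (the toolkit choice, widget layout, and callback names are all illustrative), the two physical buttons could be replaced with on-screen ones like this:

```python
# Hypothetical sketch: replacing the two physical buttons with
# touchscreen buttons via Tkinter (toolkit choice is an assumption).
import tkinter as tk

def start_emotion_mode():
    # Placeholder for the capture-and-classify pipeline.
    print("Capturing face photo for emotion recognition...")

def start_reader_mode():
    # Placeholder for the OCR and text-to-speech pipeline.
    print("Capturing text photo for the reader...")

root = tk.Tk()
root.title("Assistive Device")
tk.Button(root, text="Read Emotions", height=4,
          command=start_emotion_mode).pack(fill="x", padx=10, pady=5)
tk.Button(root, text="Read Text", height=4,
          command=start_reader_mode).pack(fill="x", padx=10, pady=5)
root.mainloop()
```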
Prototyping
First Prototypes
The first few prototypes helped calibrate the measurements, ensuring every piece of the casing fit together while also housing the internal components properly. These early prototypes still accounted for two physical buttons along with the smaller screen. Fig. 5 shows how the internal components were configured. As seen in Fig. 7, the buttons required their wires to protrude from the device, which initially spurred conversation about constructing button covers but was eventually resolved by the larger touchscreen.
Final Prototypes
These last two prototypes came after the design shift to the larger touchscreen, with the semi-transparent plastic casing being the final presented project. In these closing prototypes the emphasis was on making sure each component worked and functioned as fully intended within the casing. Fig. 8 shows the prototype where our group noticed that the speaker could not be plugged in properly and occasionally came loose from the Raspberry Pi. Fig. 9’s prototype fixed this problem by adding two small access holes so that the speaker cord could connect properly.
Figs. 5–7: Small-screen prototypes and laser-cut outline.
Figs. 8–9: Large-screen prototypes.
Programming and Testing
Understanding the Google Cloud Vision API (GCV API)
Google Cloud Vision is a cloud-based service that provides powerful image analysis capabilities, using machine learning to identify objects, faces, text, logos, landmarks, and more. Developers can integrate Cloud Vision into their applications via an API, allowing them to automatically categorize and extract insights from visual content, such as detecting emotions in faces, recognizing text (OCR), or classifying images based on their content.
GCV API applied for emotion recognition:
We would take a picture of the subject’s face and feed it through Cloud Vision’s “face detection” feature. The program would analyze the face’s “facial landmarks” (the positions of the eyes, ears, nose, mouth, etc.) and, based on the results, return a rating between very unlikely and very likely for four separate emotions: joy, sorrow, anger, and surprise. The likelihood rating for each emotion would then be printed on the touchscreen.
For example, with a picture of a smiling person, the screen could read:
Joy: Very likely
Sorrow: Very unlikely
Anger: Unlikely
Surprise: Possible
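As a minimal sketch of this step, assuming the official google-cloud-vision Python client (the function name and file-based input are illustrative, not our project’s actual code), the four likelihood fields on Cloud Vision’s FaceAnnotation response map directly to the four emotions above:

```python
# Sketch of the emotion-recognition step with the Cloud Vision Python
# client; rate_emotions and the file handling are illustrative.
from google.cloud import vision

def rate_emotions(photo_path: str) -> dict:
    """Return a likelihood label for joy, sorrow, anger, and surprise."""
    client = vision.ImageAnnotatorClient()
    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.face_detection(image=image)
    if not response.face_annotations:
        return {}  # no face found in the photo
    face = response.face_annotations[0]  # use the first detected face
    # Each field is a Likelihood enum, e.g. VERY_LIKELY -> "Very likely".
    label = lambda e: e.name.replace("_", " ").capitalize()
    return {
        "Joy": label(face.joy_likelihood),
        "Sorrow": label(face.sorrow_likelihood),
        "Anger": label(face.anger_likelihood),
        "Surprise": label(face.surprise_likelihood),
    }
```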
Testing the Program
For standardization, the device and algorithms were tested using a series of stock images and printed text, similar to the material shown here.
Ideally, a more comprehensive testing process would have covered edge cases and real-world photos containing varying levels of noise, distortion, and different fonts or handwriting. Instead, this streamlined process was used to get a working product within the time frame.
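To illustrate the streamlined check described above, a loop like the following could run the stock photos through the program and print each likelihood table for manual inspection (the folder name is illustrative, and it assumes the rate_emotions sketch from earlier is in scope):

```python
# Illustrative test harness: run every stock photo through the
# emotion rater and print the results for manual comparison.
from pathlib import Path

for photo in sorted(Path("stock_images").glob("*.jpg")):
    print(photo.name, rate_emotions(str(photo)))
```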
GCV API applied for text detection:
Using optical character recognition (OCR), a photo of the text is analyzed and converted into machine-encoded text. The API can analyze both handwritten and printed material, expanding the device’s possible use cases. The machine-encoded text is then either rendered in the Dyslexie font on the touchscreen or synthesized into speech using text-to-speech technology.
Both Cloud Vision’s “Text Detection” and “Document Text Detection” features were used to convert photographed text to machine-encoded text.
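As a minimal sketch of the OCR step, again assuming the google-cloud-vision Python client (read_text and the file handling are illustrative):

```python
# Sketch of the text-reading step. document_text_detection is Cloud
# Vision's feature for dense printed or handwritten pages, while
# text_detection suits sparser text such as signs.
from google.cloud import vision

def read_text(photo_path: str) -> str:
    """Return the machine-encoded text found in a photo."""
    client = vision.ImageAnnotatorClient()
    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)
    return response.full_text_annotation.text
```

From there, the returned string could be rendered in the Dyslexie font or handed to a text-to-speech engine; the write-up doesn’t name the TTS library we used, so that part is left out of the sketch.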
Product Demonstration and Takeaways
Working Product Demonstration
*No video was taken of the text-detection software working properly, but during our presentation the device was able to detect printed text.
Takeaways
The main takeaways from this project were a deeper understanding of what it takes to manage a project and the responsibility that a leading role requires. Devoting my time to understanding what our programming lead was doing and needed from the casing, while also designing and building the casing in the limited time, was extremely exciting but energy-consuming. I learned how to properly plan out the design process and communicate with my team so that none of us would experience major burnout while still creating a final product that we were all proud of.