MIT developing a wearable reading device for the visually impaired
MIT’s FingerReader is an index-finger-worn device that helps the visually impaired read printed text by scanning it with a miniature camera.
The FingerReader expands upon the earlier, similar EyeRing project by adding multimodal feedback via vibration motors, a new dual-material case design, and a high-resolution video camera. The EyeRing device used an OmniVision OV7725 VGA CMOS image sensor for imaging: a progressive-scan 1/4” sensor that operates at frame rates of up to 60 fps in VGA mode and features a 6 µm x 6 µm pixel size.
The design keeps the camera focused at a fixed distance and lets users exploit their sense of touch while scanning a surface. The finger-worn device also features software that provides haptic feedback and runs text-extraction algorithms. Because the algorithm expects a close-up view of printed text as input, the team starts with image binarization and selective contour extraction, then looks for text lines by fitting lines to triplets of pruned contours.
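The line-fitting step described above can be sketched roughly as follows: fit a least-squares line through the centroids of a triplet of contours and keep it only if the three points are nearly collinear. The residual threshold and the function name are illustrative assumptions, not details from the MIT paper.

```python
import numpy as np

def fit_line_to_triplet(p1, p2, p3, max_residual=2.0):
    """Least-squares fit of y = m*x + b through three contour centroids.

    Returns (m, b) if the points are roughly collinear (RMS residual
    below max_residual pixels), else None. Illustrative sketch only;
    the paper's actual fitting and pruning criteria are not given here.
    """
    pts = np.array([p1, p2, p3], dtype=float)
    # Design matrix for the model y = m*x + b
    A = np.column_stack([pts[:, 0], np.ones(3)])
    (m, b), residuals, _, _ = np.linalg.lstsq(A, pts[:, 1], rcond=None)
    # residuals is empty when the fit is rank-deficient; treat that as exact
    err = np.sqrt(residuals[0] / 3) if residuals.size else 0.0
    return (m, b) if err < max_residual else None
```

Collinear centroids yield a candidate line; a scattered triplet is rejected, which is what prunes spurious contour groupings before the supporting-contour search.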
It then looks for supporting contours for each candidate line based on their distance from the line, and eliminates duplicates using a 2D histogram of slope and intercept. Finally, the team refines the line equations based on their supporting contours, extracts words from the characters along the selected text line, and sends them to the OCR engine. High-confidence words are retained and tracked by matching against image patches of the words, which are accumulated with each frame.
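The duplicate-elimination step above can be sketched as coarse bucketing in (slope, intercept) space: lines that land in the same histogram cell are treated as one. The bin widths here are illustrative guesses, not values from the MIT paper.

```python
def deduplicate_lines(lines, slope_bin=0.05, intercept_bin=5.0):
    """Collapse near-duplicate candidate lines.

    Buckets each (m, b) pair into a coarse 2D grid over slope and
    intercept and keeps one representative per occupied cell.
    Bin widths are assumptions for illustration only.
    """
    kept = {}
    for m, b in lines:
        # Quantize slope and intercept to a 2D histogram cell
        cell = (round(m / slope_bin), round(b / intercept_bin))
        kept.setdefault(cell, (m, b))  # first line in each cell wins
    return list(kept.values())
```

Two candidates with nearly identical slope and intercept collapse into one, while a line with a clearly different intercept survives as a separate text line.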
The motion of the user is recorded to predict where the word patches will appear next, allowing a smaller search region, according to the MIT research paper. When the user veers from the scan line, tactile and auditory feedback is triggered. When the system cannot find more word blocks along the line, the FingerReader fires an event to let the user know they have reached the end of a printed line. New high-confidence words trigger an event that invokes the text-to-speech engine to say the word aloud. When skimming through a sentence, the user hears the one or two words currently under their finger and can decide whether to keep reading or move to another area.
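The per-frame feedback behavior described above can be sketched as a small event function: a drift from the scan line raises a warning cue, and an exhausted word list raises an end-of-line event. The threshold and event names are hypothetical; the paper describes the behaviors, not this API.

```python
def reading_events(finger_y, line_y, words_remaining, max_drift=8):
    """Decide which feedback events to fire for one frame of scanning.

    finger_y / line_y are vertical pixel positions; words_remaining is
    the list of word blocks still ahead on the current text line.
    Returns a list of event names. Illustrative sketch only.
    """
    events = []
    if abs(finger_y - line_y) > max_drift:
        events.append("veer_warning")  # tactile + auditory cue to recenter
    if not words_remaining:
        events.append("end_of_line")   # no more word blocks along the line
    return events
```

In a real device these events would drive the vibration motors and the text-to-speech engine; here they are just returned for inspection.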
The current FingerReader model is a proof-of-concept prototype, and the team is working to enhance it and develop it into a commercially viable product.
View the MIT research paper.
James Carroll
Former VSD Editor James Carroll joined the team in 2013. Carroll covered machine vision and imaging from numerous angles, including application stories, industry news, market updates, and new products. In addition to writing and editing articles, Carroll managed the Innovators Awards program and webcasts.