The BlindFind wearable computer-vision rig will be a breakthrough device for millions of visually impaired people, allowing them to go, unassisted, where they could not go before.
About
Brief Description: An estimated 285 million people worldwide are Visually Impaired (VI): 39 million are blind and 246 million have low vision. About 10 million people in the US are VI, a disability that also affects their families and friends. The VI typically use a white cane to navigate and function well in familiar surroundings, but they often depend on family and friends to shop and to navigate outdoors and indoors, and their career options are limited by their disability.

The traditional approach to giving the VI greater mobility and independence has been to alter the surroundings to fit their needs, e.g., by installing active and passive devices in intersections, doorways, elevators, shopping aisles, etc. We propose a paradigm shift: it is far more effective to enable the VI to function in existing environments designed for the sighted world by developing the appropriate computer-vision technology to render them independent. This approach is significantly more cost-effective and thus more broadly applicable. The proposed technology is transformative in the lives of millions of people and promises to increase the contribution the VI can make to society.

Professor Ben Kimia (https://vivo.brown.edu/display/bkimia) is leading a team from the School of Engineering at Brown University in the development of BlindFind, a wearable computer-vision system and complementary crowd-sourced, geo-located mapping data repository to assist the visually impaired as they navigate public indoor environments. The system consists of small cameras mounted on eyeglasses, a haptic belt, an Inertial Measurement Unit (IMU) worn on the belt, and a bone-conduction headphone set, all connected to a small laptop carried in a backpack. New sites - airports, educational facilities, shopping centers, etc. - are initially mapped by a couple of visually impaired persons with sighted assistance.
During these visits, imagery of the space is gathered and uploaded to a central repository. BlindFind then guides users through the mapped space by matching newly acquired imagery to imagery stored in the repository and plotting a path, which is communicated through the haptic interface. The repository is continuously refined by each subsequent visit. The key point is that the construction and maintenance of maps and annotations is automatic: the actual work is done by the visually impaired community simply by visiting a site, eliminating the need for a cumbersome and expensive external infrastructure to acquire models and maps of sites of potential interest.

The team has developed a working prototype of each element of the system, achieving a number of critical milestones along the way. Last year a team of blind and low-vision volunteers tested the prototype haptic belt-based guidance system and gave it rave reviews. Testing of the system's IMU and tracking components has produced good results, indicating that they can be further optimized to support navigation and guidance. Collaborating with the co-Director of Brown's Creative Mind initiative, the team produced prototype glasses that can support the integration of six to eight miniature cameras.

By overcoming these technical challenges, the Laboratory for Engineering Man/Machine Systems in Brown University's School of Engineering aims to inspire confidence, independence, and self-sufficiency among blind and low-vision users.

Additional Information: US Patent application 14/707,163 is pending.
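The guidance loop described above - localize the user by matching current camera imagery against the crowd-sourced repository, plot a path through the mapped site, and translate the next step into a haptic cue - can be sketched in miniature as follows. Every name, descriptor, and map here is a hypothetical illustration of the idea, not the team's actual implementation.

```python
# Illustrative sketch of an image-match -> path-plan -> haptic-cue loop.
# The site map, descriptors, and motor layout are invented placeholders.
from collections import deque

# A mapped site as a graph: node -> (x, y) position, plus adjacency.
NODES = {
    "entrance": (0, 0), "hall": (0, 5), "elevator": (5, 5), "cafe": (5, 0),
}
EDGES = {
    "entrance": ["hall", "cafe"],
    "hall": ["entrance", "elevator"],
    "elevator": ["hall"],
    "cafe": ["entrance"],
}

# Stand-in "repository": one toy image descriptor per mapped node.
REPOSITORY = {
    "entrance": [0.1, 0.9], "hall": [0.4, 0.6],
    "elevator": [0.8, 0.2], "cafe": [0.9, 0.9],
}

def localize(query_descriptor):
    """Match the current camera descriptor to the closest stored one."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REPOSITORY, key=lambda n: dist(REPOSITORY[n], query_descriptor))

def plan_path(start, goal):
    """Breadth-first search over the site graph."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            break
        for nxt in EDGES[node]:
            if nxt not in came_from:
                came_from[nxt] = node
                frontier.append(nxt)
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return list(reversed(path))

def haptic_cue(here, nxt):
    """Map the direction of the next waypoint to a belt motor."""
    (x0, y0), (x1, y1) = NODES[here], NODES[nxt]
    if x1 > x0:
        return "right motor"
    if x1 < x0:
        return "left motor"
    return "front motor" if y1 > y0 else "rear motor"

here = localize([0.12, 0.88])        # imagery matches the entrance best
path = plan_path(here, "elevator")   # entrance -> hall -> elevator
cue = haptic_cue(path[0], path[1])   # buzz the motor toward "hall"
```

In the real system, the toy descriptors would be replaced by visual features extracted from the eyeglass cameras and fused with IMU tracking between image matches; the sketch only shows how crowd-sourced mapping data, localization, and the haptic belt fit together.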