We have always believed in equal accessibility for all. Even with all the amazing technologies available today, we found little that helps blind people recognize everyday objects. Realizing how widespread this problem is inspired us to create this app.
AEye is an iOS application that uses machine learning and artificial intelligence to help the visually impaired recognize objects. First, the user takes a picture of the object with the in-app camera; then our custom-trained machine learning model identifies the object and displays its description on the screen (e.g., "an orange"). So that a blind user doesn't have to read the screen, the app uses a text-to-speech module to voice the description aloud.
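For a sense of what the recognition step looks like in code, here is a minimal Swift sketch that runs a Core ML classifier on a captured photo via Apple's Vision framework. The model class name `AEyeClassifier` is a placeholder for illustration, not our actual bundled model:

```swift
import UIKit
import Vision
import CoreML

// Illustrative sketch: classify a captured photo with a custom Core ML model.
// "AEyeClassifier" is a placeholder name for the Xcode-generated model class.
func classify(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? AEyeClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }
    // Vision wraps the Core ML model and handles image scaling/cropping for us.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)  // e.g. "orange"
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```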
We used Swift and Xcode to build the back end and functionality of the app. We created a custom image-classification model using Apple's CoreML and CreateML frameworks. For the text-to-speech functionality, we integrated IBM Watson's API. We prototyped and designed the app in Sketch and Adobe XD.
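The app's text-to-speech goes through IBM Watson's service; as a credential-free stand-in that shows the same speak-the-label idea, here is a sketch using Apple's built-in AVSpeechSynthesizer instead (this is not the Watson integration itself):

```swift
import AVFoundation

// Stand-in for the Watson text-to-speech step: speak the predicted label on-device.
let synthesizer = AVSpeechSynthesizer()

func speak(description: String) {
    let utterance = AVSpeechUtterance(string: description)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

// Usage, chained after classification: speak(description: "an orange")
```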
We ran into multiple challenges throughout the hackathon, including working without the latest Xcode update, trouble getting image classification to work reliably, multiple syntax errors, and collaborating with Git from the Terminal. However, we stayed calm and consulted mentors throughout the day and night, and they were a tremendous help!
We are very proud of creating a machine learning model from scratch, which was a tedious process, and of designing an aesthetically pleasing interface. We are also proud of building an app that can aid millions of blind people and of making a contribution to the world around us.
As a diverse group of students ranging from 13 to 17, we learned to draw on each of our strengths and come together to create an awesome product. Some of us were brand new to programming and development, and along the way we learned how to use Swift and Xcode and how to design a functional app.
In the future, we hope to bring this app to the visually impaired community and improve the user interface to make it easier to use. We also plan to train our machine learning model further with deeper neural networks and significantly improve the text-to-speech feature. Check the App Store soon to see the launch of our awesome app!
Shruti J, Avishi G, Akshay S
See the linked Devpost for a demo and designs.