Dependency is painful, no matter what kind, and it's easy to understand why. Imagine you are in Thailand and nobody around you speaks English. You're trying to ask for water, but nobody understands you, so you either hunt down a translator or spend ages cycling through hand gestures, hoping someone will catch on. It's a basic example, but it shows how frustrating depending on other people can be.
Now, imagine how visually impaired people feel: not knowing who or what is around them, or what they are eating until they taste it. They constantly rely on others to guide them through every step of the day so that they stay safe and well taken care of. But at the end of the day, there is only so much another person can do for you, and not everyone has the patience. Worse, the person who cannot see starts to feel like a burden and, in all likelihood, comes to loathe that dependency.
Keeping all this in mind, Marita Cheng and Alberto Rizzoli created Aipoly, an app that tells visually impaired users what lies in their surroundings. The app is free and available on both iOS and the Google Play store. Users simply take a picture with their smartphone, and the app responds within seconds, depending on connectivity and image complexity. This way, they no longer have to rely on other people.
It may not be the first app designed to help the visually impaired, but it is arguably the first to give users complete independence, and that is what sets it apart from every other app. Because it is entirely automated, it also invites a greater degree of discovery and exploration.
So, how does it work? The user takes a photo, which is sent to the cloud for identification. There, an algorithm matches the image against what it has already learnt, using the pattern detection and prediction techniques known as machine learning. Once the image is identified, the result is sent back to the phone, which reads it out loud. To make the output more natural, the app joins key words into a proper sentence: if a picture contains a bicycle and a man, for example, it will read out "man riding a bicycle".
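For readers curious about that loop, here is a minimal Python sketch of the photo-to-spoken-sentence flow. The endpoint URL, the `labels` response field, and the file name are hypothetical stand-ins; Aipoly has not published its API, so this only illustrates the general pattern, not the app's actual implementation.

```python
import requests  # third-party HTTP client (pip install requests)

# NOTE: this URL and the response shape below are hypothetical stand-ins.
# They illustrate the photo -> cloud recognition -> spoken sentence flow
# described above, not Aipoly's real (unpublished) API.
RECOGNITION_URL = "https://api.example.com/v1/recognize"

def describe_photo(image_path: str) -> str:
    """Send a photo to a cloud recognizer and build a short sentence."""
    with open(image_path, "rb") as f:
        # Upload the image; the heavy lifting (matching against what the
        # model has already learnt) happens server-side.
        response = requests.post(RECOGNITION_URL, files={"image": f})
    response.raise_for_status()

    # Assume the service returns ranked key words, e.g.
    # {"labels": ["man", "riding", "bicycle"]}
    labels = response.json()["labels"]

    # Join the key words into a sentence for text-to-speech, mirroring
    # the app's "man riding a bicycle" style of output.
    return " ".join(labels)

if __name__ == "__main__":
    print(describe_photo("photo.jpg"))  # a TTS engine would speak this aloud
```

The round trip to the cloud is also why the article notes that response time depends on connectivity and image complexity: the phone only captures and speaks, while recognition happens on a remote server.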
The app isn't flawless, which is to be expected: users have reported that it fails to correctly distinguish men from women and cannot identify facial expressions. These flaws can be ironed out over time, though, because that is the strength of a machine-learning algorithm: it improves as it is trained on more examples.
So don't hold it against the app if it misidentifies your gender, because hey, it's just an app after all!
We wish the developers the best of luck; they are serving humanity in unprecedented ways!
H/T: Tech Crunch, The Melbourne Engineer