An accident in a pool left Chieko Asakawa blind at the age of 14. For the past three decades she’s worked to produce technology – now with a big focus on artificial intelligence (AI) – to change life for the visually impaired.
“When I started there was no assistive technology,” Japanese-born Dr Asakawa says.
“I could not read any details by myself. I couldn’t go anywhere by myself.”
Those “uncomfortable experiences” set her on a path of discovery that began with a computer technology course for blind people. A job at IBM soon followed, where she earned a doctorate and began her pioneering work on accessibility that continues today.
She’s behind early digital Braille innovations and developed the world’s first practical web-to-speech browser. Such browsers are commonplace now, but 20 years ago Dr Asakawa gave blind internet users in Japan access to more information than they’d ever had before.
Now she and other technologists are looking to use AI to create tools for visually impaired people.
For example, Dr Asakawa has developed NavCog, a voice-controlled smartphone app that helps blind people navigate complicated indoor spaces.
Low-energy Bluetooth beacons are installed roughly every 10m (33ft) to create an indoor map. Sampling data is collected from those beacons to create “fingerprints” of a particular location.
“We detect user position by comparing the user’s current fingerprint to the server’s fingerprint model,” she says.
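In broad strokes, fingerprint positioning of this kind can work by storing, for each mapped location, the signal strengths (RSSI) a phone sees from nearby beacons, then matching a live scan to the closest stored fingerprint. The sketch below is an illustration of that general idea only – the location names, beacon IDs, and signal values are invented, and this is not IBM’s actual NavCog implementation:

```python
import math

# Hypothetical server-side fingerprint database:
# location -> {beacon_id: typical RSSI in dBm at that spot}
FINGERPRINTS = {
    "lobby":    {"b1": -60, "b2": -75, "b3": -90},
    "corridor": {"b1": -80, "b2": -62, "b3": -70},
    "elevator": {"b1": -95, "b2": -78, "b3": -58},
}

def rssi_distance(scan, fingerprint, missing=-100):
    """Euclidean distance between two RSSI vectors over the union of beacons.

    Beacons absent from a scan are treated as very weak (missing dBm).
    """
    beacons = set(scan) | set(fingerprint)
    return math.sqrt(sum(
        (scan.get(b, missing) - fingerprint.get(b, missing)) ** 2
        for b in beacons
    ))

def locate(scan):
    """Return the stored location whose fingerprint best matches the live scan."""
    return min(FINGERPRINTS, key=lambda loc: rssi_distance(scan, FINGERPRINTS[loc]))

# A live scan with strong signal from beacon b2 matches the corridor fingerprint.
print(locate({"b1": -82, "b2": -60, "b3": -72}))  # → corridor
```

Real systems refine this with averaging over time, k-nearest-neighbour matching, and motion models, but the core step – comparing a user’s current fingerprint to a server-side fingerprint model – is the one Dr Asakawa describes.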
Collecting large amounts of data creates a more detailed map than is available in an app like Google Maps, which does not work indoors and cannot provide the level of detail blind and visually impaired people need, she says.
“It can be really helpful, but it cannot navigate us precisely,” says Dr Asakawa, who is now an IBM Fellow, part of a prestigious group that has produced five Nobel Prize winners.
NavCog is currently in a pilot phase, available at several sites in the United States and one in Tokyo, and IBM says it is close to making the app available to the general public.
‘It gave me more control’
Pittsburgh residents Christine Hunsinger, 70, and her husband Douglas Hunsinger, 65, both blind, trialled NavCog at a hotel in their city during a conference for blind people.
“I felt more like I was in control of my own situation,” says Mrs Hunsinger, now retired after 40 years as a government bureaucrat.
She uses other apps to help her navigate, and says while she needed to use her white cane alongside NavCog, it did give her more freedom to move around unfamiliar areas.
Mr Hunsinger agrees, saying the app “took all the guesswork out” of finding places indoors.
“It was really liberating to travel independently on my own.”
A lightweight ‘suitcase robot’
Dr Asakawa’s next big challenge is the “AI suitcase” – a lightweight navigational robot.
It guides a blind person through the complex terrain of an airport, giving directions as well as useful information on flight delays and gate changes.
The suitcase has an embedded motor so it can move autonomously, an image-recognition camera to detect its surroundings, and lidar – light detection and ranging – for measuring distances to objects.
When stairs need to be climbed, the suitcase informs the user to pick it up.
“If we interact with the robot it could be lighter, smaller and lower cost,” Dr Asakawa says.
The current prototype is “quite heavy”, she admits. IBM is pushing to make the next version lighter and hopes it will eventually be able to hold at least a laptop. It aims to pilot the project in Tokyo in 2020.
“I want to really enjoy travelling alone. That’s why I want to focus on the AI suitcase even if it is going to take a long time.”
IBM showed me a video of the prototype, but as it’s not ready for release yet the firm was reluctant to release images at this stage.
AI for ‘social good’
Despite its ambitions, IBM lags behind Microsoft and Google in what it currently offers the visually impaired.
Microsoft has committed $115m (£90m) to its AI for Good programme and $25m to its AI for Accessibility initiative. For example, Seeing AI – a talking camera app – is a central part of its accessibility work.
And later this year Google reportedly plans to launch its Lookout app, initially for the Pixel, which will narrate and guide visually impaired people around specific objects.
“People with disabilities have been overlooked when it comes to technology development as a whole,” says Nick McQuire, head of enterprise and AI research at CCS Insight.
But he says that has been changing in the past year, as big tech companies push hard to invest in AI applications that “improve social wellbeing”.
He expects more to come in this space, including from Amazon, which has significant investments in AI.
“But it’s really Microsoft and Google … in the last 12 months that have made the big push in this area,” he says.
Mr McQuire says the focus on social good and disability is linked to “trying to showcase the benefits [of AI] because of a lot of negative sentiment” around AI replacing human jobs or even taking over entirely.
But AI in the disability space is far from perfect. Much of the investment today is about “proving the accuracy and speed of the applications” around vision, he says.
Dr Asakawa concludes simply: “I’ve been tackling the problems I found when I became blind. I hope these problems can be solved.”
- Follow Technology of Business editor Matthew Wall on Twitter and Facebook
- Click here for more Technology of Business features