
The Lessons We Learned at London's Big AI Show

First: one day, your home devices might be able to work out if you're showing signs of Alzheimer's.
The hologram welcoming attendees to the expo.

Recently, a news organisation's push notification popped up on my phone screen: children in my home country of New Zealand are going to be taught to code by the time they leave high school. I, by contrast, had only just learned how to pronounce "gif".

All around me, technology seems to be advancing out of my comfort zone.

So, to find out more about what I'm up against, I headed to the AI Congress, a huge artificial intelligence expo at London's O2 Arena. When I arrived, an ethereally pretty hologram usher pointed me in the right direction. I was ready for the future.

Here are some things I discovered:

A fake basil plant tended by an AI hydroponic system that monitors, feeds and waters it – a technology I helpfully suggested might have appeal beyond the potted herb market.

Digital eavesdroppers

I was drawn to Jeff Adams, a genial American who was cheerily displaying a presentation on his laptop in Comic Sans.

He told me how, in 2011, Amazon came calling for him. The company's reps related cryptic plans to acquire Yap, the voice-transcription company where Adams was vice-president. That was all Adams was allowed to know until the day he was ushered into a dimly lit room in Seattle.

"They locked the windows, closed the doors, and said: 'Now we're going to tell you our vision.'"

Amazon wanted Adams to build the technology for what would become Alexa.

"My first reaction was, 'It's impossible!'" he recounts. But he gave it a go anyway, putting together the team that created the Alexa technology and the Amazon Echo. Adams now runs Cobalt, which creates language interfaces for other companies, from voice assistant robots and toys, to smartphone apps.

When I asked him what the future of voice tech looked like, Adams recounted an anecdote about a researcher who came to believe Agatha Christie may have suffered from dementia after analysing her final two books and spotting new vocabulary tics.

If early signs of neurological decline could be deciphered from text, he figured, it should be even easier to find them in spoken language. "I thought, 'I want to be able to find out that my parents are showing signs of Alzheimer's from a device that listens to the way they speak in a phone call.'"

Advertisement

Adams' company went on to develop technology that uses machine learning to monitor speech patterns for early warning signs of conditions like dementia or Parkinson's. Adams believes the future is one in which Alexas and Siris not only help with housework and search for nice restaurants nearby, but are also on alert for their user having a stroke.
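
Adams didn't detail how Cobalt's detector works, but the Agatha Christie anecdote hints at the simplest version of the signal: a vocabulary that narrows over time. As a toy sketch only – every name and number here is invented, and a real system would lean on far richer acoustic and lexical features – it might look something like this in Python:

# Toy illustration only – not Cobalt's system. It measures lexical
# diversity (distinct words / total words), the kind of coarse signal
# the Agatha Christie researcher tracked across her late novels.
import re

def type_token_ratio(text: str) -> float:
    """Share of words in a sample that are distinct; drifts down as vocabulary narrows."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def vocabulary_trend(samples: list[str]) -> list[float]:
    """One ratio per speech sample, oldest first; a sustained fall is the crude warning sign."""
    return [type_token_ratio(s) for s in samples]

# Hypothetical transcripts of the same speaker, years apart:
early = "I walked to the market, bought fresh basil, and argued about the price."
late = "I went to the shop and got the thing and went back and got the thing."
print(vocabulary_trend([early, late]))  # roughly [0.92, 0.56] – the later sample repeats itself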

The image of a passive eavesdropping device is compelling, but it reminds me of the recent news from China where facial recognition technology is being used to prevent people from taking too much toilet paper. Isn’t Adams concerned his concepts will be co-opted for some kind of Orwellian plot?

"I’m always worried about that. Like any technology, once the genie is out of the bottle it’s out of the bottle," he says. "I don’t think there’s anyone that wants someone to abuse their tools. But we are all worried about what jobs we take on."

The new data superhighway

In his book Homo Deus, Yuval Noah Harari draws a comparison with religion when he writes that humans, desperate to belong to the new social order, want to plug into this "all-knowing, all-powerful" flow of data, as though it is the meaning of all things – even though algorithms serving other purposes arguably gain more from the information than we do.

Where Harari sees an increasingly one-way flow of traffic, others see a revolving door. One speaker at the event ran a company that designed chatbots able to interpret casual conversation well enough to book corporate travel over a Facebook Messenger-style interface. An audience member asked her whether she thought the bots, in learning from users' prose, might also begin to influence it – after all, it's normal for humans to start talking a bit like the person they were just with.

She didn't have an answer, and seemed a bit nervous at the suggestion. "I don’t think human behaviour is being modified by the bots, right?"

Mind games

The virtual reality tennis game on offer was proving a crowd favourite at the expo, but let's be honest: it’s 2018. Even my grandma has used an Oculus Rift.

Oculus Rift tennis game

The next frontier? Technology interfacing directly with the human brain to transplant our thoughts into the virtual world.

Facebook is one of several companies working on what is being called "neurorealities". In their case, they're trying to create a system capable of typing 100 words per minute straight from your brain. According to UploadVR.com, Facebook's annual developers' conference heard that the technology is yet to be invented, but could work by essentially X-raying the skull for brainwaves and translating the readings into words. If the dream came to pass, it could have life-changing implications for people living with speech-affecting disabilities (and also mark the end of civilisation as we know it?).

"AI technologies could support, augment and perhaps enhance human beings in ways we simply can only imagine now," UK Tech's Sue Daley told me when I asked how worried I should be about a device that can read thoughts. "It has the potential to be a real power for good. For example, AI applications could support people with mental health problems in the future in ways we can’t help them today. However, AI is already raising profound ethics questions. It is vital that we act now to recognise and discuss the ethical questions so that we can ensure the interests of humans and human values remain at the core of the development of AI technologies."

Robot envy

Starship Technologies' head of computer vision and perception, Kristjan Korjus, is behind a fleet of self-driving food delivery drones, whose collection of sensors and cameras is smart enough to let them trundle through complex urban obstacle courses and deliver their fast food packages.

A Starship fast-food delivery drone

But the clever little robots do not navigate entirely unmolested.

It’s mostly curious children, he told me, who get in the way. "Usually they just pat it or talk to it or try to corner the robot somewhere, not letting it move."

Starship has developed an automated warning system that escalates from politely asking humans to stop touching the robot to warning that it will call the police. There has also been an as-yet-unexplained phenomenon of older women disturbing the robots in some areas. ("In Germany, I think they are quite conservative; it may be to protect jobs, I don’t really know.")
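
Korjus didn't walk me through the code, but the behaviour he describes maps naturally onto a small state machine. As a purely hypothetical sketch – the messages, timings and class names below are all invented, not Starship's – it could look like this:

# Hypothetical sketch – Starship hasn't published its implementation.
# The messages, reset window and class below are invented to show the
# escalation Korjus describes: polite request first, police warning last.
import time

ESCALATION = [
    "Hello! Please don't touch me, I'm working.",
    "Please step away from the robot.",
    "If you keep obstructing me, I will notify the police.",
]

class TamperResponder:
    def __init__(self, reset_after_seconds: float = 60.0):
        self.level = 0                   # how far up the warning ladder we are
        self.last_event = float("-inf")  # time of the previous interference
        self.reset_after = reset_after_seconds

    def on_tamper(self) -> str:
        """Return the next warning, escalating on repeated interference."""
        now = time.monotonic()
        # De-escalate if the robot has been left alone long enough.
        if now - self.last_event > self.reset_after:
            self.level = 0
        self.last_event = now
        message = ESCALATION[min(self.level, len(ESCALATION) - 1)]
        self.level += 1
        return message

responder = TamperResponder()
print(responder.on_tamper())  # polite request
print(responder.on_tamper())  # firmer
print(responder.on_tamper())  # police warning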

But Londoners vandalise Starships more than residents of any other city the company operates in, Korjus told me. Meddling has happened "many times" in the south-east London postcodes where it operates. One man was caught kicking a bot. ("The police came and he was very high.")

Korjus believes no one has tried to pilfer a Starship yet, confident that nobody would risk the embarrassment of being caught trousering a camera-laden, tracked device "to steal a burger".

With this, I knew that Korjus – like everyone I had met at the AI Congress – might know a lot more than me about many things, not least the future of fast food delivery. But only one of us regularly studies the human condition in McDonald’s at 3AM.

@taliashadwell