Remember computers in the ‘80s? Bulky machines with not much under the hood compared to the processing power you’ll find today. Nevertheless, back in 1987, Apple was already imagining how intelligent personal assistants could help us in our daily lives.

We are still far from that level of natural language conversation with machines, but there has been huge progress in the last few years thanks to machine learning research. Smart voice assistants like Siri, Google Now, Alexa and Cortana are able to answer many questions by linking information from different sources such as Wikipedia or TripAdvisor. Even more interestingly, they can perform additional tasks like organising your calendar, booking a table at a restaurant, playing music and much more. These intelligent assistants are not only available on smartphones, but also in smart-home devices like Amazon Echo or Google Home, which are suddenly becoming part of the family.

Some fear these assistants becoming HAL 9000 or Big Brother, so it’s no surprise that there are open-source alternatives like Mycroft and Jasper that move away from the big corporations. Others, however, see an extraordinary opportunity for a truly hands-free experience across many applications. These smart-home devices are especially interesting for elderly and disabled people who may find it difficult, or even impossible, to use smartphones, but still need access to all kinds of information. The seemingly simple change from touch to voice will undoubtedly open up new kinds of interaction for those who can benefit most.

Well-funded mobility providers like Uber already offer an integration layer with these services, which allows customers to order a taxi by simply asking. As this becomes an accepted part of modern society, those that don’t provide this functionality will fall behind and cease to be relevant to the next generation of consumers. With Amazon announcing today the addition of Alexa to its Fire TV Stick, bringing voice assistance to UK TV screens, there are plenty of signs that this technology isn’t going away anytime soon.

At Passenger, we have already integrated our platform with Amazon Echo and are working on more integrations to put public transport operators at the heart of smart cities. This allows customers to simply ask questions like “when is the next bus to work?” or “how do I get to the dentist?”. Using the transport network information already stored in Passenger Cloud and our journey planner system, we are able to provide an answer within a few milliseconds. The application of AI-powered voice interfaces is potentially huge inside the home, let alone when you consider them as part of smart stations or stops.
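To give a flavour of how a voice query like “when is the next bus to work?” flows through such an integration, here is a minimal sketch of an Alexa-style intent handler. The intent name, slot names, timetable data and `plan_next_departure` lookup are all hypothetical placeholders, not Passenger’s actual API; a real skill would call out to the journey planner instead of a hard-coded dictionary.

```python
# Hypothetical sketch: handling an Alexa-style "next bus" intent.
# All names and data below are illustrative, not a real integration.

def plan_next_departure(destination):
    """Stand-in for a journey planner lookup (hard-coded sample data)."""
    timetable = {
        "work": ("X1", "08:12"),
        "dentist": ("7A", "08:25"),
    }
    return timetable.get(destination.lower())

def handle_intent(request):
    """Turn an Alexa-style intent request into a spoken response."""
    intent = request["request"]["intent"]
    destination = intent["slots"]["destination"]["value"]
    result = plan_next_departure(destination)
    if result is None:
        speech = f"Sorry, I couldn't find a route to {destination}."
    else:
        route, time = result
        speech = f"The next bus to {destination} is the {route} at {time}."
    # Alexa skills respond with JSON containing the text to speak aloud.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Example request, shaped like the JSON Alexa sends to a skill endpoint.
request = {
    "request": {
        "intent": {
            "name": "NextBusIntent",
            "slots": {"destination": {"value": "work"}},
        }
    }
}
print(handle_intent(request)["response"]["outputSpeech"]["text"])
# → The next bus to work is the X1 at 08:12.
```

The voice assistant handles the speech recognition and slot extraction; the skill’s job reduces to mapping a parsed intent onto the journey planner and returning text for the assistant to speak.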

If you are interested in finding out more, please get in touch to arrange a demonstration of how we’re using Passenger’s new functionality to make it easier than ever for customers to choose public transport.