Over the last few years the phrase 'artificial intelligence' has been thrown around a lot, whether in Hollywood films or in viral videos on social media. It seems to be the latest craze in technology and has been tipped to be the future.
As a graduate in the Life team at comparethemarket.com, I have been fortunate enough to work within a company that has a passion for technology that can disrupt the market. To learn more about disruptive and progressive technologies such as Artificial Intelligence (AI), I attended Microsoft Future Decoded at ExCeL London with two colleagues.
Future Decoded is an annual conference held by Microsoft. It aims to showcase the future of technology and, unsurprisingly, a large focus this year was on AI and its real-world applications. Throughout the day we explored the expo, attended the keynotes, and I opted in to the 'AI Technology' path of seminars. The AI path included three talks from Microsoft evangelists covering Machine Learning, Cognitive Services and the Microsoft Bot Framework.
Seminar 1: Can machine learning predict whether you’ll come to my talk?
Speaker: Amy Nicholson
AI, machine learning, data science, deep learning, expert systems: all of these terms are used interchangeably to try to describe what AI actually is. After just a couple of minutes, the big issue became immediately clear: does anybody actually know what AI is? Ask 100 people for their definition of intelligence and the chances are each answer will be different; that was exactly the point being made. Amy Nicholson set out to break down what AI actually means. When you think of artificial intelligence you may think of technology, data, algorithms and pattern recognition, but once you achieve a little success, shouldn't you also take into account psychology, sociology and ethics?
Machine learning is one small division of AI. I took away a relatively simple definition: "it gives the computer the ability to learn without being explicitly programmed to do so". This allows the computer to essentially grow, a very human trait, and to understand and process data it has never encountered before. To some, this alone could be considered 'AI'.
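To make that definition concrete, here is a minimal sketch of "learning without being explicitly programmed": a one-nearest-neighbour classifier. No decision rules are hard-coded; the prediction comes entirely from the labelled examples the function is shown. The data below is invented purely for illustration and has nothing to do with the Azure tooling discussed in the session.

```python
# A 1-nearest-neighbour classifier: the simplest possible "learner".
# The decision rule is never written down explicitly; it emerges from
# whatever labelled examples we supply.

def nearest_neighbour(training_data, point):
    """Predict a label for `point` from (features, label) examples."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda ex: distance(ex[0], point))
    return label

# Hypothetical examples: (hours of study, hours of sleep) -> outcome.
examples = [((8, 7), "pass"), ((2, 4), "fail"),
            ((7, 8), "pass"), ((1, 6), "fail")]
print(nearest_neighbour(examples, (6, 7)))  # → pass
```

Swap in different examples and the same code learns a different rule, which is the whole point of the definition above.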
In the session, Amy used Microsoft's own machine learning tool, powered by Azure, for a live demonstration of how the platform can use existing data to predict future trends. The tool uses algorithms to analyse patterns, behaviours and trends in data, allowing it to 'forward think'. The demonstration took attendance data from previous Future Decoded sessions to predict the demographics and volume of people at the 2016 conference. Judged against the sample shown, and the people in the room as a test, the results proved fairly accurate on the day.
The demonstration made clear that machine learning can be a very useful method, but even with the Azure tool it is very complex and requires a large amount of clean data. It is also glaringly obvious that it takes a skilled data scientist, and a lot of time and effort, to turn raw data into something useful. Although not quite perfected yet, machine learning is just one cog in the machine that is AI.
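The kind of trend prediction shown in the demo can be sketched, in miniature, as fitting a straight line to past figures and extrapolating. The attendance numbers below are made up for illustration; the real demonstration used Azure's platform and actual Future Decoded data, not this toy closed-form regression.

```python
# Ordinary least-squares fit of y = slope * x + intercept,
# then extrapolation to a future year. Figures are hypothetical.

def fit_line(xs, ys):
    """Closed-form least squares for a single feature."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2013, 2014, 2015]        # hypothetical past conferences
attendance = [4000, 5500, 7000]   # hypothetical attendee counts
slope, intercept = fit_line(years, attendance)
print(round(slope * 2016 + intercept))  # → 8500
```

Even this toy version shows why clean data matters: a single mistyped attendance figure would drag the fitted line, and therefore the prediction, well off course.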
Seminar 2: Cognitive services in action.
Speaker: Martin Kearn
Cognitive services, or cognitive computing, can best be described as the simulation of human thought processes. It uses self-learning systems that leverage data mining, pattern recognition and natural language processing (such as IBM's Watson) to mimic the way the human brain works.
Martin Kearn's session was based on the Microsoft Cognitive Services APIs, which have been designed to aid everyday tasks. There are too many to run through in this blog post, but they can all be found at https://www.microsoft.com/cognitive-services/en-us/apis; they span vision, speech, language, knowledge and search.
As a relative newcomer to cognitive services, I was fascinated by how far technology has progressed to become an active part in improving the accessibility of people's everyday lives. I didn't realise that by using Google Translate abroad on my holidays I had been using APIs that convert foreign-language text into English on my camera screen, nor that when I unlock my laptop by facial recognition it is using an API to detect my face. The APIs on show in the session could detect new people or objects entering a video, respond only to a single user by recognising and matching their voice, and correct commonly confused words such as 'their' and 'there' purely from the context of the sentence. With the huge range of applications for cognitive services, the future is certainly an exciting one.
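The 'their'/'there' example is worth unpacking, because it shows the core idea behind context-based correction. The sketch below is emphatically not the Cognitive Services spell-check API (which is called over REST); it is a toy in which the choice of homophone is driven by the word that follows it, with the "cue" word lists invented for illustration.

```python
# A toy context-based homophone corrector. The cue sets below are
# hypothetical stand-ins for the statistical language models a real
# cognitive service would use.

FOLLOWERS = {
    "their": {"car", "house", "dog", "idea"},   # possessive -> noun follows
    "there": {"is", "are", "was", "were"},      # existential -> verb follows
}

def correct_homophone(words, index):
    """Replace the word at `index` with whichever candidate best
    fits the word immediately after it."""
    following = words[index + 1] if index + 1 < len(words) else ""
    for candidate, cues in FOLLOWERS.items():
        if following in cues:
            return words[:index] + [candidate] + words[index + 1:]
    return words  # no confident correction, leave the text alone

print(" ".join(correct_homophone("I think there dog is lost".split(), 2)))
# → I think their dog is lost
```

A real service replaces those hand-written cue sets with models trained on enormous corpora, which is exactly where the machine learning from the first seminar comes back in.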
But how does this link with AI? I believe the simplest answer is that we are already using key components that make up the AI machine. Just like a vehicle, AI is made up of many different parts, some moving, some still, some vital and some optional, but all play a part and build a better experience for the user. The Microsoft APIs on display at Future Decoded were just another small cog in that machine. I could only think of the marginal-gains approach made famous by Dave Brailsford and his work with the GB cycling team in the run-up to the 2012 Olympics: by improving many different areas by a small margin, you can build a fully working, innovative and hugely progressive machine.
Seminar 3: Introducing the Microsoft Bot Framework
Speakers: Simon Michael and Jamie Dalton
Simon Michael and Jamie Dalton gave a truly inspiring introduction to the Microsoft Bot Framework. The framework is "a comprehensive offering to help build and deploy high quality bots", with a bot being an application that can perform one or more automated tasks, perhaps through the medium of conversation. This was the last session of the day, and instead of feeling tired, uninterested and wanting to get the tube home, I was ready, I was keen and I was excited to see what was about to unfold.
In the pre-session notes, Simon and Jamie mentioned that developers writing bots often face the same issues: bots require basic I/O; they must have language and dialog skills; they must be performant, responsive and scalable; and they must connect to users in whatever format the user chooses. From my experience with bots to date I could agree with these points, and I was intrigued to find out how they suggested tackling them.
The session started with theories on why bots are becoming so prevalent. 85% of consumers' time on smartphones is spent in just five apps, and the large majority of those are messaging or communication apps. In 2015, for the first time ever, messaging apps became more popular than social networking apps. These trends are no fluke; they support the theory that consumers want more conversation in their everyday lives. Enter the bot.
It was at this point that the particularly interesting things started to happen. Simon and Jamie ran through how to develop a good bot. With a bit of know-how anybody can build a bot, but not every situation calls for one; the session made clear that you must understand where a bot is relevant in order to build a good one. This is where all the sessions, keynotes and stalls in the expo came together. It's about making the bot personable, friendly and usable. By using machine learning to improve over time, calling on relevant APIs, targeting relevant channels such as Facebook Messenger, Skype or WhatsApp, using a natural language processor and accurately gauging user responses, you can build not just a good bot, but a great one.

A good bot must also be relevant. Like any marketing campaign, any conversation with a friend or any TV programme you love, it must appeal to you on a personal level; that is the trick. Technology will do whatever you program it to do, but it must be fit for purpose on far more levels than being technically astute. I believe one of the biggest barriers to people trying new technology today is trust. With cybercrime rife, security in the mainstream media and hacks happening every day, it is more important than ever to build that trust. If people do not trust a bot, and do not respond to it as they would to another person, it can never be fully accepted.
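The basic shape of a bot, recognise what the user wants, then respond, can be sketched in a few lines. To be clear, this is not the Microsoft Bot Framework itself (a full SDK with connectors for channels like Messenger and Skype); it is a toy keyword-based intent matcher, with the intents and replies invented for illustration.

```python
# A toy intent-matching bot: the minimal version of "an application
# that performs automated tasks through conversation". The intent
# table below is hypothetical.

INTENTS = {
    ("hi", "hello", "hey"): "Hello! How can I help you today?",
    ("price", "quote", "cost"): "I can fetch a quote. Which product?",
    ("bye", "goodbye", "thanks"): "Glad I could help. Goodbye!",
}

def respond(message):
    """Match the message against known intents; fall back gracefully."""
    # Lowercase and strip trailing punctuation so "cost?" matches "cost".
    words = {w.strip("?!.,") for w in message.lower().split()}
    for keywords, reply in INTENTS.items():
        if words & set(keywords):
            return reply
    return "Sorry, I didn't understand that. Could you rephrase?"

print(respond("Hello there"))             # → Hello! How can I help you today?
print(respond("How much does it cost?"))  # → I can fetch a quote. Which product?
```

A real framework replaces the keyword table with a natural language processor and handles state, channels and scaling, which is exactly the list of hard problems Simon and Jamie set out at the start.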
My day at Future Decoded was an absolute eye opener. I loved every second of it. To be able to see what leading experts in the field are working towards on subjects I am passionate about was incredible. To me, AI is the future, and I’m excited to be part of it.