Five aspects you need to know about Artificial Intelligence #AI
During the past few weeks, I had the pleasure of attending and speaking at various conferences where, in one way or another, Artificial Intelligence was a key topic of discussion. These are my key takeaways:
1. Fear of Artificial Intelligence is real
There is a lot of misunderstanding about Artificial Intelligence, partly fuelled by fear of the unknown and partly driven by the media. Even a recent post on the WSJ’s blog “sells” AI on its economic merits, like efficiency and cost-cutting. That is clearly not an inclusive way of looking at the progress made by AI, and it does not present AI as a technology whose economic benefits might help bridge the gap between the wealthiest and the poorest in our society. Even when giving examples from retail, the WSJ focuses on stock management and predictive demand analytics, which, to non-technical people, sounds like an algorithm trying to get inside their heads. And yet, in retail, one of the big advances in Artificial Intelligence is the ability to deploy cashier-less stores, where shoppers don’t have to queue at the exit to pay for their goods.
The only effective approach to reducing the fear of AI pivots around two axes. First and foremost, we must educate the public on what AI is, what it does, and what it is capable of doing. Second, we need to let the public experience innovation bit by bit, building a habit so that the fear factor decreases: in this sense, virtual assistants and the smartification of everyday devices can be the perfect playing ground for doing just that.
2. Focus on the complementarity of AI, not the replacement of human tasks
Because of the above-mentioned fear, people tend to be suspicious of AI. As a result, Artificial Intelligence-based products and projects have a much higher success rate when they focus on complementing human tasks rather than replacing them. Take healthcare, where there seems to be an obsession with the question of whether AI will ever replace medical professionals. In practice, Artificial Intelligence is playing a complementary role to nurses’ and doctors’ abilities: for example, by improving diagnostics in telemedicine, where patients “self-assess” through smart devices they were never really trained to use. AI is also playing an important role in research, identifying patterns invisible to the human eye in the analysis of very large datasets of medical records. And AI is making the doctor-patient relationship more convenient: virtual assistants à la Alexa are being given “skills” to listen, take notes, summarize the doctor’s instructions to the patient, and convert those instructions into a digital prescription in the patient’s name. This is a clear win-win-win: it’s efficient for the doctor, who does not have to dictate, write, and then read back and explain everything to the patient; it’s convenient for the patient, who has a record of the discussion to go back to in case of doubt; and it’s useful for the healthcare operator, which can more conveniently track and monitor visits.
3. Sharpen your insights, local nuances do matter
Robo-investing is one of the fastest-growing segments in Fintech: consumers are willingly handing the management of their savings over to algorithms. I personally choose not to do that, and I would probably prefer not to be diagnosed by a silicon-based entity either, although I am open to a real doctor using AI to improve the diagnosis of my illness by having it look at my medical data. This is just one example of a far more complex landscape: consumers, patients, and shoppers do not all behave the same way. There are plenty of cultural, social, and economic nuances that shape our attitude toward embracing such drastically changing technologies. This is why AI projects are more successful when they are based on strong, relevant, local insights. It also implies that the key benefits are sometimes functional (e.g., reducing diagnostic error by X%) and other times emotional (e.g., doctors showing their patients they are digitally savvy).
4. In “change,” deliver low-hanging fruit
An AI project is a change management project, and as such it needs low-hanging fruit to reduce the risk of failure. In a nutshell: better to start with a smaller scope, get people to appreciate the benefits, and build a habit around the technology. This will also help reduce the fear of AI and convert users into ambassadors for more drastic, long-term changes. Any large organization has process aberrations, where cumbersome tasks are forced on users because of organizational structures: that is usually a good place to begin with AI. In private clinics, for example, patients almost never see the same doctor twice: their information is on file, and the doctor needs to read, understand, and ask questions about the patient’s health while forging an empathic bond with him or her, all within the very short time slot that the clinic’s scheduling allows. AI can help the doctor by summarizing the information, making the process smoother, and even suggesting the right questions to ask. In retail, we have seen how AI helps retailers develop a cashier-less experience, building the ultimate convenience; but AI can also help build better retail experiences in a local context. Nike Live, for example, is a pilot program where local shopping data at ZIP-code level is used to tailor the experience to the wishes and habits of the local Nike community: store layout, services, and even merchandise are selected based on local data.
5. AI is as good as the training it gets
You are probably already aware that Amazon killed its AI-based recruitment project because it showed hiring biases. According to the experts I interviewed (none of whom worked with Amazon, so none has insider knowledge), the most common problem with AI is the training set: artificial intelligence is only as good as its training data. In other words, if your organization shows some cultural bias, your AI is likely to show that bias too, unless you complement your training set with external data.
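To make that point concrete, here is a deliberately toy sketch (the data and the “model” are entirely hypothetical, and bear no relation to Amazon’s actual system): a naive model that simply learns the majority outcome per group from skewed historical hiring records will reproduce that skew as its decision rule.

```python
from collections import Counter

# Hypothetical, skewed historical hiring records: (group, hired?) pairs.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def train(records):
    """A naive 'model': learn the majority hiring outcome per group."""
    counts = {}
    for group, hired in records:
        counts.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # -> {'A': True, 'B': False}: the historical skew becomes the rule
```

No algorithm here is “biased” in itself; the bias lives entirely in the training data, which is exactly why complementing an internal dataset with external data matters.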