We interviewed Kartik Hosanagar, the John Hower Professor of Technology and Marketing at the Wharton School and author of A Human's Guide to Machine Intelligence.
The interview covered the role of AI in marketing and retail, the opportunities and risks with marketing automation, and the future of AI in business.
Kartik’s research focuses on the digital economy, in particular the impact of analytics and algorithms on consumers and society, Internet media, Internet marketing, and e-commerce. He has been recognized as one of the world’s top 40 business professors under 40, and his past consulting and executive education clients include Google, American Express, Citi, and others.
Q: How do algorithms and AI play a part in our daily lives? And why should marketers care?
Kartik: Algorithms touch our lives every day, from how we choose products to purchase (Amazon’s “People who bought this also bought”) and movies to watch (Netflix’s recommendations) to whom we date or marry (Match.com or Tinder matches). Consider these stats: nearly 80% of viewing hours streamed on Netflix originate from automated recommendations. By some estimates, nearly 35% of sales at Amazon originate from automated recommendations. Similarly, Google’s ranking algorithm is a huge driver of the products and media we are exposed to. Algorithms also impact deeply personal decisions. For example, the vast majority of matches on dating apps like Tinder are initiated by algorithms.
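At its core, a “people who bought this also bought” feature can be as simple as counting co-purchases. The sketch below uses hypothetical shopping baskets and a basic co-occurrence count; real recommenders at Amazon or Netflix are far more sophisticated, so treat this only as an illustration of the idea.

```python
from collections import Counter

# Hypothetical purchase baskets (sets of items bought together).
baskets = [
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card"},
    {"camera", "sd_card"},
    {"laptop", "mouse"},
]

def also_bought(item, baskets, top_n=2):
    """Rank other items by how often they co-occur in baskets with `item`."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return [other, ] and [other for other, _ in counts.most_common(top_n)]

def also_bought(item, baskets, top_n=2):
    """Rank other items by how often they co-occur in baskets with `item`."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return [other for other, _ in counts.most_common(top_n)]

print(also_bought("camera", baskets))  # ['sd_card', 'tripod']
```

Here `sd_card` co-occurs with `camera` three times and `tripod` once, so it ranks first.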
To a large extent, the question of how marketers can ensure their products are discovered and chosen by consumers will become a question of how they can ensure their products are discovered by the algorithms curating consumers’ lives.
As Artificial Intelligence in general, and Machine Learning in particular, advances further, intelligent algorithms will increasingly curate our lives and drive a significant majority of our decisions about what products we buy, what media we consume, and so on.
Q: For readers who are less familiar with these terms, what’s the distinction between AI and Machine Learning?
Kartik: AI means getting computers to do all the things it takes human intelligence to do: reasoning, understanding language and the visual world, navigating, and manipulating objects. Machine learning is a sub-field of AI that deals with the ability to learn. Learning is the one capability that underlies all the others. If you had a robot that was as good as humans at everything but couldn’t learn, five minutes later it would have fallen behind. So learning is one of the most important aspects of modern AI.
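The distinction can be made concrete with a toy example: instead of hard-coding a rule, a learning system estimates it from examples. Below, a least-squares fit recovers the slope of a line from hypothetical noisy data generated around y = 2x; this is a minimal sketch, not a real ML pipeline.

```python
# Hypothetical noisy observations of a rule roughly y = 2 * x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

# "Learning" here = estimating the rule from data: the least-squares
# slope for a line through the origin is w = sum(x*y) / sum(x*x).
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(round(w, 2))  # close to 2.0, recovered from the examples alone
```

The program was never told the rule is “multiply by 2”; it inferred it from data, which is the essence of machine learning.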
Q: AI has become such a big buzz word. What are the practical applications in marketing?
Kartik: AI has been used to drive online ad targeting for several years. More recently, marketing automation has been used to help firms scale their go-to-market strategies, for example by managing leads, identifying which customers are likely to leave, and so on. Today there are many marketing automation tools available to marketers, including SalesLoft, Marketo, Gainsight, and others. So marketers really need to ensure they are making the most of the automation tools emerging in the market.
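Identifying customers who are likely to leave is a classic churn-scoring problem. The sketch below squashes a few engagement signals through a logistic function to produce a risk score; the feature weights and customer data are entirely hypothetical, chosen only to illustrate the shape of such a model.

```python
import math

# Illustrative weights, not from any real model: more days since last
# login and more support tickets raise risk; higher spend lowers it.
def churn_score(days_since_login, support_tickets, monthly_spend):
    z = 0.05 * days_since_login + 0.4 * support_tickets - 0.02 * monthly_spend
    return 1 / (1 + math.exp(-z))  # logistic squashes the score into (0, 1)

# Hypothetical customers: (days_since_login, support_tickets, monthly_spend)
customers = {
    "alice": (2, 0, 120),   # recently active, spending steadily
    "bob": (45, 3, 10),     # dormant, frustrated, low spend
}

at_risk = [name for name, f in customers.items() if churn_score(*f) > 0.5]
print(at_risk)  # ['bob']
```

A real tool would learn the weights from historical churn data rather than hand-setting them, but the scoring step looks much like this.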
Q: How about AI applications in retail?
Kartik: There are many retail applications of AI, ranging from conversational customer interactions through chatbots to product recommendations in e-commerce to robots that help improve in-store operations. Chatbots can currently assist with rather simple customer service requests, e.g., providing information on product availability and store locations, and sending notifications about sales, restocked items, and special events. Their capabilities will significantly expand as they become personalized shopping assistants for customers. Besides chatbots, other applications in retail include personalized product recommendations in online shopping, outfit recommendations, robot-based inventory-taking in stores to improve shelf replenishment processes and correct discrepancies between the actual shelf and the planogram, and many other applications to improve store operations.
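The “rather simple” requests these chatbots handle today often reduce to keyword-based intent matching with canned responses. Here is a minimal sketch of that pattern; the intents, keywords, and answers are all hypothetical, and production bots use far richer language models.

```python
# Each intent maps trigger keywords to a canned answer (all illustrative).
INTENTS = {
    "availability": (["stock", "available", "in store"],
                     "Yes, that item is in stock."),
    "location": (["where", "location", "address"],
                 "Our store is at 123 Main St."),
    "hours": (["open", "hours", "close"],
              "We are open 9am-9pm daily."),
}

def reply(message):
    """Return the canned answer for the first intent whose keyword matches."""
    text = message.lower()
    for keywords, answer in INTENTS.values():
        if any(k in text for k in keywords):
            return answer
    return "Sorry, let me connect you to a human agent."

print(reply("Is the blue jacket available?"))  # the availability answer
```

The hand-off to a human agent in the fallback branch is exactly why today’s bots are limited to simple requests: anything outside the keyword list escapes the script.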
Q: Are there risks as we make more and more decisions through AI?
Kartik: We tend to think of algorithms as objective decision-makers, but they are in fact prone to many of the same biases we associate with humans. A recent example is the use of algorithms in US courtrooms to compute risk scores, such as a defendant’s risk of reoffending. These scores are then used by judges and by parole and probation officers to make criminal sentencing, bail, and parole decisions. Recent research showed that these algorithms were biased against black defendants. Other examples include sexist resume-screening algorithms used by recruiters, chatbots that use offensive language, social media newsfeed algorithms that promoted fake news stories around elections, and many more. Even in marketing there are risks, for example biases in which groups receive better prices or discounts from firms.
Q: So, how concerned should we be that AI and algorithms have biases?
Kartik: The biggest cause for concern is not that algorithms have biases; in fact, algorithms are on average less biased than humans. The issue is that we are more susceptible to biases in algorithms than in humans, because human biases and rogue behaviors don’t scale the way rogue software can. A bad judge or doctor can affect the lives of thousands of people; bad code can, and does, affect the lives of billions. So I recommend that firms be especially careful when they use AI to make critical decisions such as pricing, recruiting, and credit approvals.
Q: What are the steps firms can take to mitigate risks with automated decisions?
Kartik: One of my main suggestions to firms that want to invest heavily in AI is to also build a strong audit process. Automated decisions should be audited by a team that is independent of the team that built the algorithm. Audits should explicitly test for a number of issues, such as biases in the training data or in the machine’s decisions, as well as unwarranted generalizations from the data. Another important issue is transparency. Some of the best-performing machine learning algorithms today are highly opaque, so companies will need to invest extra effort to understand the factors driving the recommendations or decisions made by black-box AI systems.
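One concrete check such an audit team might run is comparing a model’s favorable-decision rates across groups (a demographic-parity gap). The decision log below is hypothetical, and real audits examine many additional fairness metrics, but this sketches the mechanics of one test.

```python
# Hypothetical decision log: (group, outcome) where 1 = approved.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rates(decisions):
    """Fraction of approvals per group."""
    totals, approved = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + outcome
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # a large gap would be flagged for human review
```

An independent audit team would run checks like this on held-out decision logs and escalate when the gap exceeds a policy threshold, rather than trusting the model team’s own evaluation.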
Q: In movies and TV shows, AI has a history of overriding human control and boundaries, which is everyone’s greatest fear. Westworld, War Games, and 2001: A Space Odyssey are just a few classic examples. Could these scenarios ever happen, and what checks and balances are in place to prevent them?
Kartik: Today’s AI is what we call weak AI. These systems are good at one task, but they don’t truly have general intelligence. A chess-playing algorithm can only play chess; it cannot converse with people and navigate the physical world. A chatbot can chat, but it cannot play chess or recognize people. Eventually, researchers may figure out how to build strong AI: AI systems that have general intelligence and are capable of reasoning on their own across a wide variety of situations. If we reach that kind of superintelligence, anything is possible. We might end up in a utopia where our AI friends invent everything we could ever invent well before us and solve all of humanity’s problems. Or such superintelligent AI might pose an existential threat to humans. So, yes, stories about AI overriding human boundaries are within the realm of possibility.
That said, before we worry about those scenarios in the distant future, we first have to tackle the more immediate dangers facing us. Soon, we will have completely autonomous algorithms that invest our savings and diagnose and treat diseases. We will have autonomous cars driving us around, and military drones and robot armies fighting wars. The dangers from these AI systems will not be existential threats to all of humanity, but missteps from algorithms that don’t understand notions of ethics and fairness will nonetheless be very costly. These dangers are more immediate and real, and they need our attention before we worry about hypothetical cases of AI wiping out humanity.
Q: How do you see algorithmic applications and AI advancing in the next five years? Ten years?
Kartik: We have to remember that the entire field of AI is very young, just a little over sixty years old. As recently as ten years ago, AI had mostly not lived up to its promise because most of its successes were confined to games such as chess and Jeopardy. But things have changed dramatically in the past few years. Most people find it hard to grasp the rate at which AI is improving because the pace of change is exponential, not linear. That will mean huge changes in the next few years.
In the next five years, we will see lots of driverless cars on the road. We will have AI assistants running our lives for us, and AI companions and chatbots that we interact with. But the biggest development in the next ten years, one I hope the book will also help contribute towards, is the creation of algorithms that are fair, ethical, and transparent. That may well be the most important advance in AI in the next ten years.