AI: why now? And what is the impact on marketing?


AI before Marketing: Historical Foundations

In computer science, researchers tend to trace the origins of AI as an academic discipline back to a 1956 workshop at Dartmouth College, at which top experts met to create this new academic field. That event proved pivotal for future development, not only because its participants used the term Artificial Intelligence for the first time, but also because the experts and academics could not agree on the correct form and definition of AI: one school of thought supported a top-down, rule- and logic-based approach to AI, whereas the second group envisioned a data- and statistics-based process, which we nowadays refer to as Machine Learning (ML).

The top-down, rule-based approach to AI was the research domain from which Expert Systems emerged: rule- and logic-based AI applications designed and implemented to support logistics, financial and tax planning, and credit approval processes. Nowadays, the vast majority of AI research and applications stem from the domain of ML, which has produced many applications in speech recognition, computer vision, bio-surveillance, robot control, and empirical research support. ML is often described as the convergence of two separate missions, each belonging to a specific domain: in computer science, the challenge of building machines that solve problems; in statistics, the challenge of determining what can be inferred from data and how reliable that inference is. As such, ML takes on a slightly different interpretation depending on the observer's background: to a disciple of statistics, ML is mostly a technique for mining large quantities of data to infer the structure of the data itself; to a computer scientist, ML is a computer program that converts "experience" – the training data – into "expertise," the algorithm's final output.

A training data set is the set of data used to build "the experience": in a linear regression model, for instance, it is the subset of data the researcher uses to fit the linear function – the "expertise" – that will then be used to predict or infer outputs for any new input. In AI, the learning process is supervised when the data structure is known and shared. The learning process is unsupervised when the data structure is unknown and the algorithm has to find patterns and structure in the data or a subset of the data. When the data structure is known and the training data set is limited in size, but a clear reward function can be built, the process is known as Reinforcement Learning.
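As a rough illustration of how a training set turns "experience" into "expertise", the sketch below fits a linear function on a training subset and then uses it to predict outputs for unseen inputs. It is a minimal, hypothetical example in Python; the variable names and synthetic figures are invented for illustration, not drawn from any real dataset.

```python
# Minimal supervised-learning sketch: training data ("experience") is used to
# fit a linear function ("expertise"), which then predicts unseen outputs.
import numpy as np

# Synthetic, invented data: advertising spend (input) vs. sales (output)
rng = np.random.default_rng(seed=0)
spend = rng.uniform(1, 100, size=200)
sales = 3.2 * spend + 15 + rng.normal(0, 10, size=200)

# Split into a training set ("experience") and a held-out test set
train_x, test_x = spend[:150], spend[150:]
train_y, test_y = sales[:150], sales[150:]

# Fit a degree-1 polynomial (a straight line) on the training data only
slope, intercept = np.polyfit(train_x, train_y, deg=1)

# The fitted function is the "expertise": it predicts outputs for new inputs
predicted = slope * test_x + intercept
print(f"fitted line: sales ~ {slope:.2f} * spend + {intercept:.2f}")
print(f"mean absolute error on unseen data: {np.mean(np.abs(predicted - test_y)):.2f}")
```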

In the domain of ML, of particular significance are Artificial Neural Networks (ANN), computational elements built to loosely resemble the biological neurons in the human brain. Deep Learning (DL) is the subset of ANN – featuring several hidden layers of non-linear processing neurons – that is behind the recent and rapid development of speech and image recognition and natural language processing, and that plays a crucial role in detecting unexpected obstacles for autonomous vehicles, drones, and robots. In other words, when the media refer to businesses using and developing AI, they most likely mean ANN- and DL-based models. Given that neural networks have been around since the 1940s, the question we still need to address is why they are trending now and developing so rapidly.
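To make the "hidden layers of non-linear processing neurons" concrete, here is a minimal, self-contained sketch of a tiny neural network in Python/NumPy. It is purely illustrative – the toy XOR task, layer size, and learning rate are arbitrary assumptions – and it is orders of magnitude smaller than the deep networks used in speech or image recognition.

```python
# A tiny neural network with one hidden layer of non-linear (ReLU) units,
# trained by gradient descent. Illustrative only; real DL stacks many layers.
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy problem: learn y = XOR(x1, x2), which a purely linear model cannot fit
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights: input -> hidden (2x8), hidden -> output (8x1)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: the hidden layer applies a non-linear activation (ReLU)
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error, propagated layer by layer
    err = out - y
    grad_out = err * out * (1 - out)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (h_pre > 0)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient-descent update of all weights and biases
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0] after training
```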

AI: Why Now?

According to the research of global management consulting firm McKinsey, the rapid development of AI in recent years is due to the convergence of three separate – and, we posit, interdependent – vectors: 

1. Advances in ANN and DL algorithms: ANNs were first proposed in the 1940s. The Hebbian Learning model was published at the end of the same decade. The first Perceptron with Hebbian Learning – considered the first ML neural network model – was published at the end of the 1950s. In 1965, Alexey Grigorevich Ivakhnenko was credited with developing the first DL model, while the emergence of Recurrent Neural Networks and Convolutional Neural Networks – the critical areas of DL theory and practice today – came two decades later. Although the algorithmic and modeling vector did not progress continuously after the 1940s, with decade-long lulls in real advancement, by the end of the past century there was enough research and enough models to attract the interest of application developers.

 

2. The explosion of data: Since the Internet opened to the public in 1991, Negroponte's broad hypothesis that bits would replace atoms has become increasingly self-evident. This digitalization has created a spike in the quantity of data produced, exchanged, broadcast, and stored daily. The term big data appeared to highlight the difference from traditional forms of databases: "Under the explosive increase of global data, the term of big data is mainly used to describe enormous datasets. Compared with traditional datasets, big data typically includes masses of unstructured data that need more real-time analysis. In addition, big data also brings about new opportunities for discovering new values, helps us gain an in-depth understanding of the hidden values, and incurs new challenges, e.g., how to organize and manage such datasets effectively".

The four primary sources of big data are also the domains where most advances in ML have emerged, which underlines how deeply big data and ML are intertwined. Those domains are:

    1. Enterprises: including, but not limited to, production, inventory, customer and consumer data, and financial and economic data.
    2. Internet of Things (IoT): both personal and industrial IoT generate large quantities of data through sensors, network transmissions, and IoT-based applications.
    3. Bio-medical sources: including the Human Genome Project and clinical data for medical and pharmaceutical R&D.
    4. Scientific research: including data from the Large Hadron Collider and the Sloan Digital Sky Survey.


3. The increase in computing capabilities, coupled with the cloudification of data centers: during the late 1990s and at the turn of the millennium, the gaming industry boomed, and – in response to the increasing sophistication of 3D games – processor manufacturers developed dedicated boards, called GPUs (Graphics Processing Units), to offload graphics rendering from the central processor of the computing device. GPUs continued to develop as graphics boards until 2006, when general-purpose GPUs emerged with dedicated software development kits that broadened their usage beyond graphics. GPUs have since become the standard for AI processing, as their capabilities made the development of DL applications feasible.

Moreover, the emergence of cloud computing has proven to be a catalyst for digital businesses of all sizes. Experts have pointed out that cloud computing offers a pay-as-you-go business model that requires no capital investment from the cloud's customers. Furthermore, it removes – at the customer level – the hurdle of forecasting capacity, which reduces complexity and makes application development more easily scalable.

In this context, we are compelled to draw attention to the interdependence of the three vectors: in 2019 alone, Google Scholar lists more than 30,000 publications on the topic of "DL". This evolution towards better models, and towards practical applications of those models, is possible in part because of the explosion of data – necessary for training and testing – and the availability of faster and cheaper computers. In turn, the explosion of data is partially due to the increasing computing power of appliances that – until recently – did not come equipped with a processing unit but now include computing capabilities of some sort. Finally, the recent evolution of processing power is also driven by the emerging cloudification of data centers, because cloud providers compete for better and faster hardware. On the other hand, the cloud means that researchers and app developers no longer need to operate their own data centers: from a developer's point of view, the cloud has turned the problematic, expensive, high-barrier purchase of a processing unit into a cost-effective, easy-to-access, pay-per-use shared service. The lowering of this barrier fuels the broader development and testing of new ML algorithms by making them accessible to a wider community of researchers and developers. Furthermore, it creates fertile ground for financial and corporate investors, who are keener to invest in ventures whose activities lead to potential new Intellectual Property than in start-ups requiring computing assets, whose market value dilutes too quickly.

Impact of AI on marketing

In the discipline of marketing, AI is neither hype nor a future scenario. It is very much an everyday tool.

Every day, marketers battle with and work around the AI-based algorithms of social media platforms like Facebook, Twitter, and Pinterest. For a long time, marketers have had to obsess over Search Engine Optimization – in effect, marketing to an algorithm. That is without even touching on the impact of analytics and big data on insight generation and market research.

 

But beyond these obvious tools, where is AI making the biggest impact on marketing?

 

Personalization is one of the first fields where artificial intelligence and marketing intersect. Predictive analytics helps companies understand their users' preferences and make recommendations based on that data: Netflix recommends TV shows, and Amazon recommends products. As a marketer, you should build a data set that allows you to guide users to a specific type of product or service – and make solving the user's problem easy for them.
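As a hypothetical illustration of how such recommendations can work, the sketch below implements a very small collaborative-filtering step in Python: it scores items for one user based on what similar users have liked. The user names, items, and ratings are invented; production recommenders such as Netflix's or Amazon's are far more sophisticated, but the underlying idea of similarity-based scoring is the same.

```python
# Tiny collaborative-filtering sketch: recommend items that similar users liked.
# All names and ratings below are invented for illustration.
import numpy as np

users = ["ana", "ben", "cleo"]
items = ["sneakers", "backpack", "water bottle", "yoga mat"]

# Rows = users, columns = items; 1 means the user bought/liked the item
ratings = np.array([
    [1, 1, 0, 0],   # ana
    [1, 1, 1, 0],   # ben
    [0, 0, 1, 1],   # cleo
], dtype=float)

def cosine(a, b):
    # Cosine similarity between two rating vectors
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, top_n=1):
    # Weight every other user's ratings by how similar they are to this user
    sims = np.array([cosine(ratings[user_idx], ratings[j]) for j in range(len(users))])
    sims[user_idx] = 0.0
    scores = sims @ ratings
    scores[ratings[user_idx] > 0] = -np.inf  # don't re-recommend owned items
    best = np.argsort(scores)[::-1][:top_n]
    return [items[i] for i in best]

print(recommend(0))  # ana is most similar to ben, so she gets 'water bottle'
```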

 

The second area is customer interaction. According to Business Insider, 85% of customer interactions will happen without the need for a human. This is particularly true of Gen Z consumers – who prefer not to engage with real people on the phone or in chat – and it is reflected in the rapid development of AI-based assistants like Alexa and Siri.

 

The third area is marketing streamlining. Marketing is often about making hunches about consumer expectations and about whether consumers will react when exposed to a specific call to action. This trial-and-error method is often cumbersome and time-consuming. Deep learning, a form of artificial intelligence, allows computers to recognize user behavior more accurately and to predict which groups are more likely to become future customers. Marketing programs can indicate which leads are most likely to convert, allowing marketers to target their efforts based on detailed demographics – without wasting time on less promising leads.
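A minimal sketch of what such lead scoring can look like in practice, assuming a logistic-regression model and a handful of invented behavioral features (page visits, email opens, recency). Real marketing platforms use richer data and models, but the principle of replacing hunches with predicted conversion probabilities is the same.

```python
# Lead-scoring sketch: a model trained on past leads predicts which new leads
# are most likely to convert. Feature names and figures are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past leads: [pages_visited, emails_opened, days_since_last_visit]
X_train = np.array([
    [12, 5, 1], [2, 0, 30], [8, 3, 2], [1, 1, 45],
    [15, 7, 0], [3, 0, 20], [9, 4, 3], [2, 1, 60],
])
y_train = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = converted to customer

model = LogisticRegression().fit(X_train, y_train)

# New leads get a conversion probability instead of a gut-feel hunch
X_new = np.array([[10, 4, 2], [1, 0, 50]])
for lead, p in zip(X_new, model.predict_proba(X_new)[:, 1]):
    print(f"lead {lead.tolist()}: {p:.0%} likely to convert")
```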
