Artificial Intelligence

Artificial Intelligence is an interdisciplinary area of computer science that deals with the development of intelligent systems, which can autonomously react to their environment and perform tasks traditionally requiring human intelligence. AI draws upon methods from mathematics, engineering, computer science, linguistics, psychology, neuroscience, and other disciplines to build autonomous systems that can perceive their environment, reason, and learn from experience in order to make decisions. Common examples of AI experienced by regular users include virtual personal assistants such as Apple’s Siri and Amazon’s Alexa; facial recognition technology such as the face unlock on your mobile phone; self-driving cars; and machine learning algorithms used by companies like Google and Facebook to analyze data. The term ‘AI’ is also used to refer to intelligence exhibited by artificial entities (such as robotic systems) that resemble biological organisms.

According to the International Organization for Standardization (ISO), artificial intelligence (AI) refers to a machine or computer system’s ability to perform tasks that would typically require human intelligence (iso.org).

There are various kinds of AI that can be broadly classified into two categories: Narrow AI and General AI. Narrow AI, also known as Weak AI, refers to AI systems designed to perform specific tasks, such as voice recognition, recommendation systems or image recognition. These are the AI systems we encounter in our everyday life: the digital assistants in our phones, the recommendation algorithms of Netflix, or the image recognition software in self-driving cars. These systems operate under a limited set of constraints and are focused solely on their pre-programmed tasks. They do not possess understanding or consciousness.

On the other hand, General AI, also known as Strong AI, refers to systems that possess the ability to understand, learn, adapt, and implement knowledge across a broad array of tasks, replicating human intelligence. This type of AI remains largely theoretical and represents the Holy Grail for many AI researchers. A General AI system would be capable of independent thought, understanding, and decision-making, irrespective of the task at hand. 

It’s important to understand that these categories are not clear-cut divisions but more of a spectrum, with many AI technologies falling somewhere in between. Also worth noting is that most of the AI in use today is Narrow AI. Despite the rapid advances in AI technology, we are still far from creating a true General AI. However, it is important to note that even Narrow AI has the potential to disrupt numerous industries and significantly change our lives.

Many experts view artificial intelligence as the third significant revolution in information technology, following the emergence of computers and the Internet. The first revolution, the advent of computers, provided us with the ability to process vast amounts of data in record time, transforming everything from scientific research to business processes. The second, the rise of the Internet, connected the world in an unprecedented global network, giving rise to a new era of communication, knowledge sharing, and digital commerce. 

Now, we stand at the cusp of the third revolution brought forth by AI, reshaping our world in ways we could hardly have imagined a few decades ago. The rise of AI has permeated every aspect of our lives, from communication and entertainment, to work and learning. Its impact is felt across industries and disciplines, altering the landscape of jobs and the nature of work itself. It is redefining what it means to be a professional in the 21st century. It is paramount for everyone, especially professionals, to understand, learn, and harness the power of AI in their work and personal life. Ignoring this wave of change could be tantamount to turning a blind eye to the advent of the computer or the Internet, both of which revolutionized our world in profound ways.

What is Machine Learning?

Machine Learning is a subfield of artificial intelligence that focuses on the development of algorithms and statistical models that enable computers to perform tasks without being explicitly programmed to do so. The primary goal of machine learning is to allow computers to learn from experience, much like humans do.

According to the International Organization for Standardization (ISO), machine learning (ML) is defined as “the process of optimising model parameters through computational techniques, such that the model’s behaviour reflects the data or experience” (iso.org).

Machine learning algorithms are designed to improve over time as they are exposed to more data. They accomplish this by identifying patterns within the data and adjusting their outputs or actions based on these patterns. For instance, a machine learning algorithm used for email filtering learns to distinguish between spam and non-spam messages by analyzing the features of emails marked by users as spam in the past. The more data it processes, the more accurately it can perform its task.
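To make the spam-filter example concrete, here is a minimal sketch in Python using scikit-learn. The four messages and their labels are invented for illustration; a real filter would be trained on many thousands of user-labelled emails.

```python
# A minimal sketch of a learned spam filter (example messages are made up).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Emails the user has already marked as spam (1) or not spam (0).
emails = [
    "Win a free prize now", "Lowest price guaranteed, click here",
    "Meeting moved to 3pm", "Here are the notes from today's class",
]
labels = [1, 1, 0, 0]

# Turn raw text into word-count features, then fit a simple model.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB()
model.fit(features, labels)

# The trained model can now score a message it has never seen.
new_email = vectorizer.transform(["Claim your free prize today"])
print(model.predict(new_email))  # -> [1], i.e. likely spam
```

The more labelled examples such a model sees, the better its word-pattern statistics become, which is exactly the “improves with more data” behavior described above.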

ML is behind many of the technological advancements we see today. It powers the recommendation systems of services like Netflix and Amazon, drives the predictive text and voice recognition capabilities of our smartphones, and is key to the development of autonomous vehicles. In short, ML is a crucial component of the AI revolution, enabling intelligent systems to adapt and improve over time.

How does ML differ from AI?

While ML and AI are often used interchangeably, they do not refer to the same thing. To better understand this, one might picture AI as a large circle that encompasses various technologies and approaches designed to make machines intelligent. Within that circle, there is a smaller one representing ML.

AI is the broader concept of machines being able to carry out tasks in a way that we would consider ‘smart’ or ‘intelligent.’ It is about designing and implementing intelligent behavior, including the ability to learn and improve. AI includes a range of techniques and approaches, not all of which involve learning from data. For instance, rule-based systems, which operate based on pre-set if-then rules, are a form of AI that doesn’t involve learning.

On the other hand, ML is a subset of AI that uses statistical methods to enable machines to improve with experience. It revolves around the idea that machines can be given access to data and learn for themselves. It is, in a way, the leading edge of AI, as it is through learning from data that many modern AI achievements have been made possible.

In other words, all ML is AI, but not all AI involves machine learning. ML is just one of many tools and approaches used in AI research. AI that doesn’t involve learning from data may be based on predefined rules or other forms of human input.

Another term used in this context is deep learning. Deep learning is a subfield of machine learning that focuses on training artificial neural networks to learn and make predictions or decisions without being explicitly programmed. Neural networks are inspired by the structure and function of the human brain, specifically its interconnected network of neurons. Deep learning algorithms stack many layers of artificial neurons (hence “deep”) to extract high-level features from raw data and make sense of complex patterns. Deep learning models can handle large-scale and unstructured data, such as images, audio, text, and video, with exceptional accuracy.


What Are Neural Networks?

A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions.
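As a rough illustration of that “weighing options” idea, the sketch below implements a single artificial neuron in Python with NumPy: it multiplies its inputs by weights, sums them with a bias, and passes the result through an activation function. The weights here are arbitrary placeholders; in a real network, training would set them.

```python
import numpy as np

def sigmoid(x):
    # Squash any real number into the range (0, 1).
    return 1 / (1 + np.exp(-x))

# One artificial neuron: a weighted sum of inputs, plus a bias,
# passed through a non-linear activation.
inputs = np.array([0.5, 0.8, 0.2])    # three input signals
weights = np.array([0.9, -0.4, 0.3])  # placeholder weights; learning would adjust these
bias = 0.1

output = sigmoid(np.dot(inputs, weights) + bias)
print(output)  # a value between 0 and 1, like a neuron's "firing strength"
```

A full neural network is simply many such neurons arranged in layers, with each layer’s outputs feeding the next layer’s inputs.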


What are Convolutional Neural Networks (CNNs)?

Convolutional neural networks, or CNNs, are distinguished from other neural networks by their superior performance with image, speech, or audio signal inputs. But how exactly do they work? In the lightboard video below, IBM’s Martin Keen explains how this deep learning algorithm enables machines to view the world as humans do.

[Embedded video: Martin Keen (IBM) on convolutional neural networks]
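For readers curious about what such a network looks like in code, below is a minimal, untrained CNN in PyTorch. The layer sizes and the 28×28 grayscale input are arbitrary illustrative choices, not the architecture from the video.

```python
import torch
import torch.nn as nn

# A minimal CNN for 28x28 grayscale images (e.g. handwritten digits).
class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # learn 8 small filters
        self.pool = nn.MaxPool2d(2)                            # downsample 28x28 -> 14x14
        self.fc = nn.Linear(8 * 14 * 14, 10)                   # map features to 10 classes

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))  # detect local patterns, then shrink
        x = x.flatten(1)                          # flatten feature maps into one vector
        return self.fc(x)                         # class scores

model = TinyCNN()
scores = model(torch.randn(1, 1, 28, 28))  # one random stand-in "image"
print(scores.shape)  # torch.Size([1, 10])
```

The convolutional layer slides its small filters across the image to detect local patterns, which is what gives CNNs their edge on visual inputs.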

How Do AI And ML Work?

The aim of AI and ML is to build machines and software that can mimic or even surpass human capabilities in a variety of tasks. But how do they work in reality? At the heart of it all lies the concept of ‘learning from data.’

Consider how a child learns to identify a car. They don’t memorize every possible brand, model, color, or shape of a car. Instead, they develop an intuitive understanding of ‘car-ness’ by seeing many examples over time. Similarly, AI and ML systems learn patterns from examples in the data they are given.

AI systems can be rule-based, meaning they follow explicitly programmed instructions, like a recipe, or they can be learning-based, where they adapt their behavior based on data. Learning-based systems are where ML comes into play.

ML is an AI technique that allows computer systems to learn from data and improve their performance without being explicitly programmed. It operates under the principle that algorithms can be trained to learn patterns from raw data by minimizing a so-called ‘error’ or ‘loss’ function, which measures how well the algorithm is performing. The algorithm iteratively adjusts its internal parameters to minimize this error.
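That training loop can be shown in its simplest possible form. The sketch below runs gradient descent on a one-parameter model y = w · x, fitting a few made-up data points by repeatedly nudging w in the direction that shrinks the squared error.

```python
# Gradient descent on a one-parameter model y = w * x, with made-up data.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # the "right answer" is w = 2

w = 0.0    # start from a guess
lr = 0.01  # learning rate: how big each adjustment is

for step in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # adjust the parameter to reduce the loss

print(round(w, 3))  # converges close to 2.0
```

Real models have millions or billions of parameters instead of one, but the principle is identical: measure the error, then adjust every parameter slightly to reduce it.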

In practice, this might look like an ML algorithm learning to identify cars in pictures. We’d feed it a lot of pictures, some of which have cars and some of which do not, and for each picture, we tell the algorithm whether a car is present. The algorithm will then try to find patterns in the pictures that correlate with the presence of a car, adjusting its internal parameters based on whether its current prediction is correct or not. After being trained on enough examples, it can then accurately identify cars in new pictures it has never seen before.

There are different types of machine learning, including supervised learning (where the algorithm learns from labeled data, like our car example), unsupervised learning (where the algorithm finds patterns in data without labels), and reinforcement learning (where the algorithm learns to perform actions based on rewards and penalties).
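To make the contrast between the first two types concrete, the sketch below uses scikit-learn on a handful of invented 2-D points: the supervised classifier is given the answers during training, while the unsupervised algorithm must discover the grouping on its own.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

points = [[1, 1], [1, 2], [8, 8], [9, 8]]  # made-up 2-D data

# Supervised: we provide the answers (labels) during training.
labels = [0, 0, 1, 1]
clf = KNeighborsClassifier(n_neighbors=1).fit(points, labels)
print(clf.predict([[2, 1]]))  # -> [0]: nearest labeled examples are class 0

# Unsupervised: no labels; the algorithm finds 2 clusters on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(km.labels_)  # e.g. [0 0 1 1]; it recovers the grouping, not its meaning
```

Reinforcement learning follows yet another pattern, with no dataset at all up front: an agent acts, receives rewards or penalties, and gradually learns which actions pay off.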

AI and ML are not only about identifying cars in images, of course. These methods power a wide range of applications, from voice recognition in our smart assistants, to recommendation systems on our favorite streaming platforms, to medical diagnostics and far beyond. As we proceed in this book, we will discuss some of these applications in more detail, along with the limitations of AI and ML and their profound impacts on our lives and work.

What is Generative AI?

As we embark on this journey into the world of Generative AI, let’s first establish a clear understanding of what it is. Don’t be put off by the bit of technical explanation below; it isn’t meant to turn you into an AI developer, only to describe how GenAI works.

At its core, Generative AI is a subset of artificial intelligence technologies that are capable of producing something new, original, and in some instances, indistinguishable from creations by humans. This could be anything from a piece of music, a poem, or a piece of art to a block of text or even an entirely new design for a machine part.

While traditional AI (also called discriminative AI) models primarily focus on understanding and interpreting data, Generative AI goes a step further by creating new data instances — an imaginative process akin to human thinking. It achieves this through complex machine learning techniques such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformer models.

GANs, for instance, consist of two distinct neural networks: a generator and a discriminator. As briefly described in the first chapter, neural networks are inspired by the structure and function of the human brain, specifically its interconnected web of neurons.

The generator neural network in GANs creates new data instances, while the discriminator neural network evaluates them for authenticity, i.e., whether they resemble an instance from the actual data distribution. Through an iterative process, the generator becomes progressively better at creating believable data, and the discriminator becomes better at determining its authenticity.
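The sketch below compresses that adversarial back-and-forth into a toy PyTorch training loop. Everything in it is an illustrative assumption: the “real” data is just numbers drawn from a bell curve centered at 4, and both networks are tiny.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from N(4, 1).
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # sample -> "real?" score

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(32, 1) + 4.0   # real data: mean 4
    fake = G(torch.randn(32, 8))      # generator's current attempts

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))  # "make D say 'real'"
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward ~4 as G improves
```

Real GANs for images follow exactly this two-step loop, just with far larger convolutional networks and image data in place of single numbers.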

On the other hand, VAEs are a type of autoencoder, a class of neural networks trained to reproduce their input data. However, VAEs introduce a probabilistic spin to the process, generating a distribution of values instead of a fixed value for each encoded latent attribute, which leads to the creation of new, original content.
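That probabilistic step is easiest to see in isolation. In the sketch below (with invented tensor sizes and no training loop), the encoder outputs a mean and a log-variance per latent attribute, and the decoder is fed a random sample drawn from that distribution rather than a fixed code.

```python
import torch
import torch.nn as nn

latent_dim = 4
encoder = nn.Linear(10, 2 * latent_dim)  # outputs a mean and log-variance per latent dim
decoder = nn.Linear(latent_dim, 10)

x = torch.randn(1, 10)                    # a stand-in input
mu, log_var = encoder(x).chunk(2, dim=1)  # distribution parameters, not a fixed code

# Sample z ~ N(mu, sigma^2) in a differentiable way (the "reparameterization trick").
z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

reconstruction = decoder(z)  # decoding a *sample* is what yields varied, novel outputs
```

Because every encoding is a distribution, sampling different points from it and decoding them produces new content that resembles, but does not copy, the training data.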

Finally, transformer models, such as the famous GPT-3 developed by OpenAI, use large-scale machine learning and the power of attention mechanisms to generate human-like text, making them especially popular for natural language processing tasks.
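GPT-3 itself is accessible only through OpenAI’s service, but the same idea can be tried locally with an open model via the Hugging Face transformers library, as sketched below (GPT-2 stands in here as a small, freely available relative).

```python
from transformers import pipeline

# Load a small open text-generation model (downloads weights on first run).
generator = pipeline("text-generation", model="gpt2")

result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])  # the prompt continued with model-written text
```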

Techniques like these allow a GenAI system to learn from existing content (a phase known as training) and build what is called a ‘foundation model’. When a user then provides a ‘prompt’, the model draws on the patterns it has learned to produce new content (a phase known as generation) that is statistically consistent with the data it was trained on.

By leveraging these techniques, Generative AI opens up vast new possibilities across industries, catalyzing innovation, accelerating workflows, and redefining the nature of work and creativity. But how do they translate into real-world applications? That’s what we’ll explore in the coming sections. To learn more about Generative AI, visit my article here.


7 Types of AI

Researchers have classified AI into seven categories; you may be disappointed to learn that we’ve realized only three of them so far! In this video, Master Inventor Martin Keen lays them out, from the narrow AI we know and use today to the other extreme, super AI, which may someday possess greater emotional and intellectual intelligence than humans.

[Embedded video: Martin Keen (IBM) on the seven types of AI]

Applications of AI

Now let’s explore some wide-ranging applications of AI that touch various sectors and facets of life. Remember, these applications are made possible due to the incredible versatility of AI, enabling it to handle tasks as simple as sorting out your email inbox to more complex tasks like helping to detect diseases.

How AI works in everyday life

Find out how machines learn to help you with everyday interactions with technology products.

[Embedded video: how AI works in everyday life]

Impact of AI

As we reach the close of this chapter, it’s essential to recognize the profound impact artificial intelligence is having on the world around us. AI is not simply an emerging technology; it’s a revolutionary shift that’s reshaping industries, altering our everyday lives, and challenging our understanding of what is possible.

The proliferation of AI is changing everything from how we communicate to how we work, shop, travel, and entertain ourselves. It is creating efficiencies, opening new avenues for creativity and innovation, and propelling breakthroughs across sectors, from healthcare to finance, education to energy, and beyond. Indeed, recent research projects that AI could unlock between $2.6 trillion and $4.4 trillion in productivity annually, signaling the enormous economic potential of this technology.

However, it’s also important to dispel common misconceptions about AI. It’s not a panacea that will cure all problems, nor is it an omnipotent entity that will render humans obsolete. Instead, AI is a tool — a powerful and transformative one — whose effectiveness ultimately relies on the quality of data it’s trained on, the algorithms utilized, and the human oversight guiding its use.

With AI’s pervasive influence, companies across countless industries are experimenting with its applications, realizing its potential to optimize operations, drive innovation, and create competitive advantage. Yet the rise of AI also signifies a shift in the demand for skills and talent: the transformation extends beyond automating repetitive tasks to augmenting human abilities, fostering creativity, and driving strategic decision-making, reshaping job roles and the nature of work itself.

This transformation underscores why understanding AI, its potential and its limitations is crucial for professionals and businesses alike. The AI shift isn’t a distant future—it’s happening now. The next chapters will delve deeper into the realities of this AI era, its impact on knowledge work, and how to navigate this complex landscape successfully. The journey through the world of AI is just beginning, and it’s one that holds immense promise and potential challenges. It’s time to embrace the shift and begin our exploration.


Learn More

Learn more about AI in this foundation course from IBM.

https://www.coursera.org/learn/introduction-to-ai

What is a Neural Network? | IBM

Neural networks allow programs to recognize patterns and solve common problems in artificial intelligence.

https://www.ibm.com/topics/neural-networks

PwC’s Global Artificial Intelligence Study

Exploiting the AI Revolution. What’s the real value of AI for your business and how can you capitalise?

https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html