Down the Rabbit Hole: Artificial Intelligence

I found myself in one of my many rabbit holes this holiday: Artificial Intelligence (AI). You've probably heard of it, and social media is flooded with AI-generated images. So what is AI all about? Let's start with a brief history.

The idea of creating intelligent machines dates back to ancient civilizations, but the field of artificial intelligence (AI) as we know it today began to take shape in the mid-20th century.

One of the earliest milestones in the development of AI was the creation of the first computer program that could play chess in the 1950s. This achievement demonstrated the potential for computers to perform tasks that required logical reasoning and decision-making.

1990s pressure-sensory chess computer with LCD screen

In the 1960s, work on artificial neural networks laid the groundwork for more advanced machine learning algorithms, a line of research later accelerated by the backpropagation algorithm, which was popularized in the 1980s. These algorithms allowed computers to learn from data without explicit programming, and they form the basis for many of the machine learning techniques widely used today.

In the 1980s and 1990s, AI research experienced a resurgence of interest, driven partly by the availability of large amounts of data and the increasing power of computers. This period saw the development of many AI techniques still in use today, such as decision tree algorithms, support vector machines, and natural language processing algorithms.

In recent years, AI has continued to evolve and expand, driven by the increasing availability of data and the development of more advanced machine learning algorithms. Today, AI is being applied to many problems and significantly impacts many aspects of our lives.

What exactly is Artificial Intelligence?

Artificial intelligence (AI) is a field of computer science that aims to create intelligent machines that can think and act like humans. There are many types of AI, ranging from simple rule-based systems to more advanced machine learning algorithms that can learn and adapt over time.

AI has the potential to transform many aspects of our lives, from how we work and communicate to how we learn and interact with the world around us. It can automate tasks, make decisions, and provide insights and recommendations based on data.

One specific type of AI is ChatGPT, a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. ChatGPT is designed to generate human-like text and can be used for tasks such as language translation, summarization, and conversation.

Overall, AI has the potential to shape the future in many ways, including improving efficiency and productivity, enabling new forms of communication and interaction, and transforming industries and societies. However, it is crucial to consider AI's ethical and social implications and ensure that it is developed and used responsibly.

What are the different types of AI?

There are many different types of AI, and the categorization of AI can vary depending on the criteria used. Here are a few common ways that AI can be classified:

  1. Rule-based systems: These are simple AI systems that follow predetermined rules to solve a problem or make a decision.
  2. Decision tree systems: These AI systems use a tree-like model to make decisions based on certain conditions (see the sketch after this list).
  3. Artificial neural networks: These are AI systems inspired by the human brain's structure and function. They can be trained to recognize patterns in data and make decisions based on them.
  4. Deep learning: This type of machine learning involves training artificial neural networks with many layers on large datasets. Deep learning algorithms can learn and adapt over time, improving their performance as they process more data.
  5. Natural language processing: This is a subfield of AI that focuses on enabling machines to understand and generate human-like language.
  6. Expert systems: These AI systems are designed to solve problems in a specific domain and can provide recommendations or make decisions based on their knowledge and expertise.
  7. General artificial intelligence: This is a hypothetical type of AI that can perform any intellectual task that a human can across a wide range of domains.

There are many other ways to classify AI, and the field is constantly evolving as new techniques and technologies are developed.
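To make the difference between hand-written rules and learned models concrete, here is a minimal sketch of the decision-tree idea from item 2. It assumes scikit-learn and uses its bundled iris dataset, both chosen purely for illustration:

```python
# A learned decision tree: instead of hand-writing rules, we let the
# algorithm derive if/else splits from labeled examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)
print(f"test accuracy: {tree.score(X_test, y_test):.2f}")
```

The interesting part is that nobody writes the if/else conditions; the algorithm derives them from the labeled examples.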

Neural networks

These are some of the scary kinds of AI. It's Skynet IRL. The Cyberdyne Systems program, which NORAD hoped would remove human error from its nuclear weapons system, had been learning about its human makers at an exponential rate. When military officials panicked and tried to turn it off, Skynet fought back by launching ICBMs at Russia. The Russians fired back, eradicating the U.S. and clearing the way for the age of machines.

At 2:14 a.m. Eastern time on August 29th, 1997, Skynet became self-aware.

I know this not because it happened but because I've heard Arnold Schwarzenegger and Linda Hamilton game out the scenario in repeated viewings of James Cameron's 1991 classic Terminator 2: Judgment Day.

So how do these networks work, and how do they get trained? Artificial neural networks (ANNs) are AI systems inspired by the human brain's structure and function. They consist of many interconnected processing nodes, called artificial neurons or simply "neurons," that are organized into layers.

Each neuron receives input from other neurons, processes that input using a simple mathematical function, and sends the output to other neurons in the next layer. The connections between neurons, called weights, can be adjusted during the learning process to optimize the network's performance.
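As a rough sketch of what a single neuron computes (plain NumPy, with made-up numbers; a real network stacks thousands of these):

```python
import numpy as np

# One artificial neuron: a weighted sum of its inputs plus a bias,
# squashed through a nonlinear activation function (here, a sigmoid).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, -1.2, 3.0])    # outputs of neurons in the previous layer
weights = np.array([0.8, 0.1, -0.4])   # connection strengths, adjusted during training
bias = 0.2

output = sigmoid(np.dot(weights, inputs) + bias)
print(output)   # this value is passed on to neurons in the next layer
```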

There are many different types of neural networks, including feedforward networks, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks. Each type of neural network is well-suited to different types of tasks and can be trained using different techniques.

To train a neural network, you need a dataset of examples from which the network can learn. The training process involves feeding the input data through the network and adjusting the weights of the connections between neurons to minimize the error between the predicted output and the actual output. This process is typically done using an optimization algorithm, such as gradient descent, that iteratively adjusts the weights to minimize the error.
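Here is that loop in miniature, assuming the simplest possible "network" (a single weight, no bias) so the gradient can be written out by hand:

```python
import numpy as np

# A toy training loop: a single-weight "network" learning y = 2x by
# gradient descent on the mean-squared error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)   # input data
y = 2.0 * x                        # the actual outputs we want to match

w = 0.0       # the one weight we will adjust
lr = 0.1      # learning rate: how big each adjustment step is

for epoch in range(100):
    y_pred = w * x                   # feed the inputs through the network
    error = y_pred - y
    grad = 2.0 * np.mean(error * x)  # gradient of the error with respect to w
    w -= lr * grad                   # step the weight downhill

print(w)  # converges toward 2.0
```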

Once the network is trained, it can make predictions or decisions based on new input data. The performance of a neural network can be improved by adding more layers or neurons, increasing the size of the training dataset, or using more advanced optimization algorithms.

How about self-teaching networks?

Self-teaching networks, also known as unsupervised learning networks, are a type of artificial neural network that can learn from data without the need for direct supervision or labeled examples. Instead of being provided with labeled examples to learn from, self-teaching networks are given a large dataset and must learn to independently identify patterns and relationships in the data.

Figure: a self-teaching scheme in which a network T is trained in a self-supervised fashion (e.g., on monocular sequences [t − 1, t, t + 1]) and a new instance S of the same network is trained on the output d_T of T.

There are several types of unsupervised learning algorithms that can be used to train self-teaching networks, including clustering algorithms, dimensionality reduction algorithms, and generative models.

One common type of self-teaching network is the autoencoder, which is a neural network that is trained to reconstruct its input data. An autoencoder consists of two parts: an encoder that maps the input data to a lower-dimensional representation and a decoder that maps the lower-dimensional representation back to the original input data. During training, the network adjusts the weights of the connections between neurons to minimize the difference between the original input and the reconstructed output.
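A minimal autoencoder sketch, assuming PyTorch and an arbitrary 784-dimensional input (the sizes are made up; only the encoder/decoder shape matters):

```python
import torch
from torch import nn

# A minimal autoencoder: the encoder squeezes each 784-dimensional input
# down to 32 numbers; the decoder tries to rebuild the original from them.
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)  # a batch of made-up inputs, just for illustration
for step in range(100):
    reconstruction = model(x)
    loss = loss_fn(reconstruction, x)  # reconstruction error vs. the original input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```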

Self-teaching networks can be used for tasks such as anomaly detection, data compression, and feature learning. For prediction tasks, however, they are typically less accurate than supervised learning algorithms, which learn from labeled examples.

AlphaGo was self-learning. Is it now more competent than a human?

AlphaGo was a groundbreaking AI system developed by DeepMind, a subsidiary of Alphabet, that defeated human champions at the board game Go. Go is a complex strategy game that is much more difficult for computers to play than chess because it has a vastly larger number of possible moves and board positions, which makes brute-force search infeasible.

AlphaGo beat human champions using a combination of supervised learning and reinforcement learning, paired with Monte Carlo tree search during play. It was first trained on a dataset of millions of expert Go moves and then used reinforcement learning, playing against itself, to learn from its own experience and improve over time.
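AlphaGo itself is far beyond a blog snippet, but the reinforcement-learning flavor can be shown in miniature. The sketch below is tabular Q-learning on a made-up five-cell corridor; it has nothing to do with Go and only illustrates "learning from your own experience":

```python
import numpy as np

# Tabular Q-learning on a made-up 5-cell corridor: reaching the rightmost
# cell pays a reward of 1, and the agent improves purely from experience.
# (AlphaGo combined deep networks with tree search; this is just the flavor.)
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2             # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))    # the agent's value estimates
alpha, gamma = 0.5, 0.9                # learning rate and discount factor

for episode in range(300):
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(n_actions))           # explore by acting randomly
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # nudge Q toward the reward plus the discounted best future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q[:-1].argmax(axis=1))  # learned policy: 1 ("go right") in every state
```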

AlphaGo's success was a significant milestone in AI and demonstrated the impressive capabilities of machine learning algorithms to learn and adapt to new environments. However, it is essential to note that AlphaGo is a specialized AI system that can only play the board game Go. It is not a general AI system capable of understanding and learning about a wide range of topics like humans.

Do we need to be afraid of AI?

AI can provide extraordinary benefits, but like all technology, it can have negative impacts unless built and used responsibly. How can AI benefit society without reinforcing bias or unfairness? How can we build computer systems that invent new ideas but also reliably behave in ways we want?

There are valid concerns about AI's potential impacts and risks, including issues related to job displacement, privacy, and the potential for biased or unfair outcomes. However, it is essential to recognize that AI is a tool that can be used for both positive and negative purposes, and the way it is developed and used will determine its ultimate impact.

There are several ways that AI can be developed and used responsibly to maximize its benefits and minimize its potential risks:

  1. Ensuring transparency: It is crucial to ensure that AI systems are transparent and explainable so that it is clear how they make decisions and why. This can help to reduce the risk of biased or unfair outcomes and build trust in the technology.
  2. Promoting diversity and inclusion: AI systems should be designed and trained on diverse and representative datasets to ensure that they do not perpetuate biases or stereotypes. It is also essential to have a diverse team of developers and decision-makers involved in designing and deploying AI systems.
  3. Establishing ethical guidelines: It is essential to establish clear ethical guidelines for the development and use of AI, considering issues such as privacy, autonomy, and the potential impacts on society.
  4. Ensuring accountability: There should be mechanisms in place to hold AI developers and users accountable for the impacts of their systems, including processes for addressing any harmful consequences that may arise.
  5. Fostering education and public understanding: It is essential to educate the public about AI and its potential impacts and to involve people in the decision-making process around the development and deployment of AI systems.

By taking these steps, we can ensure that AI is developed and used to benefit society and minimize the risks of biased or unfair outcomes.