AI and ML Odyssey - Documenting My Journey Through the World of Artificial Intelligence and Machine Learning

Chapter 1: Introduction to AI and ML

Since my university days, I have had a keen interest in Artificial Intelligence, which began during my engineering degree, where I took AI as a minor. Those early years were marked by the use of Prolog, a programming language that laid the groundwork for my understanding of AI principles.

However, my professional journey took a different path, and I have not had the opportunity to work directly in the field of AI since. With the recent global surge of interest and investment in AI, my curiosity has been reignited. Motivated by this renewed enthusiasm, I am now diving back into the world of AI. I intend to create a series of notes in a cheat-sheet format and document my journey, both to rekindle my own knowledge and to assist others venturing into this field.

As I balance this learning process with other commitments, I expect it to be a gradual yet rewarding journey, so let’s get started.

What is AI?

Artificial Intelligence (AI) is a branch of computer science dedicated to creating systems capable of performing tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, understanding natural language, and perceiving environments.

What is ML?

Machine Learning (ML) is a subset of artificial intelligence focused on building systems that learn from experience. It involves the development of algorithms that can analyze data, learn patterns, and make decisions with minimal human intervention. ML enables computers to recognize complex patterns and make intelligent decisions based on the data they are exposed to, improving their accuracy over time as they process more information.

How ML Works

ML works by training algorithms on data, which enables them to make predictions or decisions. The more data these algorithms are exposed to, the more they learn and the better they perform over time.
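As a toy illustration of "training on data", the sketch below fits the simplest possible model, a straight line y = w·x + b, to a handful of points by least squares, then uses it to predict an unseen value. The data and numbers here are invented purely for illustration.

```python
# Minimal illustration of "training" a model on data:
# fit y = w*x + b by closed-form least squares.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 8.1, 9.9]  # roughly y = 2x, with a little noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates for slope and intercept.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(round(w, 2), round(b, 2))  # slope close to 2, intercept close to 0
print(round(w * 6 + b, 1))       # prediction for the unseen input x = 6
```

With more (and better) data, these estimates converge toward the true relationship, which is exactly the "improves with more data" behaviour described above, just in its simplest form.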

Types of ML

    • Supervised Learning: Learning with labelled data.
    • Unsupervised Learning: Learning from unlabelled data.
    • Reinforcement Learning: Learning through a process of trial and error.
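The first two paradigms can be contrasted on the same toy data: with labels, we can classify a new point (supervised, here via nearest neighbour); without labels, we can only discover structure (unsupervised, here via a crude midpoint threshold). Reinforcement learning is omitted, as it needs an environment to interact with. All values are invented for illustration.

```python
# Supervised: labelled data -> learn a rule, then classify a new point.
labelled = [(1.0, "small"), (1.2, "small"), (8.0, "large"), (8.5, "large")]

def classify(x):
    # 1-nearest-neighbour: copy the label of the closest training point.
    return min(labelled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: no labels -> discover structure in the raw values
# (here, two groups split at the midpoint of the data's extremes).
unlabelled = [1.0, 1.2, 8.0, 8.5]
threshold = (min(unlabelled) + max(unlabelled)) / 2
groups = [0 if x < threshold else 1 for x in unlabelled]

print(classify(2.0))  # nearest labelled point is 1.2 -> "small"
print(groups)         # the two natural clusters: [0, 0, 1, 1]
```

Note that the unsupervised grouping recovers the same two clusters the labels describe, but it can only call them "group 0" and "group 1"; the names "small" and "large" exist only in the labelled data.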

What is Deep Learning?

Deep Learning is a subset of ML based on Artificial Neural Networks with representation learning. It can automatically discover the representations needed for feature detection or classification from raw data.

How Deep Learning Works

Deep Learning models consist of layers of interconnected nodes similar to the neurons in the human brain. Each layer performs specific functions, and the data is processed through these layers to produce an output.
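A minimal sketch of that layered processing, with hand-picked (not learned) weights chosen only for illustration: two inputs flow through a two-node hidden layer and a one-node output layer, each node computing a weighted sum of its inputs plus a bias, passed through a sigmoid activation.

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each node: weighted sum of all inputs from the previous layer,
    # plus a bias, passed through the activation function.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                      # input layer (2 values)
h = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])  # hidden layer (2 nodes)
y = layer(h, [[1.5, -1.5]], [0.0])                   # output layer (1 node)
print(y)  # a single value between 0 and 1
```

In a real network the weights and biases are learned from data (via backpropagation) rather than written by hand, and there are typically many such layers, which is where the "deep" in Deep Learning comes from.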

The Relationship between AI, ML, and Deep Learning

Machine Learning is a subset of Artificial Intelligence, and Deep Learning is in turn a subset of Machine Learning. The three fields are compared aspect by aspect below.

Definition

    • AI: AI refers to the broader field of creating machines or systems that can perform tasks that typically require human intelligence.
    • ML: ML is a subset of AI focused on the development of algorithms that enable machines to learn from data and make predictions or decisions.
    • Deep Learning: Deep Learning is a specialized subset of ML that uses neural networks with many layers (deep networks) to analyze various levels of abstract data features.

Core Objective

    • AI: The main goal of AI is to create machines that can simulate human intelligence, including reasoning, problem-solving, understanding natural language, and more.
    • ML: ML focuses on developing algorithms that can improve their performance on a specific task by learning patterns and rules from data.
    • Deep Learning: The core objective of Deep Learning is to interpret and process complex data structures through neural networks that mimic human brain functions.

Dependency on Data

    • AI: AI systems may or may not rely heavily on data; they can use predefined rules and logic to make decisions.
    • ML: ML relies heavily on data to train models; the quality and quantity of data play a crucial role in ML performance.
    • Deep Learning: Deep Learning requires large volumes of data (big data) to train deep neural networks effectively, often needing more data than traditional ML to achieve high accuracy.

Examples

    • AI: Chatbots, autonomous vehicles, recommendation systems, natural language processing, computer vision, expert systems, and more.
    • ML: Regression, classification, clustering, neural networks, decision trees, random forests, and deep learning, among others.
    • Deep Learning: Image and speech recognition, natural language processing, self-driving cars, and enhanced computer vision applications.

Human Involvement

    • AI: AI can operate autonomously but may also involve human interaction or supervision.
    • ML: ML typically involves human intervention in data preprocessing, model selection, and evaluation, but the model can make autonomous predictions.
    • Deep Learning: Once trained, Deep Learning models can operate independently, but they require significant human effort in designing network architectures and tuning parameters.

Learning Paradigms

    • AI: AI encompasses various techniques, including rule-based systems, symbolic reasoning, and statistical methods such as ML.
    • ML: ML primarily uses statistical and mathematical methods to learn patterns and make predictions.
    • Deep Learning: Deep Learning uses backpropagation and other advanced algorithms to train deep neural networks and extract features automatically, without explicit programming for feature extraction.

Scope of Application

    • AI: AI can be applied in a wide range of domains, from healthcare to gaming, finance, and beyond.
    • ML: ML is a critical component of AI and is widely used in areas like image recognition, natural language processing, fraud detection, and recommendation systems.
    • Deep Learning: Deep Learning excels at tasks involving unstructured data such as images and text, where the complexity of the data requires discovering intricate patterns that simpler ML models may not capture.

History

    • 1950: Alan Turing publishes "Computing Machinery and Intelligence," proposing the Turing Test.
    • 1951: Early AI research begins; the first checkers and chess programs appear around this time.
    • 1956: The term "Artificial Intelligence" is coined at the Dartmouth Conference.
    • 1959: John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.
    • 1966: Joseph Weizenbaum creates ELIZA, the first widely known chatbot.
    • 1997: IBM's Deep Blue defeats world chess champion Garry Kasparov.
    • 2005: Stanford University's autonomous vehicle, Stanley, wins the DARPA Grand Challenge.
    • 2006: Geoffrey Hinton and colleagues reignite interest in "deep learning" with their work on deep belief networks.
    • 2011: IBM's Watson wins Jeopardy! against top champions.
    • 2013: China's Tianhe-2 supercomputer is unveiled, a major increase in the processing power available for AI research.
    • 2014: DeepMind's AI agents play Atari 2600 games at superhuman levels; Google's DeepMind begins work on AlphaGo; Tesla's push into AI, particularly for Autopilot and Full Self-Driving (FSD), starts gaining attention.
    • Mid-2010s onward: Explainable AI (XAI) gains significant traction.
    • 2016: AlphaGo defeats world champion Lee Sedol at Go.
    • 2017: Google researchers introduce the Transformer architecture in "Attention Is All You Need"; work on diffusion models, first proposed at Stanford in 2015, continues.
    • 2018: Google AI's BERT model revolutionizes natural language processing.
    • 2020: OpenAI releases GPT-3, a significant leap in language model capabilities.
    • 2022: OpenAI releases ChatGPT, an advanced conversational AI model.
    • 2023: Google introduces Bard, an AI-powered conversational tool; xAI announces Grok, an AI assistant inspired by "The Hitchhiker's Guide to the Galaxy"; OpenAI releases GPT-4, the fourth iteration of its Generative Pre-trained Transformer series.

Why Has AI Gained Traction in Recent Years?

  • More Computational Power: Enhanced computing capabilities allow for processing large volumes of data essential for advanced AI.
  • Availability of More Data: The digital era's vast data generation is crucial for training sophisticated AI models.
  • Advanced Algorithms: Significant improvements, especially in neural networks, have led to more complex and accurate AI systems.
  • Broad Investment: Increased funding from both public and private sectors has fueled AI research and applications across industries.
  • Improvements in Cloud Computing: Accessible and affordable cloud services have simplified data storage and processing for AI development.
  • Open Source Software and Collaboration: A collaborative environment fostered by open source software accelerates AI advancements.
  • Increased Focus on AI in Academia: Growing academic interest and specialized programs contribute to a skilled AI workforce.
  • Advances in Hardware Technologies: Specialized hardware like GPUs and TPUs enhance AI computation efficiency and speed.
  • Greater Public Awareness and Acceptance: Integration of AI in daily life increases public interest and demand.
  • Government Policies and Initiatives: National strategies support AI research and aim to position countries as leaders in the field.
  • Interdisciplinary Applications: AI's use in diverse fields demonstrates its versatility and spurs demand for innovation.
  • Ethical and Responsible AI Movements: Focus on ethical AI development opens new research areas and ensures responsible application.


Chapter 2: Fundamentals of Machine Learning


---- TO BE CONTINUED ......