Generative AI learning path by Google

Introduction to Generative Artificial Intelligence

Introduction

  • AI (Artificial Intelligence) is the theory and development of computer systems able to perform tasks that usually require human intelligence.
  • ML (Machine Learning) allows computers to learn without explicit programming.
  • DL (Deep Learning) uses Artificial Neural Networks (ANNs), allowing them to process more complex patterns than traditional machine learning.
  • Generative AI and LLMs are a subset of Deep Learning.

Deep Learning Methods

  • There are two types of DL methods (contrasted in the sketch after this list):
    • Discriminative:
      • Used to classify or predict.
      • Typically trained on a dataset of labeled data.
      • Learns the relationship between the features of the data points and the labels.
    • Generative:
      • Generates new data that is similar to the data it was trained on.
      • Understands the distribution of the data and how likely a given example is.
      • Predicts the next word in a sequence.
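A minimal numpy sketch of the contrast (the data is made up): the discriminative part learns the relationship between features and labels, while the generative part models the data distribution itself and samples new points from it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: two 1-D clusters (label 0 around 0.0, label 1 around 5.0).
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(5.0, 1.0, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Discriminative: learn the relationship between features and labels
# (here, a nearest-centroid rule) and use it to classify a new point.
centroids = np.array([x[y == 0].mean(), x[y == 1].mean()])
new_point = 4.2
predicted_label = np.abs(centroids - new_point).argmin()
print(f"discriminative: {new_point} -> class {predicted_label}")

# Generative: model the distribution of the data (here, one Gaussian
# fitted to class 1) and sample brand-new, similar points from it.
mu, sigma = x[y == 1].mean(), x[y == 1].std()
print(f"generative: new samples similar to class 1: {rng.normal(mu, sigma, 3)}")
```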

What is Generative AI?

  • Generative AI is a type of Artificial Intelligence that creates new content based on what it has learned from existing content.
  • Learning from existing content is called training, and it results in a statistical model.
  • When given a prompt, GenAI uses this statistical model to predict an expected response and generate new content.
  • Generative language models learn patterns in language from training data. Then, given some text, they predict what comes next (a toy version is sketched below).
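As a toy illustration of that idea, the sketch below (plain Python, made-up corpus) counts which word follows which in the training text and then predicts the most likely next word:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Predict the word most frequently seen after `word` in the training data."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (appears twice after 'the')
```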

Hallucinations

  • Hallucinations are words or phrases generated by the model that are often nonsensical or grammatically incorrect.
  • They are typically caused by one or more of these challenges:
    • The model is not trained on enough data.
    • The model is trained on noisy or dirty data.
    • The model is not given enough context.
    • The model is not given enough constraints.

Generative AI model types

  1. Text to text
    1. Generation
    2. Classification
    3. Summarization
    4. Translation
    5. Extraction
    6. Clustering
  2. Text to image
    1. Image generation
    2. Image editing/expanding
  3. Text to video/3D
    1. Video generation
    2. Video editing
    3. Game assets
  4. Text to task
    1. Text-to-task models are trained to perform a specific task or action based on text input. The task can be one of a wide range of activities, such as answering a question, performing a search, making a prediction, or taking some action. For example, a text-to-task model could be trained to navigate a web UI or change a document through a GUI. A toy sketch follows this list.
    2. Types:
      1. Software agents
      2. Virtual assistants
      3. Automation
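A hypothetical sketch of the text-to-task idea: in a real system an LLM would map the instruction to an action, but here a hand-written keyword matcher stands in for the model, and the action names and handlers are invented for illustration.

```python
# Hypothetical text-to-task dispatcher; a real system would use an LLM to
# choose the action, this stub just keyword-matches the instruction.
def search_web(query):
    return f"[pretend search results for '{query}']"

def answer_question(question):
    return f"[pretend answer to '{question}']"

ACTIONS = {
    "search": search_web,
    "answer": answer_question,
}

def run_task(instruction: str) -> str:
    for keyword, action in ACTIONS.items():
        if keyword in instruction.lower():
            return action(instruction)
    return "no matching action"

print(run_task("Search for generative AI courses"))
```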

Introduction to LLMs

  • Large, general-purpose language models can be pre-trained and fine-tuned for specific purposes.

Attributes of LLMs:

  • Large
    • Large training dataset
    • A large number of parameters
  • General purpose
    • The commonality of human languages
    • Resource restrictions (only a few organizations can afford to train such models)
  • Pre-trained and fine-tuned

Benefits

  1. A single model can be used for different tasks.
  2. The fine-tuning process requires minimal field data.
  3. Performance continuously improves with more data and parameters.

LLM Types

  1. Generic (or raw) language models | Predict the next word (technically, the next token) based on the language in the training data.
  2. Instruction tuned | Trained to predict a response to the instructions given in the input.
  3. Dialog tuned | Trained to have a dialog by predicting the next response.
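The difference between the three types is easiest to see in the shape of the input each one expects; the prompts below are made up for illustration:

```python
# Generic (raw): the model simply continues the text.
generic_prompt = "The cat sat on the"  # a likely completion: "mat"

# Instruction tuned: the input is an instruction to follow.
instruction_prompt = "Summarize the following text: ..."

# Dialog tuned: the input is framed as a turn in a conversation.
dialog_prompt = "User: Can you recommend a book on AI?\nAssistant:"
```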

Tuning

  • Tuning: The process of adapting a model to a new domain or set of custom use cases by training the model on new data. For example, we may collect training data and "tune" the LLM specifically for the legal or medical domain.
  • Fine-tuning: Bring your own dataset and retrain the model by tuning every weight in the LLM. This requires a very large training job and hosting your own fine-tuned model (a toy sketch of the difference follows).
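A minimal PyTorch sketch of the distinction (a tiny stand-in network, not a real LLM): full fine-tuning updates every weight, while a lighter, parameter-efficient form of tuning freezes most of the model and trains only a small part.

```python
import torch.nn as nn

# Stand-in for a pre-trained LLM: a tiny two-layer network.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

# Fine-tuning: every weight is trainable, so the optimizer sees all parameters.
full_params = [p for p in model.parameters() if p.requires_grad]
print(f"fine-tuning trains {sum(p.numel() for p in full_params)} parameters")

# Lighter tuning: freeze the body and train only the final layer.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True
light_params = [p for p in model.parameters() if p.requires_grad]
print(f"head-only tuning trains {sum(p.numel() for p in light_params)} parameters")
```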

Introduction to Responsible AI

Responsible AI doesn’t mean focusing only on controversial use cases. Without responsible AI practices, even seemingly innocuous AI use cases, or those with good intentions, could still cause ethical issues or unintended outcomes, or be less beneficial than they could be. Ethics and responsibility are essential because they represent the right thing to do and can guide AI design to be more useful for people's lives.

AI Principles by Google

  1. AI should be socially beneficial.
  2. AI should avoid creating or reinforcing unfair bias.
  3. AI should be built and tested for safety.
  4. AI should be accountable to people.
  5. AI should incorporate privacy design principles.
  6. AI should uphold high standards of scientific excellence.
  7. AI should be made available for uses that accord with these principles.
  8. In addition to these seven principles, there are specific AI applications Google will not pursue. It will not design or deploy AI in these four application areas:
    1. Technologies that cause or are likely to cause overall harm.
    2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
    3. Technologies that gather or use information for surveillance in ways that violate internationally accepted norms.
    4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Introduction to Image Generation

Denoising Diffusion Probabilistic Model Training
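The DDPM training procedure shown in the course's figures can be written down compactly: noise an image with the closed-form forward process x_t = √(ᾱ_t)·x_0 + √(1 − ᾱ_t)·ε and train a network to predict ε. A minimal numpy sketch of that forward step (random noise stands in for a real image, and no network is defined):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)  # cumulative product ᾱ_t

x0 = rng.standard_normal((8, 8))      # stand-in for a training image

def forward_diffuse(x0, t):
    """Closed-form forward process: x_t = sqrt(ᾱ_t)·x0 + sqrt(1-ᾱ_t)·ε."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

xt, eps = forward_diffuse(x0, t=500)
# Training would minimize the MSE between the model's noise prediction and eps:
# loss = ((model(xt, t) - eps) ** 2).mean()
```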

Encoder-Decoder Architecture

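As a rough illustration of the idea (not the course's exact model), a minimal PyTorch sketch: the encoder compresses the input sequence into a hidden state, and the decoder unrolls that state into the output sequence (toy sizes, untrained weights).

```python
import torch
import torch.nn as nn

vocab, hidden = 100, 32
embed = nn.Embedding(vocab, hidden)
encoder = nn.GRU(hidden, hidden, batch_first=True)
decoder = nn.GRU(hidden, hidden, batch_first=True)
to_vocab = nn.Linear(hidden, vocab)

src = torch.randint(0, vocab, (1, 5))  # input token ids
tgt = torch.randint(0, vocab, (1, 7))  # output token ids (shifted during training)

# Encoder: compress the whole input sequence into a final hidden state.
_, state = encoder(embed(src))
# Decoder: start from that state and produce one distribution per output step.
out, _ = decoder(embed(tgt), state)
logits = to_vocab(out)
print(logits.shape)  # (1, 7, 100): a next-token distribution at each step
```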

Attention Mechanism

  • The attention mechanism is a technique that allows the neural network to focus on specific parts of an input sequence.
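A minimal numpy sketch of scaled dot-product attention, the most common form: each query scores every key, the scores are softmaxed into weights, and the output is a weighted sum of the values.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QKᵀ/√d)·V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # how strongly each query attends to each key
    weights = softmax(scores)      # each row sums to 1
    return weights @ V             # weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))    # 3 queries of dimension 4
K = rng.standard_normal((5, 4))    # 5 keys
V = rng.standard_normal((5, 4))    # 5 values
print(attention(Q, K, V).shape)    # (3, 4)
```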

Transformer Models and BERT model

Transformer model architecture
  • The self-attention layer computes Query, Key, and Value vectors for each input token; the projection matrices that produce them are learned through backpropagation (see the sketch below).
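Continuing the numpy attention example above, a sketch of that step: Q, K, and V come from multiplying the same input embeddings by three learned weight matrices (randomly initialized here, since there is no training loop).

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

X = rng.standard_normal((5, d_model))  # embeddings for 5 input tokens

# Learned projection matrices (random stand-ins for trained weights).
W_q = rng.standard_normal((d_model, d_model))
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))

# Each token's Query, Key, and Value vector is a projection of its embedding;
# backpropagation through the attention layer is what trains W_q, W_k, W_v.
Q, K, V = X @ W_q, X @ W_k, X @ W_v
print(Q.shape, K.shape, V.shape)  # (5, 8) each
```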
Different types of Transformer models
  • BERT: Bidirectional Encoder Representations from Transformers
Different versions of BERT
  • BERT is pre-trained on two different tasks: masked language modeling (predicting masked-out tokens) and next sentence prediction.
  • Each word is represented by a token embedding.
  • How does BERT distinguish the two sentences in a given input pair? By using segment embeddings.
  • The order of the input sequence is captured by position embeddings. (The three are combined as sketched below.)
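A minimal PyTorch sketch of how the three combine: BERT's input representation is the element-wise sum of token, segment, and position embeddings (toy sizes here, untrained weights).

```python
import torch
import torch.nn as nn

vocab, max_len, hidden = 100, 16, 32
token_emb = nn.Embedding(vocab, hidden)
segment_emb = nn.Embedding(2, hidden)     # sentence A = 0, sentence B = 1
position_emb = nn.Embedding(max_len, hidden)

tokens = torch.tensor([[1, 5, 9, 2, 7, 3]])    # token ids for a sentence pair
segments = torch.tensor([[0, 0, 0, 1, 1, 1]])  # which sentence each token is in
positions = torch.arange(6).unsqueeze(0)       # positions 0..5

# BERT's input is the sum of the three embeddings for each token.
x = token_emb(tokens) + segment_emb(segments) + position_emb(positions)
print(x.shape)  # (1, 6, 32)
```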

Create Image Captioning Models

Decoder architecture

Introduction to Generative AI Studio

Different types of Prompting
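The types covered in the course include zero-shot, one-shot, and few-shot prompting; the prompts below are made up for illustration:

```python
# Zero-shot: the task description alone, with no examples.
zero_shot = "Classify the sentiment of: 'I loved this movie.'"

# One-shot: one worked example before the real input.
one_shot = (
    "Review: 'Terrible plot.' Sentiment: negative\n"
    "Review: 'I loved this movie.' Sentiment:"
)

# Few-shot: several examples so the model can pick up the pattern.
few_shot = (
    "Review: 'Terrible plot.' Sentiment: negative\n"
    "Review: 'Great acting!' Sentiment: positive\n"
    "Review: 'I loved this movie.' Sentiment:"
)
```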

Further Reading

  • ChatGPT Prompt Engineering for Developers
  • Building Systems with the ChatGPT API
The 10th course required purchasing tokens to access the lab, so I did not complete it.

⚠️ Disclaimer: All screenshots, materials, and other media used in this article are the copyright of the original platform or authors.