
7 Free Courses by Google on Generative AI
Advance your career with 7 Free Courses by Google on Generative AI! Dive into cutting-edge technologies and expand your skill set with these comprehensive resources. Elevate your expertise in artificial intelligence and take your career to new heights with Google's specialized courses.
Ghita El Haitmy
CEO & Content Creator @ Techbible
Artificial intelligence and machine learning have rapidly become two of the most dynamic and in-demand specialisations within cloud computing. As these fields continue to advance at speed, with innovations such as generative AI models, many IT professionals are considering how to chart a long-term career path in AI/ML. But with technology evolving so quickly, what skills do you need to future-proof your role?
Google has taken on a mission to support the upcoming generation of AI enthusiasts, striving to make AI education more accessible than ever. Their initiative includes a comprehensive library of free microlearning courses centred around generative AI, tailored to individuals ranging from absolute beginners to those with some prior experience.
These courses aim to equip learners with the skills necessary to improve their understanding and proficiency in the field.
Introduction to Generative AI
This is an introductory level microlearning course aimed at explaining what Generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google Tools to help you develop your own Gen AI apps. This course is estimated to take approximately 45 minutes to complete.
Join here
Introduction to Large Language Models
This introductory-level microlearning course explores what large language models (LLMs) are, the use cases where they can be applied, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own Gen AI apps. This course is estimated to take approximately 45 minutes to complete.
Join here
Attention Mechanism
This course will introduce you to the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You will learn how attention works and how it can improve the performance of various machine learning tasks, including machine translation, text summarization, and question answering.
Join here
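To give a feel for what the course covers, here is a minimal NumPy sketch of scaled dot-product attention, the core computation behind the mechanism (my own illustration, not course material):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; the weights for a query sum to 1."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (n_queries, n_keys) similarity scores
    weights = softmax(scores, axis=-1)  # attention distribution per query
    return weights @ V, weights         # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))  # 2 queries, dimension 4
K = rng.normal(size=(3, 4))  # 3 keys
V = rng.normal(size=(3, 4))  # 3 values
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4): one output vector per query
```

Each output row is a weighted average of the value vectors, with the weights telling the network which parts of the input to "pay attention" to.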
Transformer Models and BERT Model
This course introduces the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference.
Join here
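As a taste of the Transformer internals this course walks through, here is a short NumPy sketch of the sinusoidal positional encoding from the original Transformer design, which injects word-order information into the model (my own illustration, not course material):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)); PE[pos, 2i+1] = cos(same angle)."""
    pos = np.arange(seq_len)[:, None]           # token positions
    i = np.arange(d_model // 2)[None, :]        # dimension pair index
    angles = pos / (10000.0 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                # even dimensions
    pe[:, 1::2] = np.cos(angles)                # odd dimensions
    return pe

pe = sinusoidal_positional_encoding(seq_len=8, d_model=16)
print(pe.shape)  # (8, 16): one encoding vector per position
```

These vectors are simply added to the token embeddings, so self-attention, which is otherwise order-blind, can distinguish the first word from the last.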
Introduction to Image Generation
This course introduces diffusion models, a family of machine learning models that have recently shown promise in the image generation space. Diffusion models draw inspiration from physics, specifically thermodynamics, and within the last few years have become popular in both research and industry, underpinning many state-of-the-art image generation models and tools on Google Cloud. This course introduces you to the theory behind diffusion models and how to train and deploy them on Vertex AI.
Join here
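To give a flavour of the thermodynamics-inspired idea, here is a minimal NumPy sketch of the forward (noising) process used in DDPM-style diffusion models, which gradually corrupts data into noise; generation then learns to reverse it (an illustration, not course code):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]   # cumulative signal-retention factor
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)  # linear noise schedule over 1000 steps
x0 = rng.normal(size=(8, 8))           # stand-in for an image
x_early = forward_diffuse(x0, t=10, betas=betas, rng=rng)   # mostly signal
x_late = forward_diffuse(x0, t=999, betas=betas, rng=rng)   # mostly noise
```

By the final step the cumulative factor is close to zero, so `x_late` is almost pure Gaussian noise; training a network to undo these steps is what lets the model generate images from noise.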
Create Image Captioning Models
This course teaches you how to create an image captioning model using deep learning. You learn about the different components of an image captioning model, such as the encoder and decoder, and how to train and evaluate your model. By the end of this course, you will be able to create your own image captioning models and use them to generate captions for images.
Join here
Encoder-Decoder Architecture
This course gives you a synopsis of the encoder-decoder architecture, a powerful and prevalent machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering.
You learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you'll code a simple implementation of the encoder-decoder architecture in TensorFlow, building a poetry generator from scratch.
Join here
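As a rough preview of the pattern the lab builds properly in TensorFlow, here is a toy NumPy sketch of an encoder-decoder: the encoder folds the input sequence into a context vector, and the decoder emits output tokens one at a time via greedy decoding. The weights here are random and untrained, so the output is meaningless; this only illustrates the structure.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN, EOS = 10, 16, 0

# Toy parameters (random; a real model would learn these during training).
W_enc = rng.normal(scale=0.1, size=(VOCAB + HIDDEN, HIDDEN))
W_dec = rng.normal(scale=0.1, size=(VOCAB + HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(HIDDEN, VOCAB))

def one_hot(i, n=VOCAB):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def encode(tokens):
    """Encoder: fold the whole input sequence into one context vector."""
    h = np.zeros(HIDDEN)
    for t in tokens:
        h = np.tanh(np.concatenate([one_hot(t), h]) @ W_enc)
    return h

def greedy_decode(context, max_len=5):
    """Decoder: emit the highest-scoring token each step until EOS or max_len."""
    h, tok, out = context, EOS, []
    for _ in range(max_len):
        h = np.tanh(np.concatenate([one_hot(tok), h]) @ W_dec)
        tok = int(np.argmax(h @ W_out))
        if tok == EOS:
            break
        out.append(tok)
    return out

generated = greedy_decode(encode([3, 1, 4]))  # list of token ids
```

The same encode-then-decode loop underlies machine translation, summarization, and the course's poetry-generation lab; real implementations swap the toy recurrences for learned RNN or Transformer layers.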
Summary
Introduction to Generative AI
Introduction to Large Language Models
Attention Mechanism
Transformer Models and BERT Model
Introduction to Image Generation
Create Image Captioning Models
Encoder-Decoder Architecture