
Summary of Compute services of GCP – Basics of Google Cloud Platform

A summary of the compute services of GCP can be seen in Figure 1.32:

Figure 1.32: Summary Compute Service

Creation of Compute Engine (VM instances)

Compute Engine instances can run Google’s public Linux and Windows Server images as well as private custom images. Docker containers can also be deployed on instances running the Container-Optimized OS public image. Users need to specify the zone, OS, and machine configuration (number of virtual CPUs and amount of RAM) during creation. Each Compute Engine instance has a small persistent boot disk containing the OS, and more storage can be added. The default time zone of a VM instance is Coordinated Universal Time (UTC). Follow the steps below to create a virtual machine instance:

Step 1: Open Compute Engine:

Follow the steps described in Figure 1.33 to open Compute Engine:

Figure 1.33: VM Creation

  1. Users can type Compute Engine in the search box.
  2. Alternatively, users can navigate to Compute | Compute Engine | VM instances.

Step 2: Enable the API for Compute Engine:

Users will be prompted to enable APIs when the resources are accessed for the first time as shown in Figure 1.34:

Figure 1.34: VM Creation API enablement

  1. If prompted for API enablement, enable the API.
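The API can also be enabled from the command line. A minimal sketch using Cloud Shell, assuming the default project is already set:

gcloud services enable compute.googleapis.com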

Step 3: VM instance selection:

Follow the steps described in Figure 1.35 to select the VM:

Figure 1.35: VM Creation

  1. Select VM instances.
  2. Click on CREATE INSTANCE to begin the creation process.
  3. GCP also provides an IMPORT VM option for migrating existing machines to Compute Engine.

Step 4: VM instance creation:

Follow the steps described in Figure 1.36 to select location and machine type for the VM instance:

Figure 1.36: Location, machine selection for VM instance

  1. GCP provides multiple options for VM creation; the first creates a VM from scratch. We will create a New VM instance.
  2. VMs can be created from a template. Users can create a template and reuse it in the future.
  3. VMs can be created from machine images; a machine image is a resource that stores the configuration, metadata, and other information needed to create a VM.
  4. GCP also provides the option to deploy a ready-to-go solution onto the VM instance.
  5. The instance name has to be provided by the user.
  6. Labels are optional; they provide key/value pairs to group VMs together.
  7. Select the region and zone.
  8. GCP provides a wide range of machine configuration options: GENERAL-PURPOSE, COMPUTE-OPTIMIZED, MEMORY-OPTIMIZED, and GPU-based.
  9. Under each machine family, GCP provides a few machine series and machine types (to choose the balance of vCPUs and memory).
  10. Users can choose the CPU platform and GPU if they want to select the vCPU-to-core ratio and visible core count.

Note: Try selecting different machine configurations and observe the variation in the price estimates.

Step 5: VM instance disk and container selection:

Follow the steps described in Figure 1.37 to select boot disks for the instance:

Figure 1.37: Boot disk selection for VM instance

  1. Enabling a display device allows the use of screen capturing and recording tools.
  2. This option adds protection to your data in use by keeping the memory of this VM encrypted.
  3. The Deploy Container option is helpful when a container needs to be deployed to the VM instance using a Container-Optimized OS image.
  4. Users can change the operating system, and the size and type of the boot disk (HDD/SSD), by clicking on CHANGE.

Step 6: VM instance access settings:

Follow the steps described in Figure 1.38 to configure access:

Figure 1.38: Access control for VM instance

  1. Choose the default service account associated with the VM instance, or create a service account and use it for the Compute Engine instance.
  2. Users need to select how the VM accesses Cloud APIs: allow default access, allow full access to all APIs, or allow access to only selected APIs.
  3. By default, all incoming internet traffic is blocked; enable it if required.
  4. Additional options, such as disk protection, reservations, network tags, changing the host name, and deleting or retaining the boot disk when the instance is deleted, are provided under Networking, Disks, Security, and Management.
  5. Click on Create.
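An equivalent instance can also be created from the command line with gcloud. The following is a minimal sketch using hypothetical names and a basic configuration; the flags will differ for other machine types, images, and disks:

gcloud compute instances create my-instance \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --boot-disk-size=10GB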

Command Line Interface – Basics of Google Cloud Platform

If you are a developer, you can control your development process and your GCP resources using the gcloud command-line tool provided by the Google Cloud SDK. To get command-line access to Google Cloud’s computational resources, you can use Cloud Shell. Cloud Shell is a Debian-based virtual machine with a 5 GB home directory, from which you can easily manage your GCP projects and resources. Cloud Shell comes pre-installed with the gcloud command-line tool and other necessary tools, allowing you to get up and running quickly. To use Cloud Shell, follow the steps below:

Activate Cloud Shell as shown in Figure 1.17:

Figure 1.17: GCP CLI

Click on Activate Cloud Shell. It will take a few minutes to open the Cloud Shell command window.

Once Cloud Shell is activated, a terminal window appears at the bottom of the screen where commands can be typed, as shown in Figure 1.18:

Figure 1.18: GCP CLI

  1. Type commands:

gcloud projects create project_id – For project creation

gcloud projects delete project_id – For project deletion
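For example, a typical sequence with a hypothetical project ID might look like this:

gcloud projects create my-ml-project-123      # create the project
gcloud projects list                          # confirm it exists
gcloud config set project my-ml-project-123   # make it the default project
gcloud projects delete my-ml-project-123      # mark it for deletion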

APIs

It is common for applications to communicate with Google Cloud via a Software Development Kit (SDK). Go, Python, and Node.js are a few of the many programming languages for which Google Cloud SDKs are available.

Note: We will use this method while getting predictions from the deployed models.

Storage

Along with compute and network, storage is considered one of the fundamental building blocks. Applications benefit from the increased levels of persistence and durability that storage services provide. These services sit deep inside the platform and serve as the foundation for the vast majority of Google Cloud’s services, as well as the systems that you construct on top of it. They are the platform’s central pillars.

Three types of storage options are provided by Google Cloud:

  • Persistent Disks
  • Filestore
  • Cloud Storage (GCS)

They are explained as follows:

  • Persistent Disks: Block storage is provided by Google Cloud Persistent Disks, which are used by virtual machines hosted on Google Cloud (Google Cloud Compute Engine). Think of Persistent Disks as simple USB sticks: they can be attached to virtual machines or detached from them. They allow you to build data persistence for your services whether virtual machines are started, paused, or terminated. Persistent Disks power not just the virtual machines hosted on Google Cloud Compute Engine, but also the Google Kubernetes Engine service.

A Google Cloud Persistent Disk operates similarly to a virtual disk on your local PC. A Persistent Disk can be either HDD or SSD, with the latter offering superior I/O performance. In addition, there is a choice of where the disks are placed and the sort of availability that is required, which may be regional, zonal, or local.

Other lesser-known but useful capabilities of Google Cloud Persistent Disks include automatic encryption, the ability to resize the disk while it is in use, and a snapshot capability that can be used both for backing up data and for creating images for virtual machines. Read and write access can be configured for multiple VMs: one VM can have write access while all other VMs have read access to a Persistent Disk. A command-line sketch of creating and attaching a disk follows after this list.

  • Filestore: Filestore is a network file storage service that is provided by Google Cloud. The idea of network file storage has been around for quite some time, and similar to block storage, it can also be found in the on-premises data centers that most businesses use. You should be comfortable with the notion if you are used to dealing with NAS, which stands for network-attached storage. In response to the dearth of services that are compatible with network-attached storage (NAS), Google has expanded its offerings to include a cloud file storage service.
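As the command-line illustration referenced above, the following minimal sketch creates a persistent disk and attaches it to an existing instance; the names, zone, size, and type are hypothetical:

gcloud compute disks create my-data-disk \
    --size=100GB --type=pd-ssd --zone=us-central1-a

gcloud compute instances attach-disk my-instance \
    --disk=my-data-disk --zone=us-central1-a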

New Project Creation – Basics of Google Cloud Platform

Google Cloud projects serve as the foundation for the creation, activation, and use of all Google Cloud services, such as managing APIs, enabling invoicing, adding and deleting collaborators, and managing permissions for Google Cloud resources.

A project can be created by following these steps in the web console:

  1. Navigate to IAM & Admin | Manage resources. The sequence can be seen in Figure 1.12:

Figure 1.12: Project Creation

  1. Click on CREATE PROJECT.

The project creation can be seen in the following figure:

Figure 1.13: Project Creation

  1. Users need to provide a name for the project; follow the steps as shown in Figure 1.14:

Figure 1.14: Project Creation

  1. Provide Project name.
  2. The Project ID will be automatically populated; users can edit it during project creation. The project ID is needed to access the resources under the project through the SDK or APIs. Once the project is created, the project ID cannot be changed.
  3. If users are creating a project under an organization, select the organization. Users with a free account cannot create an organization or folder; all their projects will be created under No organization.

Note: Users accessing through a free account can create only a limited number of projects.

Deletion of Project

To delete any project that is active:

  1. Select the project that needs to be deleted.
  2. Click on DELETE; users will be prompted for confirmation.

This can be seen illustrated in Figure 1.15:

Figure 1.15: Project deletion

Once users confirm the deletion, the project is marked for deletion and remains in that state for 30 days. Users can restore the project within this 30-day period; after that, the project and the resources associated with it are deleted permanently and cannot be recovered. A project that is marked for deletion is also not usable.
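The same lifecycle can be managed from Cloud Shell; a minimal sketch with a hypothetical project ID:

gcloud projects delete my-ml-project-123     # marks the project for deletion
gcloud projects undelete my-ml-project-123   # restores it within the 30-day window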

Interacting with GCP services

Now that we have talked about resources, let us discuss how we can work with them. GCP offers three basic ways to interact with its services and resources:

Google Cloud Platform Console

When working with Google Cloud, the Console, or Web User Interface, is the most common method of interaction. It delivers a wealth of functionality and an easy-to-use interface for users who are just getting started with GCP.

Cloud console can be accessed with the link https://console.cloud.google.com.

The landing page of the Google Cloud console is shown in Figure 1.16:

Figure 1.16: GCP Console

Hierarchy of GCP – Basics of Google Cloud Platform

The term resource is used to describe anything that is put to use on Google Cloud Platform. Everything in Google Cloud follows a clear hierarchy that resembles a parent-child relationship. The hierarchy followed in Google Cloud Platform is shown in Figure 1.11:

Figure 1.11: Hierarchy of GCP

The Organization node serves as the starting point of the GCP hierarchy and may represent a company or an organization. The organization is the parent of folders, projects, and their respective resources. Access-control rules applied at the organization level apply to all of the projects and resources affiliated with it.

However, if we create an account with a personal mail ID, as we did in the previous section, we will not be able to view the organization. On the other hand, if we log in with a Google Workspace account and then start a project, the organization is provided for us automatically. In addition, without an organization, only a small number of Resource Manager functions are available.

Under the organization we have folders. Folders give us an extra grouping mechanism, and we can think of them as a hierarchy of sub-organizations inside the larger organization. A folder may contain additional subfolders. You have the option of granting rights to access a project and all of its resources, either completely or partially, at the folder level.

A project is the entity at the most fundamental level. It is possible to have many projects nested inside organizations and folders. A project is absolutely necessary in order to make use of GCP resources, and it serves as the foundation for using cloud services, managing APIs, and enabling billing. A project has two different IDs connected with it. The first is the project ID, which is a unique identifier for the project. The second is the project number, which is automatically assigned whenever a project is created and cannot be modified.
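Both identifiers can be inspected from the command line; for a hypothetical project ID, the output of gcloud projects describe includes, among other fields, the project ID and the automatically assigned project number:

gcloud projects describe my-ml-project-123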

The term resources refers to the components that make up Google Cloud Platform. Resources include things like cloud storage, databases, virtual machines, and so on. Each time we establish a cloud storage bucket or deploy a virtual machine, we link those resources to the appropriate project.

Services offered by GCP – Basics of Google Cloud Platform

Users may make use of a comprehensive selection of services provided by Google Cloud Platform. Every one of the services may be placed into one of the categories that are shown in the Figure 1.10:

Figure 1.10: Services of GCP

  • Google offers Cloud Storage for storing unstructured items, Filestore for sharing files in the traditional manner, and persistent disk for virtual machines in the storage space. Compute Engine, App Engine, Cloud Run, Kubernetes Engine, and Cloud Functions are the core computing services that Google Cloud Platform provides.
  • Cloud SQL, which supports MySQL, PostgreSQL, and Microsoft SQL Server, and Cloud Spanner, a massively scalable database capable of running on a global scale, are the relational database services that GCP provides.
  • Bigtable, Firestore, Memorystore, and Firebase Realtime Database are different NoSQL services that Google provides. When dealing with massive amounts of analytical work, Bigtable is the most effective solution. Firestore is well suited for use in the construction of client-side web and mobile apps. Firebase Realtime Database is ideal for real-time data synchronization between users, such as is required for collaborative app development. Memorystore is a kind of datastore that operates entirely inside memory and is generally used to accelerate application performance by caching data that is frequently accessed.
  • BigQuery is the name of the data warehouse service offered by Google.
  • A Virtual Private Cloud (VPC) is a virtualized network on GCP. By using VPC Network Peering, virtual private clouds may be linked to one another. Users may utilize Cloud VPN, which operates over the internet, to establish a protected connection between a VPC and an on-premises network. Alternatively, users can establish a dedicated, private connection by using either Cloud Interconnect or Peering. To facilitate the migration of applications and data sets to its platform, the platform provides a diverse set of options, and it offers Anthos as an option for hybrid cloud deployments.
  • The field of data analytics is one in which Google excels in particular. Pub/Sub is used as a buffer for services that may not be able to deal with large surges in the amount of incoming data. Dataproc is a managed Hadoop and Spark implementation. Dataflow is a managed data-processing service built on Apache Beam. You can do data processing using Dataprep even if you do not know how to write code, and it leverages Dataflow behind the scenes. Users may use Google Looker Studio to visualize or present their data using graphs, charts, and other graphical representations.
  • The platform provides AI and ML services for a diverse group of customers. Vertex AI provides an AutoML option for novices; for more experienced users, it provides pre-trained models that make predictions via API; and it also provides various options for advanced AI practitioners.
  • Cloud Build enables you to develop continuous integration / continuous deployment pipelines. Private Git repositories that are hosted on GCP are known as Cloud Source Repositories. Artifact Registry expands on the capabilities of Container Registry and is the recommended container registry for Google Cloud. It provides a single location for storing and managing your language packages and Docker container images.
  • IAM stands for Identity and Access Management, and it enables users and apps to have roles assigned to them. Everything you store in the GCP is by default encrypted. Companies now have the ability to control their encryption keys thanks to Cloud Key Management. Your API keys, passwords, certificates, and any other sensitive information may be safely stored in the Secret Manager.
  • The Monitoring, Logging, Error Reporting, Trace, Debugger, and Profiler functions are all included in the Cloud Operations suite. The Security Command Center presents active security threats, vulnerabilities, and compliance violations. The creation of Google Cloud Platform resources may be automated with the help of Cloud Deployment Manager.

Importance of Cloud for data scientist – Basics of Google Cloud Platform

Since the beginning of the previous decade, the expansion of data has followed an exponential pattern, and this trend is expected to continue. The safe and secure storage of data should be one of the top priorities of every company. The cloud is usually the top option when it comes to storing and processing the enormous quantity of data since it has all of the advantages that were discussed above. As a consequence of this, a data scientist in today’s world has to have experience with cloud computing in addition to their expertise in statistics, machine learning algorithms, and other areas.

However, due to the limited processing capacity of a local CPU, data scientists are often unable to carry out these responsibilities in a timely way, if they are able to at all. In addition, the memory of a local machine is often incapable of holding massive datasets. The available compute and memory determine how quickly an assignment is performed and how well it is accomplished. Thanks to the cloud, data scientists are now able to investigate more extensive collections of data without being constrained by the capabilities of their local workstations. Utilizing the cloud might also result in a decrease in the cost of infrastructure, since it eliminates the requirement for physical servers, and relying on the cloud for data storage reduces infrastructure costs further. In addition to offering data storage services, many cloud platforms, including Google Cloud Platform, also have other services catering to data ingestion, data processing, analytics, AI, and data visualization.

Types of Cloud

There are three types of cloud based on different capabilities:

  • Public Cloud
  • Private Cloud
  • Hybrid Cloud

Public Cloud: The public cloud is a massive collection of readily available computing resources, including networking, memory, processing elements, and storage. Users can rent these resources, which are housed in one of the public cloud vendor’s globally dispersed and fully managed datacenters, to create their IT architecture. In this form of cloud, users access their resources using a web browser. Google Cloud Platform is an example of a public cloud.

A major advantage of the public cloud is that the underlying hardware and logic are hosted, owned, and maintained by each vendor. Customers are not responsible for purchasing or maintaining the physical components that comprise their public cloud IT solutions. In addition, Service Level Agreements (SLAs) bind each provider to a monthly uptime percentage and security guarantee in accordance with regulations.

Private Cloud: Unlike public clouds, private clouds are owned and operated by a single organization. They are usually housed in the company’s datacenter and run on the organization’s own equipment. An organization may, however, use a third-party supplier to host its private cloud. Even when the resources are housed in a remotely managed datacenter, the private cloud shares certain characteristics with the public cloud in this case: the supplier may provide certain administrative services, but it cannot offer the full range of public cloud services.

If the private cloud is housed in its own datacenter, the organization has complete control over the whole system. A self-hosted private cloud may help comply with some of the stricter security and compliance regulations.

Hybrid Cloud: As the name indicates, this kind of cloud computing is a blend and integration of public and private clouds. It can therefore provide the advantages associated with both cloud types: a larger degree of flexibility in moving data, more adoption alternatives for a company, a high level of control, and an easier transition, all at more economical rates.

Advantages of Cloud – Basics of Google Cloud Platform

There are various advantages of cloud as shown in Figure 1.1, and mentioned as follows:

Figure 1.1: Advantages of Cloud platform

  • Cost efficiency: In terms of IT infrastructure management, cloud computing is undoubtedly the most cost-effective option. A variety of pay-as-you-go and other scalable choices make it affordable for organizations of any size to transition from on-premises hardware to the cloud. Using cloud resources avoids purchasing costly server equipment and PCs that require long hours of setup and maintenance. The cloud also helps reduce spending on compute, storage, network, operational, and upgrade expenses.
  • Scalability and elasticity: Overall, cloud hosting is more flexible than hosting on a local machine. You do not have to undertake a costly (and time-consuming) upgrade to your IT infrastructure if you need more bandwidth. This increased degree of latitude and adaptability may have a major impact on productivity.

Elasticity is employed for short periods to deal with rapid shifts in workload; it is a short-term strategy used to meet spikes in demand, whether unanticipated or seasonal. Scalability, in contrast, meets a sustained increase in workload and is a long-term approach used to cope with an anticipated rise in demand.

  • Security: Cloud platform provides a multitude of cutting-edge security measures, which ensure the safe storage and management of any data. Granular permissions and access control using federated roles are two examples of features that may help limit access to sensitive data to just those workers who have a legitimate need for it. This helps reduce the attack surface that is available to hostile actors. Authentication, access control, and encryption are some of the fundamental safeguards that providers of cloud storage put in place to secure their platforms and the data that is processed on those platforms. After that, users can implement additional security measures of their own, in addition to these precautions, to further strengthen cloud data protection and restrict access to sensitive information stored in the cloud.
  • Availability: The vast majority of cloud service providers are quite dependable in terms of the provision of their services; in fact, the vast majority of them maintain an uptime of 99.9 percent. Moving to the cloud should be done with the intention of achieving high availability. The goal is to make your company’s goods, services, and tools accessible to your clients and workers at any time of day and from any location in the world using any device that can connect to the internet.
  • Reduced downtime: Cloud based solutions provide the ability to operate critical systems and data directly from the cloud or to restore them to any location. During a catastrophic event involving information technology, they make it easier for you to get these systems back online, reducing the amount of manual work required by conventional recovery techniques.
  • Increased Collaboration: Developers, QA, operations, security, and product architects are all exposed to the same infrastructure and may work concurrently without tripping on one another’s toes in cloud settings. To minimize disputes and misunderstanding, cloud roles and permissions provide more visibility and monitoring of who performed what and when. Different cloud environments, such as staging, QA, demo, and pre-production, may be created for specialized reasons. The cloud makes transparent collaboration simpler and promotes it.
  • Insight: A bird’s-eye view of your data is also provided through the integrated cloud analytics offered by cloud platforms. When your data is kept in the cloud, it is much simpler to put monitoring systems in place and create individualized reports for analyzing information throughout the whole organization. You will be able to improve efficiency and construct action plans based on these insights, which will allow your organization to fulfil its objectives.
  • Control over data: The cloud gives you total visibility and control over your data. You have complete control over which users are granted access to which levels of specified data. This not only gives you control, but also helps simplify work by ensuring that staff members are aware of the tasks they have been allocated. Additionally, it makes working together much simpler: because several users can edit the same copy of a document at the same time, there is no need to distribute multiple copies of it.
  • Automatic software updates: There is nothing more cumbersome than being required to wait for the installation of system upgrades, especially for those who already have a lot on their plates. Applications that are hosted in the cloud refresh and update themselves automatically, eliminating the need for IT personnel to carry out manual updates for the whole organization. This saves critical time and money that would otherwise be spent on outside consulting.
  • Ease of managing: The use of cloud can streamline and improve IT maintenance and management capabilities through the use of agreements supported by SLA, centralized resource administration, and managed infrastructure. Users can take advantage of a simple user interface without having to worry about installing anything. In addition, users are provided with management, maintenance, and delivery of the IT services.

Introduction – Basics of Google Cloud Platform

You will learn about the Google Cloud Platform in this chapter, as well as its benefits and the role it plays in today’s digital revolution. It covers basic knowledge of cloud computing, including cloud service models, GCP account creation, footprint, range of services, and the GCP hierarchy. This chapter also introduces a few key GCP services, including storage, compute, Google BigQuery, and Identity and Access Management.

Structure

In this chapter, we will cover the following topics:

  • Introduction and basics of Cloud platform
  • Advantages of Cloud
  • Importance of Cloud for data scientists
  • Types of Cloud
  • Introduction to Google Cloud platform
  • Footprint of Google Cloud
  • Cloud service model
  • Services of GCP
  • Hierarchy of GCP
  • Interacting with GCP services
  • Storage in GCP
  • Compute in GCP
  • BigQuery
  • Identity and Access Management

Objectives

Before diving into Vertex AI on the Google Cloud platform, it is essential to grasp a few significant principles and vital services of the cloud platform. Users will have a solid understanding of the GCP components and services by the time this chapter ends. Detailed instructions for using GCP’s storage, compute, and BigQuery services are included.

Introduction to Cloud

The term Cloud describes the applications and databases that run on servers that can be accessed over the Internet. Data centers across the globe host the cloud servers. Organizations can avoid managing physical servers or running software on their own computers by utilizing cloud computing. The cloud enables users to access the same files and applications from almost any device, because the computing and storage takes place on servers in a data center, instead of locally on the user device.

For businesses, switching to cloud computing removes some IT costs and overhead: for instance, they no longer need to update and maintain their own servers, as the cloud vendor they are using will do that.

Installing Terraform – Deploying Skills Mapper

Terraform is a command-line tool that you can install on your local machine. It’s compatible with Windows, Mac, and Linux, and you can download it directly from the Terraform website. After downloading, you’ll need to add it to your system’s path to enable command-line execution. You can verify the installation by running terraform --version, which should return the installed version.

Terraform makes use of plugins that allow it to communicate with the APIs of service providers like Google Cloud. Not surprisingly, in this setup, you will mainly be using the Google Cloud provider. Terraform is not perfect, though, and it is common to come across small limitations. The Skills Mapper deployment is no exception, so there are a few workarounds required.

Terraform Workflow

Using the Terraform tool has four main steps:

terraform init

Initialize the Terraform environment and download any plugins needed.

terraform plan

Show what Terraform will do. Terraform will check the current state, compare it to the desired state, and show what it will do to get there.

terraform apply

Apply the changes to the infrastructure. Terraform will make the changes to the infrastructure to get to the desired state.

terraform destroy

Destroy the infrastructure. Terraform will remove all the infrastructure it created.
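One common pattern is to write the plan to a file and then apply that file, so that exactly what was reviewed is what gets executed. A minimal sketch of the sequence:

terraform init
terraform plan -out=tfplan
terraform apply tfplan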

Terraform Configuration

Terraform uses configuration files to define the desired state. For Skills Mapper, this is in the terraform directory of the GitHub repository. There are many files in this configuration, and they are separated into modules, which is Terraform’s way of grouping functionality for reuse.

Preparing for Terraform

Several prerequisites need to be in place before you can deploy using Terraform.

Creating Projects

First, you need to create two projects, an application project and a management project, as you did earlier in the book. Both projects must have billing enabled. The instructions for this are in Chapter 4.

Ensure you have the names of these projects available as environment variables (e.g., skillsmapper-application and skillsmapper-management, respectively):

APPLICATION_PROJECT_ID=skillsmapper-application
MANAGEMENT_PROJECT_ID=skillsmapper-management
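Terraform can also read input variables directly from environment variables prefixed with TF_VAR_. If the Skills Mapper configuration declares variables with these (hypothetical) names, they could be exported as:

export TF_VAR_application_project_id=$APPLICATION_PROJECT_ID
export TF_VAR_management_project_id=$MANAGEMENT_PROJECT_ID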

Conferences and Events – Going Further

Google hosts two significant events annually: Google Cloud Next and Google I/O, each serving distinct audiences and covering unique areas of focus.

Google I/O, typically held in the second quarter, is a developer-oriented conference. It’s designed primarily for software engineers and developers utilizing Google’s consumer-oriented platforms, such as Android, Chrome, and Firebase, as well as Google Cloud. The event offers detailed technical sessions on creating applications across web, mobile, and enterprise realms using Google technologies. It’s also renowned for product announcements related to Google’s consumer platforms.

Conversely, Google Cloud Next is aimed at enterprise IT professionals and Google Cloud developers, taking place usually in the third quarter. Its focus revolves around Google Cloud Platform (GCP) and Google Workspace. The event provides insights into the latest developments and innovations in cloud technology. It also presents networking opportunities, a wealth of learning resources, and expert-led sessions dedicated to helping businesses leverage the power of the cloud for transformative operational changes. Its feel is notably more corporate than Google I/O.

Both conferences record the hundreds of talks presented and make them accessible on YouTube. This wealth of knowledge is a fantastic resource for keeping abreast of the latest developments in Google Cloud and gaining an in-depth understanding of technical areas.

In addition to these main events, numerous local events tied to Google Cloud Next and Google I/O are organized by local Google teams or community groups. These include Google I/O Extended and Google Cloud Next Developer Days, which offer a summary of the content from the larger events. The Google Events website is a reliable source to stay updated on upcoming happenings.

Summary

As you turn the last page of this book, my hope is that it has kindled a fire in you—a deep, consuming desire to explore the vast and fascinating world of Google Cloud, but more importantly, to build with it and innovate. If it has, then this book has served its purpose.

Remember, you are not alone on this journey. There’s an immense community of like-minded cloud enthusiasts and Google Cloud experts, eager to support and guide you on this path. They’re rooting for your success—so embrace their help!

Writing this book has been an enriching experience, filled with growth and discovery. I trust that you’ve found reading it just as enjoyable. I would be thrilled to hear about your unique experiences and journeys with Google Cloud. Your feedback on this book is not only welcome but greatly appreciated.

To share your thoughts and experiences, or simply reach out, please visit my website at https://danielvaughan.com.

As you venture further into the world of cloud computing, remember: every day brings new opportunities for growth and innovation. Embrace them with open arms.

Happy cloud computing, and here’s to the incredible journey that lies ahead!