Summary of Compute services of GCP – Basics of Google Cloud Platform

A summary of the compute services of GCP can be seen in Figure 1.32:

Figure 1.32: Summary of compute services

Creation of Compute Engine (VM) instances

Compute Engine instances can run Google’s public Linux and Windows Server images as well as private custom images. Docker containers can also be deployed on instances running the Container-Optimized OS public image. Users need to specify the zone, OS, and machine type (number of virtual CPUs and amount of RAM) during creation. Each Compute Engine instance has a small persistent boot disk containing the OS, and more storage can be added. By default, a VM instance’s time zone is Coordinated Universal Time (UTC). Follow the steps below to create a virtual machine instance:

Step 1: Open Compute Engine:

Follow the steps described in Figure 1.33 to open Compute Engine in the platform:

Figure 1.33: VM Creation

  1. Users can type Compute Engine in the search box.
  2. Alternatively, users can navigate to Compute | Compute Engine | VM instances.

Step 2: Enable the API for Compute Engine:

Users will be prompted to enable APIs when the resources are accessed for the first time, as shown in Figure 1.34:

Figure 1.34: VM Creation API enablement

  1. If prompted for API enablement, enable the API.

Step 3: VM instance selection:

Follow the steps described in Figure 1.35 to select the VM:

Figure 1.35: VM Creation

  1. Select VM instances.
  2. Click on CREATE INSTANCE to begin the creation process.
  3. GCP also provides an IMPORT VM option for migrating existing machines to Compute Engine.

Step 4: VM instance creation:

Follow the steps described in Figure 1.36 to select location and machine type for the VM instance:

Figure 1.36: Location and machine selection for VM instance

  1. GCP provides multiple options for VM creation; the first is creating a VM from scratch. We will create a New VM instance.
  2. VMs can be created from a template. Users can create a template once and reuse it in the future.
  3. VMs can be created from machine images; a machine image is a resource that stores the configuration, metadata, and other information needed to create a VM.
  4. GCP also provides the option to deploy a ready-to-go solution onto the VM instance.
  5. The user must provide an instance name.
  6. Labels are optional; they provide key/value pairs to group VMs together.
  7. Select the region and zone.
  8. GCP provides a wide range of machine configurations: GENERAL-PURPOSE, COMPUTE-OPTIMIZED, MEMORY-OPTIMIZED, and GPU-based.
  9. Under each machine family, GCP provides a few machine series and machine types (to choose the balance between CPU and memory).
  10. Users can choose the CPU platform and GPU if they want to select the vCPU-to-core ratio and the visible core count.

Note: Try selecting different machine configurations and observe the variation in the price estimates.
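
For reference, the same VM can also be created from the command line. The following is a minimal sketch using the gcloud CLI, assuming the Google Cloud SDK is installed and a project is selected; the instance name, zone, machine type, and image are example values:

# Create a VM instance from a public Debian image (all values are examples)
gcloud compute instances create example-instance \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud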

Step 5: VM instance disk and container selection:

Follow the steps described in Figure 1.37 to select boot disks for the instance:

Figure 1.37: Boot disk selection for VM instance

  1. Enable display device enables the use of screen capturing and recording tools.
  2. The Confidential VM service adds protection to data in use by keeping the memory of the VM encrypted.
  3. The Deploy Container option is helpful when a container needs to be deployed to the VM instance using a Container-Optimized OS image.
  4. Users can change the operating system and the size and type of the boot disk (HDD/SSD) by clicking on CHANGE.
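
As a command-line counterpart to the Deploy Container option, a container can be deployed onto a Container-Optimized OS instance in one step. This is a hedged sketch; the instance name, zone, and container image are example values:

# Create a VM running a container on Container-Optimized OS
gcloud compute instances create-with-container example-container-vm \
    --zone=us-central1-a \
    --container-image=docker.io/nginx:latest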

Step 6: VM instance access settings:

Follow the steps described in Figure 1.38 to configure access:

Figure 1.38: Access control for VM instance

  1. Choose the default service account associated with the VM instance, or create a service account and use it for the Compute Engine instance.
  2. Users need to select how the VM is accessed: they can allow default access, allow full access to all Cloud APIs, or allow access to only a few APIs.
  3. By default, incoming internet traffic (HTTP/HTTPS) is blocked; enable it if needed.
  4. Additional options, such as disk protection, reservations, network tags, changing the hostname, and deleting or retaining the boot disk when the instance is deleted, are provided under Networking, Disks, Security, and Management.
  5. Click on Create.
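
The same access settings can also be expressed on the command line. In this minimal sketch, the service account address, scopes, and network tags are placeholder values, not taken from the text:

# Create a VM with an explicit service account, API access scope, and network tags
gcloud compute instances create example-instance \
    --zone=us-central1-a \
    --service-account=example-sa@example-project.iam.gserviceaccount.com \
    --scopes=cloud-platform \
    --tags=http-server,https-server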

Kubernetes Engine, App Engine, and Cloud Functions – Basics of Google Cloud Platform-2

Unique features of Google Kubernetes Engine include auto-upgrades, automatic node maintenance, and autoscaling (Kubernetes Engine allots additional resources when strain is placed on the applications).

  • Advanced cluster administration functions, such as load balancing, node pools, and logging and monitoring, are made available to users as a bonus:
    • Load balancing distributes traffic across Compute Engine instances.
    • Within a cluster, node pools identify subsets of nodes to provide greater flexibility.
    • Logging and monitoring work in conjunction with Cloud Monitoring and Cloud Logging so that you can see inside your cluster.
  • Google App Engine: Google App Engine is GCP’s platform as a service for developing and deploying scalable apps. It is a form of serverless computing, giving users the ability to execute their code without setting up virtual machines or Kubernetes clusters.

It is compatible with a variety of programming languages, including Java, Python, Node.js, and Go, and users can develop their apps in any of the supported languages. Google App Engine is equipped with a number of APIs and services that enable developers to create powerful, feature-rich applications. These include:

  • Access to the application log
  • Blobstore, to serve large data objects

Other important characteristics include a pay-as-you-go model, which means that you pay only for the resources used. When there is a spike in the number of users of an application, App Engine immediately increases the available resources, and vice versa.

Effective diagnostic services include Cloud Monitoring and Cloud Logging, which assist in scanning the app to locate faults. Error reports help developers fix any faults they find immediately.

As a component of A/B testing, traffic splitting is a feature that allows App Engine to automatically direct incoming traffic to different versions of the app. Users can plan subsequent increments depending on which version of the software performs most effectively.

There are two distinct kinds of App Engine environments:

  • Standard App Engine: applications are completely separate from the operating system of the server and from any other applications executing on that server at the same time. There is no need to install operating system packages or other pre-built software alongside the application code.
  • Flexible App Engine: users execute Docker containers inside the App Engine environment, and libraries or other third-party software can be installed in order to run the application code.
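
To make the standard environment concrete, a minimal deployment could look like the sketch below; the runtime and the one-line app.yaml are illustrative assumptions, and an application codebase is assumed to exist in the current directory:

# Write a minimal App Engine standard configuration (runtime is an example)
cat > app.yaml <<EOF
runtime: python312
EOF

# Deploy the application in the current directory to App Engine
gcloud app deploy app.yaml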

  • Google Cloud Functions: Google Cloud Functions is a lightweight serverless computing solution used for event-driven processing. It is GCP’s function as a service (FaaS) product. Through the use of Cloud Functions, the time-consuming tasks of maintaining servers, configuring software, upgrading frameworks, and patching operating systems are eliminated.

The user only has to provide code; Cloud Functions runs it in response to an event, since GCP fully manages both the software and the infrastructure. Cloud events are occurrences inside a cloud computing environment: changes to the data stored in a database, additions of new files to a storage system, and even the creation of new virtual machines are all instances of such events. A trigger is a declaration of interest in a certain event or set of events. Binding a function to a trigger allows the user to capture and act on events. Event data is the data passed to the Cloud Function when the event trigger results in function execution:

Figure 1.31: Cloud Function
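
As an illustration of binding a function to a trigger, the following hedged sketch deploys a first-generation function that runs when a new file lands in a Cloud Storage bucket; the function name, entry point, runtime, and bucket are example values:

# Deploy a function triggered by new objects in a Cloud Storage bucket
gcloud functions deploy on-file-upload \
    --runtime=python311 \
    --entry-point=handle_upload \
    --trigger-event=google.storage.object.finalize \
    --trigger-resource=example-bucket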

New Project Creation – Basics of Google Cloud Platform

Google Cloud projects serve as the foundation for the creation, activation, and use of all Google Cloud services, including managing APIs, enabling billing, adding and removing collaborators, and managing permissions for Google Cloud resources.

A project can be created by following these steps in the web console:

  1. Navigate to IAM & Admin | Manage resources. The sequence can be seen in Figure 1.12:

Figure 1.12: Project Creation

  1. Click on CREATE PROJECT.

The project creation can be seen in the following figure:

Figure 1.13: Project Creation

  1. Users provide a name for the project; follow the steps shown in Figure 1.14:

Figure 1.14: Project Creation

  1. Provide Project name.
  2. The Project ID will be automatically populated; users can edit it during project creation. The project ID is needed to access the project’s resources through the SDK or APIs. Once the project is created, the project ID cannot be changed.
  3. If creating a project under an organization, select the organization. Users with a free account cannot create an organization or folder; all their projects will be created under No organization.

Note: Users accessing GCP through a free account are limited in the number of projects they can create.
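
Projects can also be created from the command line with the gcloud CLI; in this brief sketch, the project ID, name, and organization ID are placeholder values:

# Create a new project (project ID and name are example values)
gcloud projects create example-project-id-12345 --name="Example Project"

# Or create it under an organization (organization ID is an example value)
gcloud projects create example-project-id-12345 --organization=123456789012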

Deletion of Project

To delete any project that is active:

  1. Select the project that needs to be deleted.
  2. Click on DELETE; users will be prompted for confirmation.

This is illustrated in Figure 1.15:

Figure 1.15: Project deletion

Once users confirm the deletion, the project is marked for deletion and remains in that state for 30 days. Users can restore the project within this 30-day period; after that, the project and the resources associated with it are deleted permanently and cannot be recovered. A project that is marked for deletion is not usable.
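
The equivalent command-line operations are shown in this short sketch; the project ID is a placeholder:

# Mark a project for deletion (it remains recoverable for 30 days)
gcloud projects delete example-project-id-12345

# Restore a project that is still within the 30-day window
gcloud projects undelete example-project-id-12345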

Interacting with GCP services

Now that we have talked about resources, let us discuss how we can work with them. GCP offers three basic ways to interact with its services and resources:

Google Cloud Platform Console

When working with Google Cloud, the Console, or web user interface, is the most common method of interaction. It delivers a wealth of functionality and an easy-to-use interface for users who are just getting started with GCP.

The Cloud Console can be accessed at https://console.cloud.google.com.

The landing page of the Google Cloud Console is shown in Figure 1.16:

Figure 1.16: GCP Console

Hierarchy of GCP – Basics of Google Cloud Platform

The term resource is used to describe anything that is put to use on Google Cloud Platform. Everything in Google Cloud has a clear hierarchy that resembles a parent-child relationship. The hierarchy followed in Google Cloud Platform is shown in Figure 1.11:

Figure 1.11: Hierarchy of GCP

The Organization node serves as the root of the GCP hierarchy and may represent an organization or a firm. The organization is the parent of folders and projects, as well as their respective resources. Access-control rules applied to the organization are applicable to all of the projects and resources affiliated with it.

However, if we create an account with a personal mail ID, as we did in the previous section, we will not be able to view the organization. On the other hand, if we log in with a Google Workspace account and then start a project, the organization is provided for us immediately. In addition, without an organization, only a small number of the Resource Manager functions are available.

Under the organization, we have folders. Folders give us an extra grouping mechanism and can be thought of as a hierarchy of sub-organizations within the larger organization. A folder may contain additional subfolders. Rights to access a project and all of its resources can be granted, completely or partially, based on the folder in question.

A project is the entity at the most fundamental level. Many projects can be nested inside organizations and folders. A project is absolutely necessary in order to use GCP resources, and it serves as the foundation for using cloud services, managing APIs, and enabling billing. A project has two different IDs connected with it: the project ID, a unique identifier for the project, and the project number, which is automatically issued whenever a project is created and cannot be modified.
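
Both identifiers can be inspected from the command line; in this sketch, the project ID is a placeholder, and the output includes the projectId along with the auto-assigned projectNumber:

# Show project metadata, including projectId and projectNumber
gcloud projects describe example-project-id-12345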

The term resources refers to the components that make up Google Cloud Platform. Resources include things like cloud storage, databases, and virtual machines. Each time we create a cloud storage bucket or deploy a virtual machine, we link those resources to the appropriate project.

Services offered by GCP – Basics of Google Cloud Platform

Users may make use of a comprehensive selection of services provided by Google Cloud Platform. Every one of these services may be placed into one of the categories shown in Figure 1.10:

Figure 1.10: Services of GCP

  • Google offers Cloud Storage for storing unstructured objects, Filestore for sharing files in the traditional manner, and Persistent Disk for virtual machine storage. Compute Engine, App Engine, Cloud Run, Kubernetes Engine, and Cloud Functions are the core computing services that Google Cloud Platform provides.
  • Cloud SQL, which supports MySQL, PostgreSQL, and Microsoft SQL Server, and Cloud Spanner, a massively scalable database capable of running on a global scale, are the relational database services that GCP provides.
  • Bigtable, Firestore, Memorystore, and Firebase Realtime Database are different NoSQL services that Google provides. When dealing with massive amounts of analytical work, Bigtable is the most effective solution. Firestore is well suited for use in the construction of client-side web and mobile apps. Firebase Realtime Database is ideal for real-time data synchronization between users, such as is required for collaborative app development. Memorystore is a kind of datastore that operates entirely inside memory and is generally used to accelerate application performance by caching data that is frequently accessed.
  • BigQuery is the name of the data warehouse service offered by Google.
  • A Virtual Private Cloud (VPC) is a virtual network on GCP. By using VPC Network Peering, virtual private clouds may be linked to one another. Users may utilize Cloud VPN, which operates over the internet, to establish a protected connection between a VPC and an on-premises network. Alternatively, users can establish a dedicated, private connection by using either Cloud Interconnect or Peering. To facilitate the migration of applications and data sets to its platform, GCP provides a diverse set of options, and it offers Anthos as an option for hybrid cloud deployments.
  • The field of data analytics is one in which Google particularly excels. Pub/Sub is used as a buffer for services that may not be able to deal with large surges of incoming data. Dataproc is a managed Hadoop and Spark implementation. Dataflow is a managed implementation of Apache Beam. Dataprep lets you do data processing even if you do not know how to write code, and it leverages Dataflow behind the scenes. Users may use Google Looker Studio to visualize or present data using graphs, charts, and other graphical representations.
  • The platform provides AI and ML services for a diverse group of customers. Vertex AI provides an AutoML option for novices; for more experienced users, it provides trained models that make predictions via API; and it provides various further options for advanced AI practitioners.
  • Cloud Build enables you to develop continuous integration/continuous deployment pipelines. Private Git repositories hosted on GCP are known as Cloud Source Repositories. Artifact Registry expands on the capabilities of Container Registry and is the recommended container registry for Google Cloud; it provides a single location for storing and managing language packages and Docker container images.
  • IAM stands for Identity and Access Management; it enables roles to be assigned to users and applications. Everything you store in GCP is encrypted by default. Cloud Key Management gives companies the ability to control their encryption keys. API keys, passwords, certificates, and other sensitive information may be safely stored in Secret Manager.
  • The Cloud Operations suite includes Monitoring, Logging, Error Reporting, Trace, Debugger, and Profiler. The Security Command Center presents active security threats, vulnerabilities, and compliance violations. The deployment of Google Cloud Platform resources may be automated with the help of Cloud Deployment Manager.

Cloud Service Model – Basics of Google Cloud Platform

The cloud platform offers a variety of services, all of which may be roughly placed into one of three distinct categories:

  • Infrastructure as a service (IaaS)
  • Platform as a service (PaaS)
  • Software as a service (SaaS)

The difference between the cloud service models is illustrated in Figure 1.9:

Figure 1.9: Cloud Service Model

Let us imagine we are developing an application and hosting it on a server located on our premises. In this circumstance, it is our obligation to own and maintain the infrastructure, the platforms, and, of course, our application.

  • Infrastructure as a service: In the case of IaaS, it is the cloud provider’s obligation to supply the necessary infrastructure, which may include virtual machines, networking, and storage devices. We are still responsible for ensuring that we have the appropriate platform for development and deployment. We do not control the underlying infrastructure; we simply make use of it. Google’s Compute Engine and Kubernetes Engine are examples of infrastructure as a service.
  • Platform as a service: In the case of PaaS, the cloud service provider is responsible, in addition to the infrastructure, for providing the appropriate platform for development and deployment, such as the operating system and tooling for the programming languages used. Google App Engine is one example of a PaaS offering.
  • Software as a service: In the case of SaaS, a cloud service provider rents out to customers applications that run on the provider’s infrastructure. Maintenance of the software applications also falls within the purview of the cloud service provider, in addition to the platform and the underlying infrastructure. These applications are accessible on whatever device we choose by way of web browsers, app clients, and so on. Email (Gmail) and cloud storage (Google Drive) are two excellent examples of SaaS.
  • Data as a service (DaaS): DaaS is a service that is only now starting to gain broad use, in contrast to the three service models mentioned before, which have been popular for more than a decade. This is partly owing to the fact that general cloud computing services were not originally built for managing enormous data workloads; rather, they catered to application hosting and basic data storage needs (as opposed to data integration, analytics, and processing).

SaaS eliminates the need to install and administer software on a local computer. Similarly, the Data-as-a-Service methodology centers on the on-demand delivery of data from a number of sources using application programming interfaces (APIs). It is intended to make data access more straightforward and to provide curated datasets or streams of data that can be consumed in a variety of formats. These formats are often unified via the use of data virtualization. In fact, a DaaS architecture may consist of a wide variety of data management technologies, such as data virtualization, data services, and self-service analytics.

In its most basic form, DaaS enables organizations to tap the ever-growing quantity and sophistication of data sources at their disposal in order to give consumers the most relevant insights. This democratization of data is necessary for every company that wants to get real value from its data. It offers significant potential to monetize an organization’s data and acquire a competitive edge by adopting a more data-centric approach to operations and procedures.

Advantages of Cloud – Basics of Google Cloud Platform

There are various advantages of cloud as shown in Figure 1.1, and mentioned as follows:

Figure 1.1: Advantages of Cloud platform

  • Cost efficiency: In terms of IT infrastructure management, cloud computing is undoubtedly the most cost-effective option. A variety of pay-as-you-go and other scalable choices make it affordable for organizations of any size to transition from on-premises hardware to the cloud. Using cloud resources avoids purchasing costly server equipment and PCs that require long hours of setup and maintenance. The cloud also helps reduce spending on compute, storage, network, operational, and upgrade expenses.
  • Scalability and elasticity: Overall, cloud hosting is more flexible than hosting on a local machine. You do not have to undertake a costly (and time-consuming) upgrade to your IT infrastructure if you need more bandwidth. This increased degree of latitude and adaptability may have a major impact on productivity.

Elasticity is employed only for a short amount of time to deal with rapid shifts in workload; it is a short-term strategy used to meet spikes in demand, whether unanticipated or seasonal. A static increase in workload, by contrast, is met through scalability: a long-term approach used to cope with an anticipated rise in demand.

  • Security: Cloud platform provides a multitude of cutting-edge security measures, which ensure the safe storage and management of any data. Granular permissions and access control using federated roles are two examples of features that may help limit access to sensitive data to just those workers who have a legitimate need for it. This helps reduce the attack surface that is available to hostile actors. Authentication, access control, and encryption are some of the fundamental safeguards that providers of cloud storage put in place to secure their platforms and the data that is processed on those platforms. After that, users can implement additional security measures of their own, in addition to these precautions, to further strengthen cloud data protection and restrict access to sensitive information stored in the cloud.
  • Availability: The vast majority of cloud service providers are quite dependable in terms of the provision of their services; in fact, the vast majority of them maintain an uptime of 99.9 percent. Moving to the cloud should be done with the intention of achieving high availability. The goal is to make your company’s goods, services, and tools accessible to your clients and workers at any time of day and from any location in the world using any device that can connect to the internet.
  • Reduced downtime: Cloud based solutions provide the ability to operate critical systems and data directly from the cloud or to restore them to any location. During a catastrophic event involving information technology, they make it easier for you to get these systems back online, reducing the amount of manual work required by conventional recovery techniques.
  • Increased Collaboration: Developers, QA, operations, security, and product architects are all exposed to the same infrastructure and may work concurrently without tripping on one another’s toes in cloud settings. To minimize disputes and misunderstanding, cloud roles and permissions provide more visibility and monitoring of who performed what and when. Different cloud environments, such as staging, QA, demo, and pre-production, may be created for specialized reasons. The cloud makes transparent collaboration simpler and promotes it.
  • Insight: A bird’s-eye perspective of your data is also provided through the integrated cloud analytics offered by cloud platforms. When your data is kept in the cloud, it is much simpler to put monitoring systems in place and create individualized reports for information analysis throughout the whole organization. You will be able to improve efficiency and construct action plans based on these insights, allowing your organization to fulfil its objectives.
  • Control over data: The cloud provides total visibility and control over your data. You have complete control over which users are granted access to which levels of specified data. This not only gives you control but also helps simplify work by ensuring that staff members are aware of the tasks they have been allocated. It also makes working together much simpler: because several users may edit the same copy of a document at the same time, there is no need to distribute multiple copies of it.
  • Automatic software updates: There is nothing more cumbersome than being required to wait for the installation of system upgrades, especially for those who already have a lot on their plates. Applications hosted in the cloud refresh and update themselves automatically, eliminating the need for IT personnel to carry out manual updates for the whole organization. This saves critical time and money that would otherwise be spent on outside consulting.
  • Ease of managing: The use of cloud can streamline and improve IT maintenance and management capabilities through the use of agreements supported by SLA, centralized resource administration, and managed infrastructure. Users can take advantage of a simple user interface without having to worry about installing anything. In addition, users are provided with management, maintenance, and delivery of the IT services.

Deploying with Terraform – Deploying Skills Mapper

To deploy the environment, you need to run the Terraform commands in the terraform directory.

First, initialize Terraform to download the needed plugins with:

terraform init

Then check that you have set the required variables in your terraform.tfvars with:

terraform validate

All being well, you should see Success! The configuration is valid.

Although Terraform can enable Google services, and these scripts do, doing so can be unreliable because services take time to enable. Use the enable_services.sh script to enable the services with gcloud instead:

./enable_services.sh

Next, run terraform plan. Terraform will show how many items would be added, changed, or destroyed. If you have not run Terraform on the projects before, you should see a lot of items to be added.

When you are ready, run the apply command:

terraform apply

Again, Terraform will devise a plan for meeting the desired state. This time, it will prompt you to approve applying the plan. Enter yes and watch while Terraform creates everything from this book for you. This may take 30 minutes, the majority of which will be the creation of the Cloud SQL database used by the fact service.

When completed, you will see several outputs from Terraform that look like this:

application-project = "skillsmapper-application"
git-commit = "3ecff393be00e331bb4412f4dc24a3caab2e0ab8"
management-project = "skillsmapper-management"
public-domain = "skillsmapper.org"
public-ip = "34.36.189.201"
tfstate_bucket_name = "d87cf08d1d01901c-bucket-tfstate"

The public-ip is the external IP of the global load balancer. Use this to create an A record in your DNS provider for the domain you provided.

Reapplying Terraform

If you make a change to the Terraform configuration, there are a few things you need to do before deploying Terraform again.

First, make sure you are using the application project:

gcloud config set project $APPLICATION_PROJECT_ID

Terraform is unable to change the API Gateway configuration, so you will need to delete it and allow Terraform to recreate it.

Also, if Cloud Run has deployed new versions of the services, you will need to remove them and allow Terraform to recreate them, too, as Terraform will have the wrong version.
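
For example, a stale Cloud Run service could be removed so that Terraform can recreate it. This is a hedged sketch: the service name is an assumption based on the fact service mentioned earlier, and the region is a placeholder:

# Delete a Cloud Run service so Terraform can recreate it at the right version
gcloud run services delete fact-service --region=us-central1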

This time you will notice only a few added, changed, or destroyed resources, as Terraform only applies the differences to what is already there.

Deleting Everything

When you have finished with Skills Mapper, you can also use Terraform to clean up completely using:

terraform destroy

This will remove all the infrastructure that Terraform has created.

At this point, you may also like to unlink the billing accounts from the projects so they can no longer be billed:

gcloud beta billing projects unlink $APPLICATION_PROJECT_ID
gcloud beta billing projects unlink $MANAGEMENT_PROJECT_ID

Installing Terraform – Deploying Skills Mapper

Terraform is a command-line tool that you can install on your local machine. It’s compatible with Windows, Mac, and Linux, and you can download it directly from the Terraform website. After downloading, you’ll need to add it to your system’s path to enable command-line execution. You can verify the installation by running terraform --version, which should return the installed version.

Terraform makes use of plugins that allow it to communicate with the APIs of service providers like Google Cloud. Not surprisingly, in this setup, you will mainly be using the Google Cloud provider. Terraform is not perfect, though, and it is common to come across small limitations. The Skills Mapper deployment is no exception, so there are a few workarounds required.

Terraform Workflow

Using the Terraform tool has four main steps:

terraform init

Initialize the Terraform environment and download any plugins needed.

terraform plan

Show what Terraform will do. Terraform will check the current state, compare it to the desired state, and show what it will do to get there.

terraform apply

Apply the changes to the infrastructure. Terraform will make the changes to the infrastructure to get to the desired state.

terraform destroy

Destroy the infrastructure. Terraform will remove all the infrastructure it created.

Terraform Configuration

Terraform uses configuration files to define the desired state. For Skills Mapper, these are in the terraform directory of the GitHub repository. There are many files in this configuration, and they are separated into modules, which is Terraform’s way of grouping functionality for reuse.

Preparing for Terraform

Several prerequisites need to be in place before you can deploy using Terraform.

Creating Projects

First, you need to create two projects, an application project and a management project, as you did earlier in the book. Both projects must have a billing account enabled; the instructions for this are in Chapter 4.

Ensure you have the names of these projects available as environment variables (e.g., skillsmapper-application and skillsmapper-management, respectively):

export APPLICATION_PROJECT_ID=skillsmapper-application
export MANAGEMENT_PROJECT_ID=skillsmapper-management

Conferences and Events – Going Further

Google hosts two significant events annually: Google Cloud Next and Google I/O, each serving distinct audiences and covering unique areas of focus.

Google I/O, typically held in the second quarter, is a developer-oriented conference. It’s designed primarily for software engineers and developers utilizing Google’s consumer-oriented platforms, such as Android, Chrome, and Firebase, as well as Google Cloud. The event offers detailed technical sessions on creating applications across web, mobile, and enterprise realms using Google technologies. It’s also renowned for product announcements related to Google’s consumer platforms.

Conversely, Google Cloud Next is aimed at enterprise IT professionals and Google Cloud developers, taking place usually in the third quarter. Its focus revolves around Google Cloud Platform (GCP) and Google Workspace. The event provides insights into the latest developments and innovations in cloud technology. It also presents networking opportunities, a wealth of learning resources, and expert-led sessions dedicated to helping businesses leverage the power of the cloud for transformative operational changes. Its feel is notably more corporate than Google I/O.

Both conferences record the hundreds of talks presented and make them accessible on YouTube. This wealth of knowledge is a fantastic resource for keeping abreast of the latest developments in Google Cloud and gaining an in-depth understanding of technical areas.

In addition to these main events, numerous local events tied to Google Cloud Next and Google I/O are organized by local Google teams or community groups. These include Google I/O Extended and Google Cloud Next Developer Days, which offer a summary of the content from the larger events. The Google Events website is a reliable source to stay updated on upcoming happenings.

Summary

As you turn the last page of this book, my hope is that it has kindled a fire in you—a deep, consuming desire to explore the vast and fascinating world of Google Cloud, but more importantly, to build with it and innovate. If it has, then this book has served its purpose.

Remember, you are not alone on this journey. There’s an immense community of like-minded cloud enthusiasts and Google Cloud experts, eager to support and guide you on this path. They’re rooting for your success—so embrace their help!

Writing this book has been an enriching experience, filled with growth and discovery. I trust that you’ve found reading it just as enjoyable. I would be thrilled to hear about your unique experiences and journeys with Google Cloud. Your feedback on this book is not only welcome but greatly appreciated.

To share your thoughts and experiences, or simply reach out, please visit my website at https://danielvaughan.com.

As you venture further into the world of cloud computing, remember: every day brings new opportunities for growth and innovation. Embrace them with open arms.

Happy cloud computing, and here’s to the incredible journey that lies ahead!