
Community Support – Going Further

Google Cloud hosts a vibrant community comprising Google staff, partners, customer groups, and the developer community, all providing substantial support, both online and in person.

Google Staff

Google employs a host of experts, including developer advocates, customer engineers, and professional services personnel, who are all dedicated to supporting various aspects of Google Cloud. While customer engineers and professional services cater predominantly to Google Cloud’s customers, developer advocates work to support the broader developer community, producing online content and presenting at conferences.

Partners

Google has partnered with a wide array of businesses, from global system integrators to boutique consultancies, to aid customers in utilizing Google Cloud effectively. They offer help in areas ranging from strategic planning to implementation and often specialize in certain areas. In addition to providing expertise and professional services similar to Google, some partners are authorized to provide training, boasting individuals certified as Google Certified Trainers.

Customer Groups

The Google Cloud Customer Community (C2C) is a peer-to-peer network that provides a sharing platform for Google Cloud customers. The community spans regions, enabling members to share ideas, experiences, and insights to address common challenges and encourage innovation. Anyone is welcome to join and participate in online forums and frequent free events, both online and in person.

Developer Community

Google supports several developer community programs catering to individual developers as opposed to customer organizations:

Google Developer Groups (GDG)

These are local groups of developers who meet online or in person to share experiences and learn from each other. These groups often utilize platforms like Meetup to advertise their events.

Women Techmakers

This initiative provides visibility, community, and resources specifically for women in technology.

Google Developer Student Clubs (GDSC)

Functioning much like Google Developer Groups, these are targeted toward university students, helping them learn about Google technologies across 1,900+ chapters in over 100 countries.

Google Developer Experts

This is a global network of over 1,000 professionals across 30+ countries, recognized for their expertise in Google technologies and their contributions to the community.

The Road to Google Certification is another significant initiative. Sponsored by GDG and held several times a year, this program is designed to help participants prepare for Google Cloud certifications at no cost. It comprises six weekly online sessions and supporting material, with the program being open to anyone interested. Note that it is managed by the GDG independently from Google.

Qwiklabs – Going Further

Qwiklabs, now rebranded as Google Cloud Skills Boost, offers an online learning environment featuring practical training labs for those seeking to deepen their understanding of Google Cloud. Each lab homes in on a particular topic and, with step-by-step instructions, guides you in interacting with Google Cloud resources directly in your web browser. Labs span beginner to advanced levels, with topics covering machine learning, security, infrastructure, and application development. Qwiklabs is a great platform for acquiring new skills and reinforcing those learned through other methods, such as Google Cloud’s certification programs.

While Qwiklabs usually requires paid access and operates on credits, Google often runs promotions offering free credits or access to select courses.

Tip

As of this writing, Google offers an annual subscription named Google Innovators Plus. For $299, you receive a year’s unlimited access to Google Cloud Skills Boost, cloud expert consultations, and $500 of Google Cloud credit. The package also includes a voucher for a Google Cloud Professional Certification exam (valued at $200), and if you pass, you’re granted an additional $500 of Google Cloud credit. The cloud credit from this subscription proved invaluable for funding my Google Cloud projects while writing this book: it was unquestionably a sound investment for me.

A monthly subscription option is also available for $29/month. However, this package doesn’t include the cloud credit and exam voucher benefits.

Non-Google Communities

Other non-Google platforms offer valuable content and engaging discussions:

Google Cloud Collective on Stack Overflow

A community on Stack Overflow where developers can post queries, share knowledge, and help resolve Google Cloud–related issues. It’s a reputable place for technical discussions and detailed problem-solving.

Google Cloud on Reddit

This subreddit is a vibrant community of Google Cloud users. Members can share their experiences, ask questions, discuss the latest trends, or even vent their frustrations about Google Cloud. It offers a mix of technical, business, and general content about Google Cloud.

Google Cloud community on Medium

This Medium publication provides a variety of articles about Google Cloud written by the community. Topics range from tutorials and use cases to insights and trends. It’s a great place to consume long-form content related to Google Cloud.

Online Learning Resources and Communities – Going Further

To help you gain a comprehensive understanding of Google Cloud, a wide array of online resources is available. Your learning journey could continue with the following:

Official Google Cloud documentation

This is a powerful tool offering in-depth coverage of all the services.

Google Cloud blog

This provides timely news, helpful tips, and insider tricks.

Google Cloud community

This forum is a space for discussions on various Google Cloud topics.

Developer center and community

This resource is specifically designed for the Google Cloud Developer community, offering events and articles tailored to their interests.

Remember, these are just the tip of the iceberg; a multitude of other resources are also at your disposal.

YouTube

Google, being the owner of YouTube, ensures the platform is a valuable source of freely available content related to Google Cloud. Here are a few standout channels and playlists:

Google Cloud Tech

This is the primary channel for all the latest updates on Google Cloud, including the This Week in Cloud series covering recent developments.

Serverless Expeditions playlist

A comprehensive video series on serverless development on Google Cloud. It aligns well with this book, featuring a significant focus on Cloud Run.

Google Cloud Events

This channel hosts recordings from Google Cloud Next conferences and other events. It’s a valuable resource since many of these talks come directly from the product developers themselves.

Google for Developers

Here, you can find recordings from Google I/O and other developer events. While not exclusively focused on Google Cloud, it provides a wide range of developer-oriented content.

Podcasts

For those who prefer audio content, there are several Google Cloud–related podcasts worth mentioning:

Google Cloud Platform Podcast

A weekly podcast that keeps you updated with the latest developments in Google Cloud. It also boasts an extensive back catalogue of episodes, offering insights into various aspects of Google Cloud.

Google Cloud Reader

A unique podcast that summarizes and presents the best articles from the Google Cloud blog on a weekly basis. It’s a great resource to keep up with important Google Cloud discussions without having to read through every article.

Kubernetes Podcast

Although it’s not exclusively about Google Cloud, this podcast produced by Google offers comprehensive information about Kubernetes, a crucial component in many Google Cloud services. This podcast is informative and handy for anyone wanting to deepen their understanding of Kubernetes and its applications in cloud environments.

Professional Certification – Going Further

If you have diligently worked through this book, I suggest starting with the Associate Cloud Engineer exam, progressing to the Professional Cloud Architect, and thereafter, tailoring your certification journey based on your interests and career aspirations. Although there is no rigid sequence for taking the exams, there is some overlap between them, and the more you undertake, the easier they become. For instance, once you’ve prepared for the Professional Architect exam, the Professional Developer exam does not require a great deal of additional preparation. Following is the full list of certifications available at the time of writing:

Cloud Digital Leader

Focuses on a foundational understanding of Google Cloud’s capabilities and their benefits to organizations

Associate Cloud Engineer

Highlights the hands-on skills needed for managing operations within Google Cloud

Professional Cloud Architect

Concentrates on the design, management, and orchestration of solutions using a comprehensive range of Google Cloud products and services

Professional Cloud Database Engineer

Addresses the design, management, and troubleshooting of Google Cloud databases, with an emphasis on data migrations

Professional Cloud Developer

Emphasizes the design, build, test, and deployment cycle of applications operating on Google Cloud

Professional Data Engineer

Designed for professionals constructing and securing data processing systems

Professional Cloud DevOps Engineer

Covers DevOps, SRE, CI/CD, and observability aspects within Google Cloud

Professional Cloud Security Engineer

Prioritizes the security of Google Cloud, its applications, data, and users

Professional Cloud Network Engineer

Concentrates on the design, planning, and implementation of Google Cloud networks, having significant overlap with security concepts

Professional Google Workspace Administrator

Targets professionals managing and securing Google Workspace, formerly known as G Suite

Professional Machine Learning Engineer

Serves those involved in the design, construction, and operationalization of machine learning models on Google Cloud

The exams are not easy—that is what makes them valuable—but they are not impossible either. Different people will have different preferences for how to prepare. When I have prepared for exams, I prefer to do a little, often: an hour of reading or watching a video in the morning followed by an hour of hands-on experimentation in the evening. I find that this helps me to retain the information and to build up my knowledge over time. As I get closer to the exam, I do more practice exams; Google provides example questions for each exam in its exam guide, which help you get used to the style of questions and identify any gaps in your knowledge to work on.

I have a ritual of booking my exam for 10 AM and having Starbucks tea and fruit toast followed by a walk before the exam. I arrive or set up in plenty of time, so I am relaxed. When the exam starts, I recommend reading questions very carefully, as there is often a small detail that makes all the difference to the answer.

Sometimes a difficult question can use up time; in this case, I flag it and move on. I also flag any questions I am not completely sure about and come back to them later. By the end of the exam, I am usually much more confident about my answers.

Often, there will be a piece of information in one question that may unlock a difficult question earlier on. Most importantly, if you are not sure, make a guess. You will not be penalized for a wrong answer, but you will be penalized for not answering a question.

When you finish and submit your exam, you will get a provisional pass or fail. Google does not give you a score or a breakdown telling you which questions you got wrong (unlike AWS, for example). You will get an email a few days later with your final result. You may also receive a code to redeem for a gift from Google (at the time of writing and depending on the exam), which is a nice touch. You can also list your certification in the Google Cloud Certified Directory. For example, you can see my profile in the Directory site.

Tip

Resist the temptation to use exam dumps for preparation. These question compilations are often shared in violation of the exam’s confidentiality agreement and tend to be outdated and misleading. The optimal way to prepare is to tap into the vast amount of learning material available, get hands-on experience, and take the official practice exams.

I’ve interviewed candidates who relied on exam dumps, and it’s usually clear: they struggle with basic questions. These exams are meant to gauge your understanding and proficiency with the platform, not rote memorization of facts. Encountering a familiar question in the exam is not as gratifying as being able to answer based on a solid understanding and practical experience.

It is a great feeling when you pass, and if you find the experience useful, there are many other specialties. One thing to note is that certification expires after two years, so if you do many exams at once, you will need to do them all again in two years to stay certified. The exception is that the Cloud Digital Leader and Associate Cloud Engineer certifications are valid for three years. Good luck on your certification journey!

Deploying the Pod – Scaling Up

The pod you’re about to deploy contains two containers. The first, Cloud SQL Proxy, establishes a connection to the Cloud SQL instance using permissions granted by the Google service account.

The second container holds the application. Unaware of its presence within Google Cloud or its deployment within a Kubernetes cluster, this application functions solely with the knowledge of its need to connect to a database. The connection details it requires are supplied through environment variables.

Scaling with a Horizontal Pod Autoscaler

In GKE Autopilot, as with other Kubernetes distributions, the number of instances (pods) for a service is not scaled up and down automatically by default, as it is in Cloud Run. Instead, you can scale the number of pods in the cluster using a HorizontalPodAutoscaler, which adjusts the pod count based on the CPU usage of the pods. This also differs from Cloud Run in that new pods are created when a CPU or memory usage threshold is reached, rather than in response to the number of requests.

In the k8s directory, autoscaler.yaml defines the autoscaler. It is configured to scale the number of pods between 1 and 10 based on the CPU usage of the pods. The CPU usage is measured over 30 seconds, and the target CPU usage is 50%. This means that if the CPU usage of the pods is over 50% for 30 seconds, then a new pod will be created. If the CPU usage is below 50% for 30 seconds, then a pod will be deleted.
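The configuration described might look something like the following sketch (the deployment name and namespace are assumptions, not taken from the book’s repository; the 30-second measurement window is governed by the autoscaler controller rather than by the manifest itself):

```yaml
# Hypothetical sketch of k8s/autoscaler.yaml: scale a "fact-service"
# deployment between 1 and 10 replicas, targeting 50% average CPU.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fact-service
  namespace: facts
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fact-service
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```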

This helps ensure that there is sufficient capacity to handle requests, but it does not guarantee that there will be sufficient capacity. If there is a sudden spike in requests, then the pods may not be able to handle the requests.

However, as GKE Autopilot will automatically scale the number of nodes in the cluster, there will likely be sufficient capacity to handle the requests.

Exposing with a Load Balancer

When using Cloud Run, you did not need to expose the application to the internet. It was automatically exposed to the internet via a load balancer. For GKE Autopilot, you need to expose the application to the internet using a Kubernetes load balancer and an ingress controller.

GKE Autopilot does have an ingress controller built in, so you don’t need to worry about configuring NGINX or similar. You can use this by creating an ingress resource and then annotating your service to use the ingress controller.

This is a point where you take the generic Kubernetes configuration and add a Google Cloud–specific annotation. In this case, you annotate the Service configuration for the fact service so that it uses the built-in ingress controller:
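The exact annotation is not reproduced here; a common choice on GKE is the NEG annotation, which enables container-native load balancing through network endpoint groups, so the sketch below is an assumption rather than the book’s literal configuration:

```yaml
# Hedged fragment: metadata to add to the Service definition.
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
```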

For me, this returned sub-100 ms response times, which was substantially better than with Cloud Run. It is a useful test for comparing how GKE and Cloud Run perform for different workloads.

Kubernetes Configuration – Scaling Up

The project also contains several generic Kubernetes YAML configurations in the k8s directory. These would be the same for any Kubernetes platform and define how to deploy the application:

namespace.yaml

A namespace is a way to group resources in Kubernetes much like a project does in Google Cloud. This configuration defines a facts namespace.
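A minimal namespace definition along these lines would suffice (the name facts comes from the text; everything else is boilerplate):

```yaml
# Declares the "facts" namespace used to group the application's resources.
apiVersion: v1
kind: Namespace
metadata:
  name: facts
```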

deployment.yaml

In Kubernetes, the smallest deployable unit is a pod. This is made up of one or more containers. In this configuration, the pod contains two containers: the fact service instance and the Cloud SQL Proxy. A deployment is a way to deploy and scale an identical set of pods. It contains a template section with the actual pod spec.
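As a rough, hypothetical sketch (the image paths, resource names, and proxy version are placeholders, not taken from the book’s repository), the deployment with its two-container pod template might look like this:

```yaml
# Sketch of a deployment whose pod template runs the fact service
# alongside the Cloud SQL Proxy as a second container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fact-service
  namespace: facts
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fact-service
  template:
    metadata:
      labels:
        app: fact-service
    spec:
      serviceAccountName: fact-service
      containers:
        - name: fact-service
          image: REGION-docker.pkg.dev/PROJECT_ID/repo/fact-service:latest  # placeholder
          ports:
            - containerPort: 8080
        - name: cloud-sql-proxy
          image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0  # version is illustrative
          args:
            - "PROJECT_ID:REGION:INSTANCE"  # placeholder connection name
```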

service.yaml

A Kubernetes service is a way to provide a stable network endpoint for the pod with an IP address and port. If there are multiple instances of pods, it also distributes traffic between them and stops routing traffic if a readiness or liveness probe fails.
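A sketch of such a service, assuming the pod labels and port numbers used elsewhere in these examples (they are illustrative, not the book’s values):

```yaml
# Routes traffic on port 80 to port 8080 of pods labeled app: fact-service.
apiVersion: v1
kind: Service
metadata:
  name: fact-service
  namespace: facts
spec:
  selector:
    app: fact-service
  ports:
    - port: 80
      targetPort: 8080
```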

ingress.yaml

An ingress is a way to expose a Kubernetes service to the internet. Here you are using it to expose the fact service.
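A minimal ingress along these lines would route all traffic to the fact service (the service name and port are assumptions carried over from the sketches above):

```yaml
# Exposes the fact-service Service via the cluster's ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fact-service
  namespace: facts
spec:
  defaultBackend:
    service:
      name: fact-service
      port:
        number: 80
```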

serviceaccount.yaml

A Kubernetes service account grants a pod access to other services and provides a stable identity for the pod.
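When bound with Workload Identity, the service account carries an annotation naming the Google service account to impersonate; the sketch below uses placeholder names:

```yaml
# Kubernetes service account bound to a Google service account
# via the Workload Identity annotation (names are placeholders).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fact-service
  namespace: facts
  annotations:
    iam.gke.io/gcp-service-account: fact-service@PROJECT_ID.iam.gserviceaccount.com
```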

Implementation

With the preparation done, you are now ready to deploy the application to GKE Autopilot. First, you will deploy the application to connect to Cloud SQL, as you did with the Cloud Run implementation. Then you will configure Cloud Spanner and use that as an alternative.

Create a GKE Autopilot Cluster

Unlike Cloud Run, GKE Autopilot is a Kubernetes cluster, albeit a highly managed one, not a serverless service. You need to provision a cluster to run your application on.
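Creating the cluster might look something like this (the cluster name and region are placeholders, not the book’s values):

```shell
# Create an Autopilot cluster, then fetch credentials so kubectl can use it.
gcloud container clusters create-auto fact-cluster --region=europe-west1
gcloud container clusters get-credentials fact-cluster --region=europe-west1
```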

If you have the kubectx command installed, you can enter it to list all the contexts in the kubeconfig file. These are all the clusters available to you. You should see the context for the cluster you just created and possibly any other Kubernetes clusters you have, for example, a local Minikube.

As GKE Autopilot is a fully managed Kubernetes cluster, the nodes are managed by Google, and you do not have access to them. For most people, this is a good thing, as managing a Kubernetes cluster yourself can get complicated very quickly.

Service Account Binding with Workload Identity

Kubernetes, like Google Cloud, has the concept of service accounts. These are a way to grant permissions to pods running in the cluster. You will create a Kubernetes service account and bind it to the Google service account you created earlier using Workload Identity. This will allow the pods to access the Cloud SQL instance.

This is not particularly straightforward, but once working, it provides a clean way of integrating workloads on Kubernetes with Google Cloud services without an explicit dependency on Google Cloud.
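The binding on the Google Cloud side grants the Kubernetes service account permission to impersonate the Google service account; a sketch of that step, with placeholder project, namespace, and account names, might be:

```shell
# Allow the Kubernetes service account facts/fact-service to act as the
# Google service account via Workload Identity (all names are placeholders).
gcloud iam service-accounts add-iam-policy-binding \
  fact-service@PROJECT_ID.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:PROJECT_ID.svc.id.goog[facts/fact-service]"
```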

Executing this command doesn’t directly create the service account. Instead, it sends a declarative configuration to the Kubernetes API server. This configuration describes the desired state for the service account: how you intend it to exist within your Kubernetes environment.

The kubectl apply command allows you to assert control over the system configuration. When invoked, Kubernetes compares your input (the desired state) with the current state of the system, making the necessary changes to align the two.

To put it simply, by running kubectl apply -f k8s/serviceaccount.yaml, you’re instructing Kubernetes, “This is how I want the service account setup to look. Please make it so.”

Preparation – Scaling Up

There are some small changes to the fact service needed to prepare it for Kubernetes and Cloud Spanner.

Getting Ready for Kubernetes

For Cloud Run, you are not strictly required to configure health checks for the application. For GKE Autopilot, you will need to use the Kubernetes readiness and liveness probes to check the health of the application. This is a great way to ensure that the application is running correctly and is ready to receive traffic:

Liveness check

This indicates that a pod is healthy. If it fails, Kubernetes restarts the container.

Readiness check

This indicates that the application is ready to receive traffic. Kubernetes will not send traffic to the pod until it is successful.

As your Spring Boot application takes several seconds to start, it is helpful to use the readiness probe to ensure the application is ready to receive traffic before Kubernetes sends any.

Fortunately, Spring Boot provides a health endpoint that you can use for this purpose. You can configure the readiness and liveness probes to use this endpoint.

You use these endpoints in the Kubernetes configuration to configure the readiness and liveness probes.
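Assuming Spring Boot’s actuator probes are enabled (which exposes the /actuator/health/liveness and /actuator/health/readiness endpoints), the probe configuration inside the deployment’s container spec might look like the following; the timings are illustrative:

```yaml
# Probe configuration for the fact-service container, using the
# Spring Boot actuator health endpoints (timings are illustrative).
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
```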

Getting Ready for Spanner

There are a few things to consider when using Spanner. Although it is PostgreSQL compatible, it is not fully PostgreSQL compliant, and this means there are some limitations.

The first is that it does not support sequences, so it is not possible to generate primary keys automatically, as it was with Cloud SQL. The version of the fact service in this chapter therefore uses universally unique identifiers (UUIDs) for primary keys instead of an ID auto-incremented by the database.

Hibernate, the ORM library the fact service uses, has a nice feature of automatically updating schemas. This is not supported by Spanner, so you need to manually create the schema. Fortunately, the single table is simple in this case, so it’s not a big issue. However, this does add an extra step to the deployment process.
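The manually created schema might look something like the hedged sketch below; the table and column names are assumptions, not taken from the book’s repository:

```sql
-- Hypothetical DDL for the single fact table in Spanner's PostgreSQL
-- dialect: an explicit primary key is required, and there is no
-- auto-increment, so the UUID key is stored as text.
CREATE TABLE fact (
  id varchar(36) NOT NULL,  -- UUID stored as text
  fact text,
  PRIMARY KEY (id)
);
```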

In Google Cloud Spanner, you can use the TIMESTAMP data type to store timestamp values with precision up to nanoseconds, but it does not store time zone information as Cloud SQL does. This means a zone-aware Java value, such as a ZonedDateTime, holds more information than can be stored in Spanner’s TIMESTAMP type.

To solve this issue, the common practice is to use two fields in your entity, one for the timestamp and another for the time zone. You store the timestamp as a String in a standardized format, like ISO 8601, and you store the time zone as another String. When you retrieve the data, you can parse the timestamp and apply the time zone. This is what has been done in this version of the fact service.
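A minimal sketch of this round trip using only the standard java.time API (the class and variable names are hypothetical, not the fact service’s actual code):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class SpannerTimestampExample {
    public static void main(String[] args) {
        // A zone-aware value we want to persist.
        ZonedDateTime original =
                ZonedDateTime.of(2023, 6, 1, 10, 30, 0, 0, ZoneId.of("Europe/London"));

        // Split into two strings for storage: an ISO 8601 local timestamp
        // and the time zone ID (hypothetical column values).
        String timestampColumn =
                original.toLocalDateTime().format(DateTimeFormatter.ISO_LOCAL_DATE_TIME);
        String zoneColumn = original.getZone().getId();

        // On read, parse the timestamp and reapply the zone to recover the value.
        ZonedDateTime restored =
                LocalDateTime.parse(timestampColumn).atZone(ZoneId.of(zoneColumn));

        System.out.println(restored.equals(original)); // prints "true"
    }
}
```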

These are the types of limitations you need to be aware of when using Spanner; they are small but significant. It is not a drop-in replacement for PostgreSQL. An application written to work with Cloud SQL for PostgreSQL will not necessarily work with Spanner, whereas an application written within the limitations of Spanner’s PostgreSQL interface will likely work with Cloud SQL for PostgreSQL. If you target only generic PostgreSQL, you will likely not be able to use Spanner without modification.

Tip

This is the trade-off you make when using a cloud native database. You get scalability and performance, but you lose some features of a traditional database. In this case, however, the benefits are large and the limitations relatively small.