
Summary of Compute services of GCP – Basics of Google Cloud Platform

The summary of compute services of GCP can be seen in Figure 1.32:

Figure 1.32: Summary of Compute Services

Creation of Compute Engine (VM instances)

Compute Engine instances may run Google’s public Linux and Windows Server images as well as private custom images. Docker containers may also be deployed on instances running the Container-Optimized OS public image. Users need to specify the zone, OS, and machine type (number of virtual CPUs and amount of RAM) during creation. Each Compute Engine instance has a small persistent boot disk containing the OS, and more storage can be added. The default time zone of a VM instance is Coordinated Universal Time (UTC). Follow the steps below to create a virtual machine instance:

Step 1: Open Compute Engine:

Follow the steps described in Figure 1.33 to open Compute Engine in the platform:

Figure 1.33: VM Creation

  1. Users can type Compute Engine in the search box.
  2. Alternatively, users can navigate under Compute to Compute Engine | VM instances.

Step 2: Enable the API for Compute Engine:

Users will be prompted to enable APIs when the resources are accessed for the first time as shown in Figure 1.34:

Figure 1.34: VM Creation API enablement

  1. If prompted for API enablement, enable the API.

Step 3: VM instance selection:

Follow the steps described in Figure 1.35 to select the VM:

Figure 1.35: VM Creation

  1. Select VM instances.
  2. Click on CREATE INSTANCE to begin the creation process.
  3. GCP also provides an IMPORT VM option for migrating existing machines to Compute Engine.

Step 4: VM instance creation:

Follow the steps described in Figure 1.36 to select location and machine type for the VM instance:

Figure 1.36: Location, machine selection for VM instance

  1. GCP provides multiple options for VM creation; the first is creating a VM from scratch. We will create a New VM instance.
  2. VMs can be created from a template. Users can create a template and reuse it in the future.
  3. VMs can be created from machine images; a machine image is a resource that stores the configuration, metadata, and other information needed to create a VM.
  4. GCP also provides the option to deploy a ready-to-go solution onto a VM instance.
  5. The instance name has to be given by the user.
  6. Labels are optional; they provide key/value pairs to group VMs together.
  7. Select the region and zone.
  8. GCP provides a wide range of machine configurations: GENERAL-PURPOSE, COMPUTE-OPTIMIZED, MEMORY-OPTIMIZED, and GPU-based.
  9. Under each machine family, GCP provides several machine series and machine types (to choose the balance between CPU and memory).
  10. Users can choose the CPU platform and GPU if they want to select the vCPUs-to-core ratio and visible core count.

Note: Try selecting different machine configurations and observe the variation in the price estimates.

Step 5: VM instance disk and container selection:

Follow the steps described in Figure 1.37 to select boot disks for the instance:

Figure 1.37: Boot disk selection for VM instance

  1. Enable display device enables the use of screen capturing and recording tools.
  2. Confidential VM service adds protection to data in use by keeping the memory of this VM encrypted.
  3. The Deploy Container option is helpful when there is a need to deploy a container to the VM instance using a Container-Optimized OS image.
  4. Users can change the operating system, and the size and type of the boot disk (HDD/SSD), by clicking on CHANGE.

Step 6: VM instance creation access settings:

Follow the steps described in Figure 1.38 to configure access:

Figure 1.38: Access control for VM instance

  1. Choose the default service account associated with the VM instance, or create a service account and use it for the Compute Engine instance.
  2. Users need to select how the VM is accessed: allow default access, allow full access to all Cloud APIs, or allow access to only selected APIs.
  3. By default, incoming internet traffic is blocked; enable HTTP/HTTPS traffic if required.
  4. Additional options, such as disk protection, reservations, network tags, changing the host name, and deleting or retaining the boot disk when the instance is deleted, are provided under Networking, Disks, Security, and Management.
  5. Click on Create.
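The console steps above can also be reproduced from the command line. The following is a minimal sketch using the gcloud CLI; the instance name, zone, machine type, and image are placeholder values rather than choices made in this book:

# Create a small VM from a public Debian image (all values are placeholders)
gcloud compute instances create my-first-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --boot-disk-size=10GB

Running gcloud compute instances list afterwards shows the new instance along with its internal and external IP addresses.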

Deletion of bucket – Basics of Google Cloud Platform-2

Unique features include auto-upgrades, node auto-maintenance, and autoscaling (Kubernetes Engine allocates additional resources when the load on the applications increases).

  • Advanced cluster administration functions, such as load balancing, node pools, and logging and monitoring, are made available to users as a bonus:
    • Compute Engine instances provide load balancing and traffic distribution.
    • Within a cluster, node pools identify subsets of nodes in order to provide greater flexibility.
    • Logging and monitoring integrate with Cloud Monitoring to give visibility into the cluster.
  • Google App Engine: GCP’s platform as a service for the development and deployment of scalable apps is known as Google App Engine. It is a form of serverless computing that gives users the ability to execute their code without setting up virtual machines or Kubernetes clusters.

It is compatible with a variety of programming languages, including Java, Python, Node.js, and Go; users may develop their apps in any of the supported languages. Google App Engine is equipped with a number of APIs and services that make it possible for developers to create powerful, feature-rich applications. These include the following:

  • Access to the application log
    • Blobstore, to serve large data objects

Other important characteristics include a pay-as-you-go model, meaning that you only pay for the resources used. When there is a spike in the number of users of an application, App Engine immediately increases the available resources, and vice versa.

Effective diagnostic services include Cloud Monitoring and Cloud Logging, which assist in scanning the app to locate faults. The error reports help developers fix any faults they find quickly.

As a component of A/B testing, Traffic Splitting is a feature that allows App Engine to automatically route portions of incoming traffic to different versions of the app. Users can plan subsequent increments depending on which version of the software performs most effectively.

There are two distinct kinds of App Engine environments:

  • Standard App Engine: Applications are completely isolated from the operating system of the server and from any other applications executing on that server at the same time. Apart from the application code, there is no need to install any operating system packages or other software.
  • Flexible App Engine: Users execute Docker containers inside the App Engine environment. In this environment, it is possible to install the libraries or other third-party software needed to run the application code.
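As a hedged illustration of the Standard environment workflow, an App Engine app is described by an app.yaml file and deployed with gcloud. The runtime and project ID below are placeholders, and a deployable app (for example, a small Python service) is assumed to already exist in the current directory:

# Minimal app.yaml for the Standard environment (Python app assumed)
cat > app.yaml <<EOF
runtime: python310
EOF
gcloud app deploy app.yaml --project=my-project-id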

  • Google Cloud Functions: Google Cloud Functions is a lightweight serverless compute solution used for event-driven processing; it is the function as a service (FAAS) product of GCP. With Cloud Functions, the time-consuming tasks of maintaining servers, installing software, upgrading frameworks, and patching operating systems are eliminated.

The user only has to provide code for Cloud Functions to run in response to an event, since GCP fully manages both the software and the infrastructure. Cloud events are occurrences that take place inside a cloud computing environment: changes to the data stored in a database, the addition of new files to a storage system, and even the creation of new virtual machines are all instances of such operations. A trigger is a declaration that you are interested in a certain event or set of events. Binding a function to a trigger allows users to capture and act on events. Event data is the data passed to the Cloud Function when the event trigger results in function execution:

Figure 1.31: Cloud Function
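To make the trigger/event relationship concrete, here is a sketch of deploying a function bound to a Cloud Storage trigger with gcloud; the function name, bucket, region, and runtime are placeholder assumptions:

# Deploy a function triggered whenever an object lands in the bucket
gcloud functions deploy process_upload \
    --runtime=python310 \
    --trigger-bucket=my-input-bucket \
    --region=us-central1 \
    --source=.

Each time a file is added to my-input-bucket, the event data describing the new object is passed to the function.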

Deletion of bucket – Basics of Google Cloud Platform-1

Follow the steps described in Figure 1.28 to delete the bucket:

Figure 1.28: Bucket deletion

  1. Select the bucket which needs to be deleted.
  2. Click on DELETE; you will be prompted with a pop-up where you need to type delete to confirm.
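The same deletion can be performed with the gcloud CLI; the bucket name below is a placeholder:

# Removes the bucket and everything in it
gcloud storage rm --recursive gs://my-example-bucket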

Compute

The provision of compute services is an essential component of any cloud services offering. One of the most important products that Google provides to companies that want to host their applications on the cloud is Google Compute Services. The umbrella term “computing” covers a wide variety of specialized services, configuration choices, and products that are all available for your use.

Users have access to the computing and hosting facilities provided by Google Cloud, where they may choose from the following available options:

  • Compute Engine: Users may create and use virtual machines in the cloud as server resources via Google Compute Engine, an IAAS platform. Users do not need to acquire or manage physical server hardware. Virtual machines are servers hosted in the cloud and operated in Google’s data centers. They provide configurable choices for the central processing unit (CPU), graphics processing unit (GPU), memory, and storage, in addition to several options for operating systems. Both a command line interface (CLI) and a web console are available for accessing virtual machines. When a Compute Engine instance is decommissioned, all of its data is erased; the persistence of data may therefore be ensured by attaching persistent disks, backed by either traditional or solid-state drives.

Virtual machines are run by a hypervisor. The hypervisor is a piece of software installed on the host operating system that is responsible for the creation and operation of many virtual machines. The hypervisor on the host computer makes it possible to run many virtual machines by allowing them to share the host’s resources. Figure 1.29 shows the relationship between VMs, the hypervisor, and the physical infrastructure.

Figure 1.29: Compute Engine

When it comes to pricing, stability, backups, scalability, and security, Google Compute Engine is a good choice. It is a cost-efficient solution, since consumers only pay for the time that they utilize the resources. It allows live migration of VMs from one host to another, which helps assure the system’s reliability. In addition, it has a reliable, built-in, redundant backup mechanism. Reservations help guarantee that applications have the capacity they need as they expand. Compute Engine also provides additional security for the applications running on it.

Compute Engine is a good solution for migrating established systems or for fine-grained management of the operating system and other operational features.

  • Kubernetes Engine: Google Kubernetes Engine, known as GKE, is the container as a service offering of GCP. Containerized apps are deployed onto clusters that are managed via the open-source Kubernetes cluster management system. Kubernetes provides the capabilities that make it possible for users to interact with container clusters. The Kubernetes system consists of a control plane and worker nodes; worker nodes are provisioned as Compute Engine instances.

Figure 1.30: Kubernetes Engine

VMs run guest operating systems on top of a hypervisor; with Kubernetes Engine, however, compute resources are segregated in containers, and both the container manager and the host operating system are responsible for their management.
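As an illustrative sketch, a GKE cluster can be created and wired up to kubectl with two gcloud commands; the cluster name, zone, node count, and machine type are placeholders:

gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --machine-type=e2-standard-2
# Fetch credentials so kubectl can talk to the new cluster
gcloud container clusters get-credentials my-cluster --zone=us-central1-a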

Working with Google Cloud Storage – Basics of Google Cloud Platform

The majority of the exercises covered in this book read data from Cloud Storage, so let us see how we can create a Cloud Storage bucket and upload files to it.

Step 1: Open cloud storage:

Follow the steps described in Figure 1.20 to open Cloud Storage:

Figure 1.20: Cloud Storage Creation

  1. Users can type cloud storage in the search box.
  2. Alternatively, users can navigate under STORAGE to Cloud Storage | Browser.

Step 2: Bucket creation:

Users can upload files or folders into cloud storage after creating a bucket. Initiate the bucket creation process as shown in Figure 1.21:

Figure 1.21: New bucket creation

  1. Click on CREATE BUCKET.

Step 3: Provide name for the bucket:

Follow the steps shown in Figure 1.22 to add a name and labels to the bucket:

Figure 1.22: Bucket name

  1. Users need to provide a globally unique name for the bucket.
  2. Labels are optional; they provide key/value pairs to group buckets (or even services) together.
  3. Click on Continue.

Step 4: Choosing location for bucket:

Follow steps in Figure 1.23 to choose the location of the bucket:

Figure 1.23: Location for the bucket

  1. Users can select the location type. Multi-region gives options to choose multiple regions of America, Europe, or Asia. Dual-region provides options to choose two regions belonging to the same continent (America, Europe, and Asia-Pacific). Region provides options to choose one region from the drop-down list. For the majority of the exercises in this book, data will be uploaded to a bucket belonging to a single region.
  2. Click Continue.

Step 5: Selecting storage class

Follow the steps shown in Figure 1.24 to select the class of storage:

Figure 1.24: Storage class for the bucket

  1. Choose Standard (for the exercises in this book, we will create a Standard bucket).
  2. Click on Continue.

Note: Try different options for location and storage class and observe the variation in the price estimates.

Step 6: Access control for buckets:

Follow the steps described in Figure 1.25 to configure access control for the bucket:

Figure 1.25: Access control for the bucket

  1. Selecting the box prevents public access from the internet; if this option is chosen, it will not be possible to grant public access through IAM policies.
  2. Uniform access control applies the same IAM policies to all the folders and files belonging to the bucket (select Uniform).
  3. In contrast to uniform access control, fine-grained access specifies policies for individual files and folders.
  4. Click Continue.

Step 7: Data protection in bucket:

Follow the steps described in Figure 1.26 for data protection:

Figure 1.26: Data protection for bucket

  1. GCP provides additional options for data protection: object versioning helps users with data recovery, and a retention policy helps with compliance (data cannot be deleted for a minimum period of time once it is uploaded).
  2. All data uploaded to GCP is encrypted with a Google-managed encryption key; users can choose customer-managed encryption keys (CMEK) for more control.
  3. Click on Create.
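The choices made in the steps above map directly onto flags of a single gcloud command. A minimal sketch, assuming a placeholder bucket name and a single region:

gcloud storage buckets create gs://my-unique-bucket-name \
    --location=us-central1 \
    --default-storage-class=STANDARD \
    --uniform-bucket-level-access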

Step 8: Uploading files to the bucket:

Follow the steps described in Figure 1.27 to upload files to cloud storage:

Figure 1.27: File upload to bucket

  1. The OBJECTS tab provides options to upload data.
  2. CREATE FOLDER creates a folder inside the bucket.
  3. UPLOAD FOLDER uploads a folder directly from the local system.
  4. UPLOAD FILES uploads individual files.
  5. All the options chosen during the bucket creation process are listed under CONFIGURATION.
  6. PERMISSIONS provides options to prevent public access and to switch between uniform and fine-grained access. It also provides options to add users for accessing the bucket.
  7. The PROTECTION tab provides options to enable or disable object versioning and the retention policy.
  8. LIFECYCLE rules allow you to perform operations on the items in a bucket when specific criteria are satisfied.
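Uploading and listing objects can likewise be done from the command line; the file and bucket names are placeholders:

gcloud storage cp ./data.csv gs://my-unique-bucket-name/
gcloud storage ls gs://my-unique-bucket-name/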

Hierarchy of GCP – Basics of Google Cloud Platform

The term resource is used to describe anything that is put to use on Google Cloud Platform. Everything in Google Cloud has a clear hierarchy that resembles a parent-child relationship. The hierarchy followed in Google Cloud Platform is shown in Figure 1.11:

Figure 1.11: Hierarchy of GCP

The Organization node serves as the starting point of the GCP hierarchy and may stand for either an organization or a firm. The organization is the parent of folders and projects, as well as their respective resources. The access control rules implemented on the organization apply to all of the projects and resources affiliated with it.

However, if we create an account with a personal mail ID, as we did in the previous section, we will not be able to view the organization. On the other hand, if we log in with a Google Workspace account and then start a project, the organization is provided for us immediately. In addition, without an organization, only a small number of the resource manager’s functions are available.

Under the organization, we have folders. Folders give us an extra grouping mechanism, and we may conceptualize them as a hierarchy of sub-organizations contained inside the larger organization. A folder may contain additional subfolders. You have the option of granting rights to access a project and all of its resources, either completely or partially, depending on the folder in question.

A project is the entity at the most fundamental level. It is possible to have many projects nested inside organizations and folders. A project is absolutely necessary in order to make use of GCP resources, and it serves as the foundation for using cloud services, managing APIs, and enabling billing. A project has two different IDs connected with it. The first is the project ID, which is a unique identifier for the project. The second is the project number, which is automatically issued whenever a project is created and cannot be modified.

The term resources refers to the components that make up Google Cloud Platform. Resources include things like cloud storage, databases, virtual machines, and so on. Each time we establish a cloud storage bucket or deploy a virtual machine, we link those resources to the appropriate project.
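The project ID and project number can be observed with the gcloud CLI. A small sketch with a placeholder project ID (project IDs must be globally unique):

gcloud projects create my-sample-project-id --name="My Sample Project"
# The output includes projectId (chosen by you) and projectNumber (auto-issued, immutable)
gcloud projects describe my-sample-project-id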

Services offered by GCP – Basics of Google Cloud Platform

Users may make use of a comprehensive selection of services provided by Google Cloud Platform. Each of the services may be placed into one of the categories shown in Figure 1.10:

Figure 1.10: Services of GCP

  • Google offers Cloud Storage for storing unstructured items, Filestore for sharing files in the traditional manner, and persistent disk for virtual machines in the storage space. Compute Engine, App Engine, Cloud Run, Kubernetes Engine, and Cloud Functions are the core computing services that Google Cloud Platform provides.
  • Cloud SQL, which supports MySQL, PostgreSQL, and Microsoft SQL Server, and Cloud Spanner, a massively scalable database capable of running on a global scale, are the relational database services that GCP provides.
  • Bigtable, Firestore, Memorystore, and Firebase Realtime Database are different NoSQL services that Google provides. When dealing with massive amounts of analytical work, Bigtable is the most effective solution. Firestore is well suited for use in the construction of client-side web and mobile apps. Firebase Realtime Database is ideal for real-time data synchronization between users, such as is required for collaborative app development. Memorystore is a kind of datastore that operates entirely inside memory and is generally used to accelerate application performance by caching data that is frequently accessed.
  • BigQuery is the name of the data warehouse service offered by Google.
  • A Virtual Private Cloud (VPC) is a virtual version of an on-premises network, hosted on GCP. By using VPC Network Peering, virtual private clouds may be linked to one another. Users may utilize Cloud VPN, which operates over the internet, to establish a protected connection between a VPC and an on-premises network. Alternatively, users can establish a dedicated, private connection by using either Cloud Interconnect or Peering. To facilitate the migration of applications and data sets to its platform, GCP provides a diverse set of options, and it offers Anthos as an alternative for hybrid cloud deployments.
  • The field of data analytics is one in which Google particularly excels. Pub/Sub is used as a buffer for services that may not be able to deal with large surges in incoming data. Dataproc is a managed Hadoop and Spark implementation. Dataflow is a managed implementation built on Apache Beam. Dataprep lets you do data processing even if you do not know how to write code, and it leverages Dataflow behind the scenes. Users may use Google Looker Studio to visualize or present data using graphs, charts, and other graphical representations.
  • The platform provides AI and ML services for a diverse group of customers. Vertex AI provides an AutoML option for novices; for more experienced users, it provides pre-trained models that make predictions via API, and it also offers various options for advanced AI practitioners.
  • Cloud Build enables you to develop continuous integration / continuous deployment pipelines. Private Git repositories that are hosted on GCP are known as Cloud Source Repositories. Artifact Registry expands on the capabilities of Container Registry and is the recommended container registry for Google Cloud. It provides a single location for storing and managing your language packages and Docker container images.
  • IAM stands for Identity and Access Management, and it enables users and apps to have roles assigned to them. Everything you store in the GCP is by default encrypted. Companies now have the ability to control their encryption keys thanks to Cloud Key Management. Your API keys, passwords, certificates, and any other sensitive information may be safely stored in the Secret Manager.
  • The Monitoring, Logging, Error Reporting, Trace, Debugger, and Profiler functions are all included in the Cloud Operations suite. The Security Command Center presents active security threats, vulnerabilities, and compliance violations. The provisioning of Google Cloud Platform resources may be automated with the help of Cloud Deployment Manager.

Cloud Service Model – Basics of Google Cloud Platform

The cloud platform offers a variety of services, all of which may be roughly placed into one of three distinct categories:

  • Infrastructure as a service (IAAS)
  • Platform as a service (PAAS)
  • Software as a service (SAAS)

The difference between the cloud service models is illustrated in Figure 1.9:

Figure 1.9: Cloud Service Model

Let us imagine we are working on an application and hosting it at the same time on a server that is located on our premises. In this particular circumstance, it is our obligation to own and maintain the appropriate infrastructure, as well as the appropriate platforms, and of course, our application.

  • Infrastructure as a service: In the case of IAAS, it is the cloud provider’s obligation to provide the necessary infrastructure, which may include virtual machines, networking, and storage devices. We are still responsible for ensuring that we have the appropriate platform for development and deployment. We do not control the underlying infrastructure; we simply make use of it. Google’s Compute Engine and Kubernetes Engine are examples of infrastructure as a service.
  • Platform as a service: In the case of PAAS, the cloud service provider is responsible for providing the appropriate platform for development and deployment, such as the operating system and the tooling for the programming languages used, in addition to providing the underlying infrastructure. One example of a PAAS platform is Google App Engine.
  • Software as a service: In the case of SAAS, a cloud service provider rents out applications that run on its infrastructure to its customers. The maintenance of the software applications also falls within the purview of the cloud service provider, in addition to the platform and the underlying infrastructure. These software programs are accessible to us on whatever device we choose by way of web browsers, app browsers, and so on. Email (Gmail) and cloud storage (Google Drive) are two excellent instances of SAAS.
  • Data as a service (DAAS): DAAS is a service that is now starting to gain broad use, in contrast to the three service models that were mentioned before, which have been popular for more than a decade. This is partly owing to the fact that general cloud computing services were not originally built for the management of enormous data workloads; rather, they catered to the hosting of applications and basic data storage needs (as opposed to data integration, analytics, and processing).

SaaS eliminates the need to install and administer software on a local computer. Similarly, the Data-as-a-Service methodology centers on the on-demand delivery of data from a number of sources using application programming interfaces (APIs). It is intended to make data access more straightforward and to provide curated datasets or streams of data that can be consumed in a variety of formats. These formats are often unified via the use of data virtualization. In fact, a DAAS architecture may consist of a wide variety of data management technologies, such as data virtualization, data services, and self-service analytics.

In its most basic form, DAAS enables organizations to tap the ever-growing quantity and sophistication of the data sources at their disposal in order to give consumers the most relevant insights. The democratization of data is absolutely necessary for every company that wants to get actual value from its data. DAAS offers a significant potential to monetize an organization’s data and acquire a competitive edge by adopting a more data-centric approach to operations and procedures.

Footprint of Google Cloud Platform – Basics of Google Cloud Platform

Independent geographical areas are known as regions, while zones make up regions. Zones and regions are logical abstractions of the underlying physical resources offered in one or more datacenters physically located throughout the world. Within a region, Google Cloud resources are deployed to specific locations referred to as zones. It is important that each zone is seen as a single failure domain within a region. Figure 1.8 shows the footprint of GCP:

Figure 1.8: Footprint of GCP

At the time this book was written, there were about 34 regions, 103 zones, and 147 network edge locations across 200+ countries. GCP is constantly increasing its presence across the globe; please check the link mentioned below to get the latest numbers.

Image source: https://cloud.google.com/about/locations

The services and resources offered by Google Cloud may either be handled on a zonal or regional level, or they can be managed centrally by Google across various regions:

  • Zonal resources: The resources in a zone only work in that zone. When a zone goes down, some or all of the resources in that zone can be affected.
  • Regional resources: They are spread across multiple zones in a region to make sure they are always available.
  • Multiregional resources: Google manages a number of Google Cloud services to be redundant and spread both inside and between regions. These services improve resource efficiency, performance, and availability.
  • Global resources: Any resource within the same project has access to global resources from any zone. There is no requirement to specify a scope when creating a global resource.
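The regions and zones currently visible to a project can be listed with gcloud, for example:

gcloud compute regions list
# Zones belonging to one region (the filter value is an example)
gcloud compute zones list --filter="region:us-central1"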

Network edge locations are helpful for hosting static content that is popular with the user base of the hosting service. The content is temporarily cached on these edge nodes, which enables users to get the information from a place much closer to where they are located. Users will have a more positive experience as a result.

There are a few benefits associated with GCP’s regions and zones. The notion of regions and zones is helpful when it comes to ensuring high availability, high redundancy, and high dependability. It also helps obey the laws and regulations established by governments, since data rules might vary greatly from one nation to the next.

Introduction to Google Cloud Platform – Basics of Google Cloud Platform

Google Cloud Platform is one of the hyperscale infrastructure providers in the industry. It is a collection of cloud computing services offered by Google. These services operate on the same infrastructure that Google employs for its end-user products, including YouTube, Gmail, and a number of other offerings. The Google Cloud Platform provides a wide range of services, such as computing, storage, and networking, among other things.

Google Cloud Platform was first launched in 2008, and as of now, it is the third most widely used cloud platform. Additionally, there is a growing need for platforms hosted on the cloud.

The Google cloud gives us a service-centric perspective of all our environments in addition to providing a standard platform and data analysis for deployments, regardless of where they are physically located. Using the capabilities of sophisticated analytics and machine learning offered by Google Cloud, we can extract the most useful insights from our data. Users will be able to automate procedures, generate predictions, and simplify administration and operations with the support of Google’s serverless data analytics and machine learning platform. The services provided by Google Cloud encrypt data while it is stored, while it is being sent, and while it is being used. Advanced security mechanisms protect the privacy of data.

Account creation on Google Cloud Platform

Users can create a free GCP account from the link https://cloud.google.com/free.

A free account provides $300 of credit for a period of 90 days.

Steps for creating a free account are as follows:

  1. Open https://cloud.google.com/free.
  2. Click on Get started for free.

The opening screen looks like Figure 1.2:

Figure 1.2: GCP account creation

  1. Log in with your Gmail credentials; create an account if you do not have one. This is illustrated in Figure 1.3:

Figure 1.3: GCP account creation enter valid mail address

  1. Select your COUNTRY and what best describes your needs:

Figure 1.4: GCP account creation country selection

  1. Select the Country and project. Check the Terms of service and click on CONTINUE.
  2. Provide phone number for the identity verification as shown in Figure 1.5:

Figure 1.5: GCP account creation enter phone number

  1. Free accounts require a credit card. Verification costs Rs 2, and an address must be provided. Click on START MY FREE TRIAL on this page:

Figure 1.6: GCP account creation enter valid credit card details

  1. Users will land on this page once the free trial has started. The welcome page can be seen in Figure 1.7:

Figure 1.7: Landing page of GCP
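Once the account exists, the gcloud CLI, if installed, can be pointed at it; gcloud init walks through authentication and default project selection interactively:

gcloud init
# Confirm the active account and configuration
gcloud auth list
gcloud config list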

Deploying with Terraform – Deploying Skills Mapper

To deploy the environment, you need to run the Terraform commands in the terraform directory.

First, initialize Terraform to download the needed plugins with:

terraform init

Then check that you have set the required variables in your terraform.tfvars with:

terraform validate

All being well, you should see Success! The configuration is valid.

Although Terraform can enable Google services, and these scripts do, it can be unreliable, as services take time to enable. Use the enable_services.sh script to enable services with gcloud:

./enable_services.sh
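At this point, a plan step belongs in the standard Terraform workflow (the text below refers to its output), so presumably you would run:

terraform plan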

Terraform will then show how many items would be added, changed, or destroyed. If you have not run Terraform on the projects before, you should see a lot of items to be added.

When you are ready, run the apply command:

terraform apply

Again, Terraform will devise a plan for meeting the desired state. This time, it will prompt you to approve applying the plan. Enter yes and watch while Terraform creates everything from this book for you. This may take 30 minutes, most of which will be spent creating the Cloud SQL database used by the fact service.

When completed, you will see several outputs from Terraform that look like this:

application-project = "skillsmapper-application"
git-commit = "3ecff393be00e331bb4412f4dc24a3caab2e0ab8"
management-project = "skillsmapper-management"
public-domain = "skillsmapper.org"
public-ip = "34.36.189.201"
tfstate_bucket_name = "d87cf08d1d01901c-bucket-tfstate"

The public-ip is the external IP of the global load balancer. Use this to create an A record in your DNS provider for the domain you provided.
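If your domain happens to be managed by Cloud DNS, the A record could be created with gcloud as in the sketch below; the managed zone name is an assumption, while the domain and IP are taken from the example output above:

gcloud dns record-sets create skillsmapper.org. \
    --zone=my-dns-zone \
    --type=A \
    --ttl=300 \
    --rrdatas=34.36.189.201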

Reapplying Terraform

If you make a change to the Terraform configuration, there are a few things you need to do before deploying Terraform again.

First, make sure you are using the application project:

gcloud config set project $APPLICATION_PROJECT_ID

Terraform is unable to change the API Gateway configuration, so you will need to delete it and allow Terraform to recreate it.

Also, if Cloud Run has deployed new versions of the services, you will need to remove them and allow Terraform to recreate them, too, as Terraform will have the wrong version.
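Both deletions can be done with gcloud; the gateway name, service name, and locations below are placeholders, since the actual names come from the Terraform configuration:

gcloud api-gateway gateways delete my-gateway --location=us-central1
gcloud run services delete my-service --region=us-central1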

This time you will notice only a few added, changed, or destroyed resources, as Terraform only applies the differences to what is already there.

Deleting Everything

When you have finished with Skills Mapper, you can also use Terraform to clean up completely using:

terraform destroy

This will remove all the infrastructure that Terraform has created.

At this point, you may also like to unlink the billing accounts from the projects so they can no longer be billed:

gcloud beta billing projects unlink $APPLICATION_PROJECT_ID
gcloud beta billing projects unlink $MANAGEMENT_PROJECT_ID