
Deletion of bucket – Basics of Google Cloud Platform-1

Follow steps described in Figure 1.28 to delete the bucket:

Figure 1.28: Bucket deletion

  1. Select the bucket which needs to be deleted.
  2. Click on DELETE; you will be prompted with a pop-up where you need to type delete to confirm.
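The same deletion can also be performed from Cloud Shell; the bucket name below is a placeholder, and the sketch assumes the gcloud storage commands shipped with recent versions of the Google Cloud SDK:

```shell
# Delete all objects in the bucket and then the bucket itself
# (replace my-example-bucket with your bucket's name).
gcloud storage rm --recursive gs://my-example-bucket
```

The older gsutil tool offers an equivalent: gsutil rm -r gs://my-example-bucket.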

Compute

The provision of compute services is an essential component of any cloud services offering. Google Compute Services is one of the most important product families Google provides to companies that want to host their applications on the cloud. The umbrella term “computing” covers a wide variety of specialized services, configuration choices, and products that are all available for your consumption.

Users have access to the computing and hosting facilities provided by Google Cloud, where they may choose from the following available options:

  • Compute Engine: Google Compute Engine is an IAAS platform through which users may construct and utilize virtual machines in the cloud as server resources. Users do not need to acquire or manage physical server hardware. Virtual machines are servers hosted in the cloud and operated in Google’s data centers. They provide configurable choices for the central processing unit (CPU), graphics processing unit (GPU), memory, and storage, in addition to several options for operating systems. Both a command line interface (CLI) and a web console are available for accessing virtual machines. When a Compute Engine instance is decommissioned, all data on it is erased; the persistence of data may therefore be ensured by using persistent disks backed by either traditional or solid-state drives.

The Hypervisor is what runs Virtual Machines. The hypervisor is a piece of software that is installed on host operating systems and is responsible for the creation and operation of many virtual machines. The hypervisor on the host computer makes it possible to run many virtual machines by allowing them to share the host’s resources. Figure 1.29 shows the relationship between VMs, hypervisor and physical infrastructure.

Figure 1.29: Compute Engine

When it comes to pricing, stability, backups, scalability, and security, Google Compute Engine is a good choice. It is a cost-efficient solution since consumers only pay for the time that they utilize the resources. It allows live migration of VMs from one host to another, which helps assure the system’s reliability. In addition, it has a reliable, built-in, and redundant backup mechanism. Reservations help guarantee that applications have the capacity they need as they expand. Compute Engine also provides additional security for the applications running on it.

Compute Engine is a good solution for migrating established systems or for fine-grained management of the operating system and other operational features.
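As a minimal sketch of how such a VM is provisioned from the CLI, the instance name, zone, machine type, and image below are illustrative placeholders:

```shell
# Create a small Debian VM (all names and the zone are placeholders).
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud

# Connect over SSH, and delete the VM when it is no longer needed.
gcloud compute ssh demo-vm --zone=us-central1-a
gcloud compute instances delete demo-vm --zone=us-central1-a
```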

  • Kubernetes Engine: Google Kubernetes Engine, commonly known as GKE, is GCP’s container-as-a-service offering. It uses the open-source Kubernetes cluster management system to manage clusters of containerized applications. Kubernetes provides the capabilities users need to communicate with container clusters. A Kubernetes system consists of a control plane and worker nodes; worker nodes are provisioned as Compute Engine instances.

Figure 1.30: Kubernetes Engine

VMs run guest operating systems on top of a hypervisor. With Kubernetes Engine, by contrast, compute resources are segregated into containers, which are managed by the container manager together with the host operating system.
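A cluster can be provisioned from the CLI as sketched below; the cluster name and zone are placeholders:

```shell
# Create a small zonal GKE cluster (name and zone are placeholders).
gcloud container clusters create demo-cluster \
    --zone=us-central1-a \
    --num-nodes=2

# Fetch credentials so that kubectl can talk to the cluster.
gcloud container clusters get-credentials demo-cluster \
    --zone=us-central1-a
```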

Working with Google Cloud Storage – Basics of Google Cloud Platform

The majority of the exercises covered in this book read their input data from Cloud Storage, so let us see how to create a Cloud Storage bucket and upload files to it.

Step 1: Open cloud storage:

Follow the steps described in the Figure 1.20 to open the cloud storage:

Figure 1.20: Cloud Storage Creation

  1. Users can type cloud storage in the search box.
  2. Alternatively, users can navigate under STORAGE to Cloud Storage | Browser.

Step 2: Bucket creation:

Users can upload files or folders into cloud storage after creating a bucket. Initiate the bucket creation process as shown in Figure 1.21:

Figure 1.21: New bucket creation

  1. Click on CREATE BUCKET.

Step 3: Provide name for the bucket:

Follow the steps shown in Figure 1.22 to add a name and labels to the bucket:

Figure 1.22: Bucket name

  1. Users need to provide a globally unique name for the bucket.
  2. Labels are optional; they provide key/value pairs to group buckets (or even services) together.
  3. Click on Continue.

Step 4: Choosing location for bucket:

Follow steps in Figure 1.23 to choose the location of the bucket:

Figure 1.23: Location for the bucket

  1. Users can select the location type. Multi-region gives options to choose multiple regions of America, Europe, or Asia. Dual-region provides options to choose two regions belonging to the same continent (America, Europe, and Asia-Pacific). Region provides options to choose one region from the drop-down list. For the majority of the exercises covered in this book, we will upload data to a bucket in a single region.
  2. Click Continue.

Step 5: Selecting storage class

Follow the steps shown in Figure 1.24 to select the class of storage:

Figure 1.24: Storage class for the bucket

  1. Choose Standard (for the exercises covered in this book we will create Standard buckets).
  2. Click on Continue.

Note: Try providing different options with location and storage class and observe the variation in the price estimates.

Step 6: Access control for buckets:

Follow the steps described in the Figure 1.25 to configure access control for the bucket:

Figure 1.25: Access control for the bucket

  1. Selecting the box prevents public access from the internet; if this option is chosen, it will not be possible to grant public access through IAM policies.
  2. Uniform access control applies the same IAM policies to all the folders and files belonging to the bucket (select Uniform).
  3. Opposite to uniform access control, Fine-grained specifies policies for individual files and folders.
  4. Click Continue.

Step 7: Data protection in bucket:

Follow the steps described in Figure 1.26 for data protection:

Figure 1.26: Data protection for bucket

  1. GCP provides additional options for data protection: object versioning helps users recover data, and a retention policy helps with compliance (data cannot be deleted for a minimum period of time once it is uploaded).
  2. All data uploaded to GCP is encrypted with a Google-managed encryption key; users can choose customer-managed encryption keys (CMEK) for more control.
  3. Click on Create.
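The whole bucket-creation flow above can also be condensed into a single Cloud Shell command; the bucket name is a placeholder and must be globally unique:

```shell
# Single-region Standard bucket with uniform access control and
# public access prevention (bucket name is a placeholder).
gcloud storage buckets create gs://my-example-bucket \
    --location=us-central1 \
    --default-storage-class=STANDARD \
    --uniform-bucket-level-access \
    --public-access-prevention
```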

Step 8: Uploading files to the bucket:

Follow the steps described in Figure 1.27 to upload files to cloud storage:

Figure 1.27: File upload to bucket

  1. The OBJECTS tab provides users with options to upload data.
  2. CREATE FOLDER creates a folder inside the bucket.
  3. UPLOAD FOLDER uploads a folder directly from the local system.
  4. UPLOAD FILES uploads individual files.
  5. All the options chosen during bucket creation are listed under CONFIGURATION.
  6. PERMISSIONS provides options to enable public access prevention or to switch between uniform and fine-grained access. It also provides options to add users who can access the bucket.
  7. The PROTECTION tab provides options to enable or disable object versioning and the retention policy.
  8. LIFECYCLE rules allow you to perform operations on the objects in a bucket when specific criteria are satisfied.
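Uploads can likewise be scripted from Cloud Shell; the file, folder, and bucket names below are placeholders:

```shell
# Upload a single file, then a whole folder, to the bucket.
gcloud storage cp data.csv gs://my-example-bucket/
gcloud storage cp --recursive ./local-folder gs://my-example-bucket/
```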

Command Line Interface – Basics of Google Cloud Platform-2

Network file storage, as the name suggests, provides disk storage over the network; this is its primary distinction from a Persistent Disk. It makes it possible to build systems in which many parallel services read and write files from the same disk storage mounted across the network.

The following are some examples of uses for Filestore:

  • The majority of on-premises applications need a file system interface; Filestore makes it easy to migrate these kinds of enterprise applications to the cloud.
  • It is used in the rendering of media in order to decrease latency.
  • Cloud Storage: Google Cloud Storage is the object storage service provided by Google Cloud. It offers a number of very useful built-in features, such as object versioning and fine-grained permissions (per object or per bucket), which can simplify development and cut down on operational overhead. A variety of other services are built on top of the Google Cloud Storage platform.

Having this kind of storage is not at all usual in ordinary on-premises systems, which often have more restricted capacity and fast, exclusive connections. Object storage, on the other hand, has a very user-friendly interface. In layman’s terms, its value proposition is that you can get and put whatever file you want using a REST API, it can grow practically without limit, and individual objects can reach the terabyte scale. Buckets are the namespaces used in Cloud Storage to organize the items stored there. While a bucket can hold many items, each individual item belongs to exactly one bucket.

The inexpensive cost of this storage type (cents per GB), along with its serverless approach and its ease of use, has contributed to its widespread adoption in cloud-native system architectures. The cloud service provider is then responsible for handling the laborious tasks of data replication, availability, integrity checks, capacity planning, and so on. APIs make it possible for applications to both save and retrieve items.

Based on factors like cost, availability, and frequency of access, cloud storage has four different storage classes. They are Standard, Nearline, Coldline, and Archive as shown in Figure 1.19:

Figure 1.19: GCP Storage

  • Standard class: This class of storage allows for high frequency access and is the type of storage that is most often used by software developers.
  • Nearline storage class: This class is used for data that is not accessed very regularly, generally not more than once a month.
  • Coldline storage class: This class is used for records that are normally accessed not more often than once every three months.
  • Archive storage class: This class is used for data that is accessed with the lowest frequency and is often used for the long-term preservation of data.
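A bucket's default storage class is chosen at creation time; as an illustrative sketch with placeholder names:

```shell
# Nearline and Archive buckets (names and location are placeholders).
gcloud storage buckets create gs://my-nearline-bucket \
    --location=us-central1 \
    --default-storage-class=NEARLINE

gcloud storage buckets create gs://my-archive-bucket \
    --location=us-central1 \
    --default-storage-class=ARCHIVE
```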

Command Line Interface – Basics of Google Cloud Platform-1

You may control your development process and your GCP resources using the gcloud command-line tool provided by the Google Cloud SDK. To get command-line access to Google Cloud’s computational resources, you may use Cloud Shell. Cloud Shell is a Debian-based virtual machine with a 5-GB home directory from which you can easily manage your GCP projects and resources. Cloud Shell comes pre-installed with the gcloud command-line tool and other necessary tools, allowing you to get up and running quickly. To use Cloud Shell, follow the steps below:

Activate Cloud shell as shown in Figure 1.17:

Figure 1.17: GCP CLI

Click on Activate Cloud Shell. It will take a few minutes to open the Cloud Shell command window.

Once Cloud Shell is activated, a black terminal panel appears at the bottom of the screen where you can type commands, as shown in Figure 1.18:

Figure 1.18: GCP CLI

  1. Type commands:

gcloud projects create project_id – for project creation

gcloud projects delete project_id – for project deletion
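A few more everyday gcloud commands, sketched with a placeholder project ID:

```shell
# List the projects your account can see.
gcloud projects list

# Set the active project for subsequent commands.
gcloud config set project my-project-id

# Show the currently active configuration.
gcloud config list
```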

APIs

It is common for apps to communicate with Google Cloud via a Software Development Kit (SDK). Go, Python, and Node.js are a few of the many programming languages for which Google Cloud SDKs are available.
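As a small sketch of SDK usage from Cloud Shell (which is already authenticated), the commands below install the Python client library for Cloud Storage and list the buckets in the active project; they assume the google-cloud-storage package:

```shell
# Install the Cloud Storage client library for Python.
pip install google-cloud-storage

# List the buckets in the active project via the SDK.
python -c "from google.cloud import storage; \
[print(b.name) for b in storage.Client().list_buckets()]"
```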

Note: We will use this method while getting predictions from the deployed models.

Storage

Along with compute and network, storage is considered one of the fundamental building blocks. Applications benefit from the increased levels of persistence and durability that storage services provide. These services sit deep inside the platform and serve as the foundation for the vast majority of Google Cloud’s services, as well as the systems that you construct on top of it.

Three types of storage options are provided by Google Cloud:

  • Persistent Disks
  • Filestore
  • Cloud Storage (GCS)

They are explained as follows:

  • Persistent Disks: A Google Cloud Persistent Disk provides block storage and is used by virtual machines hosted on Google Cloud (Google Compute Engine). Think of Persistent Disks as simple USB sticks: they may be attached to virtual machines or detached from them. They allow you to build data persistence for your services whether virtual machines are started, paused, or terminated. Persistent Disks power not just the virtual machines hosted on Google Compute Engine, but also the Google Kubernetes Engine service.

A Google Cloud Persistent Disk operates similarly to a virtual disc on your local PC. Persistent Disk can either be HDD or SSD, with the latter offering superior I/O performance. In addition, there is the choice of where they are placed as well as the sort of availability that is required, which may be either regional, zonal, or local.

Other capabilities of Google Cloud Persistent Disks that are lesser known but prove useful include automatic encryption, the ability to resize the disk while it is in use, and a snapshot capability that can be used both for backing up data and for creating images for virtual machines. Read and write access can be configured for multiple VMs: one VM can have write access while all other VMs have read access to the same Persistent Disk.
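As a sketch, a disk is created and attached from the CLI; the disk name, VM name, and zone are placeholders:

```shell
# Create a 100 GB SSD Persistent Disk and attach it to an existing VM.
gcloud compute disks create demo-disk \
    --size=100GB \
    --type=pd-ssd \
    --zone=us-central1-a

gcloud compute instances attach-disk demo-vm \
    --disk=demo-disk \
    --zone=us-central1-a
```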

  • Filestore: Filestore is the network file storage service provided by Google Cloud. The idea of network file storage has been around for quite some time, and similar to block storage, it can also be found in the on-premises data centers that most businesses use. You should be comfortable with the notion if you are used to dealing with network-attached storage (NAS). In response to the dearth of NAS-compatible services, Google expanded its offerings to include a cloud file storage service.
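A Filestore instance is provisioned with a tier, a file share, and a VPC network; in the sketch below the instance name, share name, and zone are placeholders:

```shell
# Create a basic HDD Filestore instance with a 1 TB share.
gcloud filestore instances create demo-filestore \
    --zone=us-central1-a \
    --tier=BASIC_HDD \
    --file-share=name=share1,capacity=1TB \
    --network=name=default
```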

Cloud Service Model – Basics of Google Cloud Platform

The cloud platform offers a variety of services, all of which may be roughly placed into one of three distinct categories:

  • Infrastructure as a service (IAAS)
  • Platform as a service (PAAS)
  • Software as a service (SAAS)

The difference between cloud service models is illustrated in Figure 1.9:

Figure 1.9: Cloud Service Model

Let us imagine we are working on an application and hosting it at the same time on a server that is located on our premises. In this particular circumstance, it is our obligation to own and maintain the appropriate infrastructure, as well as the appropriate platforms, and of course, our application.

  • Infrastructure as a service: In the case of IAAS, it is the cloud provider’s obligation to supply the necessary infrastructure, which may include virtual machines, networking, and storage devices. We are still responsible for ensuring that we have the appropriate platform for development and deployment. We make use of the underlying infrastructure but have no control over it. Compute Engine and Kubernetes Engine are examples of infrastructure as a service offerings from Google.
  • Platform as a service: In the case of PAAS, the cloud service provider is responsible for supplying the appropriate platform for development and deployment, such as an operating system and tools for the programming language environment, in addition to the underlying infrastructure. One example of a PAAS platform is Google App Engine.
  • Software as a service: In the case of SAAS, a cloud service provider rents out to customers applications that run on the provider’s infrastructure. The maintenance of the software applications, in addition to the platform and the underlying infrastructure, also falls within the purview of the cloud service provider. These applications are accessible to us on whatever device we choose by way of web browsers, mobile apps, and so on. Email (Gmail) and cloud storage (Google Drive) are two excellent instances of SAAS.
  • Data as a service (DAAS): DAAS is a service that is now starting to gain broad use, in contrast to the three service models that were mentioned before, which have been popular for more than a decade. This is partly owing to the fact that general cloud computing services were not originally built for the management of enormous data workloads; rather, they catered to the hosting of applications and basic data storage needs (as opposed to data integration, analytics, and processing).

SaaS eliminates the need to install and administer software on a local computer. Similarly, the Data-as-a-Service methodology centers on the on-demand delivery of data from a number of sources via application programming interfaces (APIs). It is intended to make data access more straightforward and to provide curated datasets or streams of data that can be consumed in a variety of formats, often unified through data virtualization. In fact, a DaaS architecture may consist of a wide variety of data management technologies such as data virtualization, data services, and self-service analytics.

In its most basic form, DaaS enables organizations to tap the ever-growing quantity and sophistication of the data sources at their disposal in order to give consumers the most relevant insights. The democratization of data is absolutely necessary for every company that wants to get actual value from its data. It offers significant potential to monetize an organization’s data and acquire a competitive edge by adopting a more data-centric approach to operations and procedures.

Footprint of Google Cloud Platform – Basics of Google Cloud Platform

Independent geographical areas are known as regions, and zones make up regions. Zones and regions are logical abstractions of the underlying physical resources offered in one or more datacenters physically located throughout the world. Within a region, Google Cloud resources are deployed to specific locations referred to as zones. It is important to see a zone as a single failure domain within a region. Figure 1.8 shows the footprint of GCP:

Figure 1.8: Footprint of GCP

At the time this book was written, there were about 34 regions, 103 zones, and 147 network edge locations across 200+ countries. GCP is constantly increasing its presence across the globe; please check the link mentioned below to get the latest numbers.

Image source: https://cloud.google.com/about/locations

The services and resources offered by Google Cloud may either be handled on a zonal or regional level, or they can be managed centrally by Google across various regions:

  • Zonal resources: The resources in a zone only work in that zone. When a zone goes down, some or all of the resources in that zone can be affected.
  • Regional resources: They are spread across multiple zones in a region to make sure they are always available.
  • Multiregional resources: Google manages a number of Google Cloud services to be redundant and spread both inside and between regions. These services improve resource efficiency, performance, and availability.
  • Global resources: Any resource within the same project has access to global resources from any zone. There is no requirement to specify a scope when creating a global resource.

Network edge locations are helpful for hosting static content that is popular with the user base of the hosting service. The content is temporarily cached on these edge nodes, which enables users to get the information from a place much closer to where they are located. Users will have a more positive experience as a result.

There are a few benefits associated with GCP’s regions and zones. The notion of regions and zones is helpful when it comes to ensuring high availability, high redundancy, and high dependability. It also helps organizations obey the laws and regulations established by governments, since data rules can vary greatly from one nation to the next.

Introduction to Google Cloud Platform – Basics of Google Cloud Platform

Google Cloud Platform is one of the hyper scale infrastructure providers in the industry. It is a collection of cloud computing services that are offered by Google. These services operate on the same infrastructure that Google employs for its end-user products, including YouTube, Gmail, and a number of other offerings. The Google Cloud Platform provides a wide range of services, such as computing, storage, and networking, among other things.

Google Cloud Platform was first launched in 2008, and as of now, it is the third most widely used cloud platform. Additionally, there is a growing need for platforms that are hosted on the cloud.

The Google cloud gives us a service-centric perspective of all our environments in addition to providing a standard platform and data analysis for deployments, regardless of where they are physically located. Using the capabilities of sophisticated analytics and machine learning offered by Google Cloud, we can extract the most useful insights from our data. Users will be able to automate procedures, generate predictions, and simplify administration and operations with the support of Google’s serverless data analytics and machine learning platform. The services provided by Google Cloud encrypt data while it is stored, while it is being sent, and while it is being used. Advanced security mechanisms protect the privacy of data.

Account creation on Google Cloud Platform

Users can create a free GCP account from the link https://cloud.google.com/free.

A free account provides $300 of credit for a period of 90 days.

Steps for creating a free account are as follows:

  1. Open https://cloud.google.com/free.
  2. Click on Get started for free.

The opening screen looks like Figure 1.2:

Figure 1.2: GCP account creation

  1. Login with your Gmail credentials; create an account if you do not have one. This is illustrated in Figure 1.3:

Figure 1.3: GCP account creation enter valid mail address

  1. Selection of COUNTRY and needs:

Figure 1.4: GCP account creation country selection

  1. Select the Country and project. Check the Terms of service and click on CONTINUE.
  2. Provide phone number for the identity verification as shown in Figure 1.5:

Figure 1.5: GCP account creation enter phone number

  1. Free accounts require a credit card; verification costs Rs 2, and an address must be provided. Click on START MY FREE TRIAL on this page:

Figure 1.6: GCP account creation enter valid credit card details

  1. Users will land on this page once the free trial has started. The welcome page can be seen in Figure 1.7:

Figure 1.7: Landing page of GCP

Importance of Cloud for data scientist – Basics of Google Cloud Platform

Since the beginning of the previous decade, the expansion of data has followed an exponential pattern, and this trend is expected to continue. The safe and secure storage of data should be one of the top priorities of every company. The cloud is usually the top option when it comes to storing and processing the enormous quantity of data since it has all of the advantages that were discussed above. As a consequence of this, a data scientist in today’s world has to have experience with cloud computing in addition to their expertise in statistics, machine learning algorithms, and other areas.

However, due to the low processing capacity of a local machine’s CPU, data scientists are often unable to carry out these responsibilities in a timely way, assuming they are capable of doing so at all. In addition, the machine’s memory is often incapable of holding massive datasets. Hardware capacity determines how quickly an assignment is performed and how well it is accomplished. Thanks to the cloud, data scientists are now able to investigate more extensive collections of data without being constrained by the capabilities of their local workstations. Utilizing the cloud for computing and data storage can also decrease the cost of infrastructure, since it eliminates the requirement for physical servers. In addition to data storage services, many cloud platforms, including Google Cloud Platform, also offer services catering to data ingestion, data processing, analytics, AI, and data visualization.

Types of Cloud

There are three types of cloud based on different capabilities:

  • Public Cloud
  • Private Cloud
  • Hybrid Cloud

Public Cloud: The public cloud is a massive collection of readily available computing resources, including networking, memory, processing elements, and storage. Users can rent these resources, which are housed in the public cloud vendors’ globally dispersed and fully managed datacenters, to create their IT architecture. In this form of cloud, users access their resources using a web browser. Google Cloud Platform is an example of a public cloud.

A major advantage of the public cloud is that the underlying hardware and logic are hosted, owned, and maintained by each vendor. Customers are not responsible for purchasing or maintaining the physical components that comprise their public cloud IT solutions. In addition, Service Level Agreements (SLAs) bind each provider to a monthly uptime percentage and security guarantee in accordance with regulations.

Private Cloud: Unlike public clouds, private clouds are owned and operated by a single organization. They are usually housed in the company’s datacenter and run on the organization’s own equipment. An organization may, however, use a third-party supplier to host its private cloud. In that case the private cloud shares certain characteristics with the public cloud, even though the resources are housed in a remotely managed datacenter. Such suppliers may provide certain administrative services, but they do not offer the full range of public cloud services.

If the private cloud is housed in its own datacenter, the organization has complete control over the whole system. A self-hosted private cloud may help comply with some of the stricter security and compliance regulations.

Hybrid Cloud: As the name indicates, this kind of cloud computing is a blend and integration of public and private clouds. In this manner, it provides the advantages associated with both kinds of cloud. It enables a larger degree of flexibility in the transmission of data and expands the alternatives available to a company for its adoption. This guarantees a high level of control as well as an easy transition, while providing everything at more economical rates.

Advantages of Cloud – Basics of Google Cloud Platform

There are various advantages of cloud as shown in Figure 1.1, and mentioned as follows:

Figure 1.1: Advantages of Cloud platform

  • Cost efficiency: In terms of IT infrastructure management, cloud computing is undoubtedly the most cost-effective option. A variety of pay-as-you-go and other scalable choices make it incredibly affordable for organizations of any size to transition from on-premises hardware to the cloud. Cloud resources take the place of costly server equipment and PCs that require long hours of setup and maintenance. The cloud also helps reduce spending on compute, storage, network, operational, and upgrade expenses.
  • Scalability and elasticity: Overall, cloud hosting is more flexible than hosting on a local machine. You do not have to undertake a costly (and time-consuming) upgrade to your IT infrastructure if you need more bandwidth. This increased degree of latitude and adaptability may have a major impact on productivity.

Elasticity is only employed for a short amount of time to deal with rapid shifts in workload. This is a short-term strategy used to meet spikes in demand, whether they are unanticipated or seasonal. The static increase in workload is met through scalability. To cope with an anticipated rise in demand, a long-term approach to scalability is used.

  • Security: Cloud platform provides a multitude of cutting-edge security measures, which ensure the safe storage and management of any data. Granular permissions and access control using federated roles are two examples of features that may help limit access to sensitive data to just those workers who have a legitimate need for it. This helps reduce the attack surface that is available to hostile actors. Authentication, access control, and encryption are some of the fundamental safeguards that providers of cloud storage put in place to secure their platforms and the data that is processed on those platforms. After that, users can implement additional security measures of their own, in addition to these precautions, to further strengthen cloud data protection and restrict access to sensitive information stored in the cloud.
  • Availability: The vast majority of cloud service providers are quite dependable in terms of the provision of their services; in fact, the vast majority of them maintain an uptime of 99.9 percent. Moving to the cloud should be done with the intention of achieving high availability. The goal is to make your company’s goods, services, and tools accessible to your clients and workers at any time of day and from any location in the world using any device that can connect to the internet.
  • Reduced downtime: Cloud based solutions provide the ability to operate critical systems and data directly from the cloud or to restore them to any location. During a catastrophic event involving information technology, they make it easier for you to get these systems back online, reducing the amount of manual work required by conventional recovery techniques.
  • Increased Collaboration: Developers, QA, operations, security, and product architects are all exposed to the same infrastructure and may work concurrently without tripping on one another’s toes in cloud settings. To minimize disputes and misunderstanding, cloud roles and permissions provide more visibility and monitoring of who performed what and when. Different cloud environments, such as staging, QA, demo, and pre-production, may be created for specialized reasons. The cloud makes transparent collaboration simpler and promotes it.
  • Insight: The integrated cloud analytics offered by cloud platforms provide a bird’s-eye perspective of your data. When your data is kept in the cloud, it is much simpler to put monitoring systems in place and create individualized reports for information analysis throughout the whole organization. You will be able to improve efficiency and construct action plans based on these insights, which will allow your organization to fulfil its objectives.
  • Control over data: Cloud provides you total visibility and control over your data. You have complete control over which users are granted access to which levels of specified data. This not only gives you control, but also helps simplify work by ensuring that staff members are aware of the tasks they have been allocated. Additionally, it will make working together much simpler. Because several users may make edits to the same copy of the text at the same time, there is no need that multiple copies of the document be distributed to the public.
  • Automatic software updates: There is nothing more cumbersome than being required to wait for the installation of system upgrades, especially for those who already have a lot on their plates. Applications that are hosted in the cloud instantly refresh and update themselves, eliminating the need for an IT personnel to carry out manual updates for the whole organization. This saves critical time and money that would have been spent on consulting from other sources.
  • Ease of managing: The use of cloud can streamline and improve IT maintenance and management capabilities through the use of agreements supported by SLA, centralized resource administration, and managed infrastructure. Users can take advantage of a simple user interface without having to worry about installing anything. In addition, users are provided with management, maintenance, and delivery of the IT services.

Deploying with Terraform – Deploying Skills Mapper

To deploy the environment, you need to run the Terraform commands in the terraform directory.

First, initialize Terraform to download the needed plugins with:

terraform init

Then check that you have set the required variables in your terraform.tfvars with:

terraform validate

All being well, you should see Success! The configuration is valid.
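For reference, a terraform.tfvars file is simply a set of name/value assignments matching the variables the configuration declares (conventionally in variables.tf). The sketch below is purely hypothetical; the variable names and values are illustrative assumptions, not the actual definitions used by Skills Mapper:

```hcl
# Hypothetical terraform.tfvars sketch; every name and value here is an
# illustrative assumption, not taken from the Skills Mapper configuration.
application_project_id = "skillsmapper-application"
management_project_id  = "skillsmapper-management"
region                 = "us-central1"
domain                 = "skillsmapper.org"
```

Variable names must match those declared in the configuration exactly; terraform validate will flag references to variables that have no declaration.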

Although Terraform can enable Google services (and these scripts do), this can be unreliable because services take time to enable. Instead, use the enable_services.sh script to enable them with gcloud:

./enable_services.sh
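The contents of enable_services.sh are not reproduced here, but such a script conventionally loops over the required APIs with gcloud services enable. The sketch below is a hypothetical stand-in: the service list is an assumption about what a project like Skills Mapper needs, not the script’s actual contents, and it falls back to a dry-run when gcloud is not installed:

```shell
#!/bin/sh
# Hypothetical sketch of an enable_services.sh-style script.
# The service list is an assumption; substitute the APIs your project uses.
SERVICES="run.googleapis.com sqladmin.googleapis.com apigateway.googleapis.com"

for service in $SERVICES; do
  if command -v gcloud >/dev/null 2>&1; then
    # Enabling an already-enabled service is a harmless no-op.
    gcloud services enable "$service" || true
  else
    # Dry-run fallback so the script can be inspected anywhere.
    echo "would run: gcloud services enable $service"
  fi
done
```

Enabling a service is idempotent, so re-running the script is safe.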

Next, preview the changes with:

terraform plan

Terraform will then show how many items would be added, changed, or destroyed. If you have not run Terraform on the projects before, you should see a lot of items to be added.

When you are ready, run the apply command:

terraform apply

Again, Terraform will devise a plan for meeting the desired state. This time, it will prompt you to approve applying the plan. Enter yes and watch while Terraform creates everything from this book for you. This may take around 30 minutes, most of which is spent creating the Cloud SQL database used by the fact service.

When completed, you will see several outputs from Terraform that look like this:

application-project = "skillsmapper-application"
git-commit = "3ecff393be00e331bb4412f4dc24a3caab2e0ab8"
management-project = "skillsmapper-management"
public-domain = "skillsmapper.org"
public-ip = "34.36.189.201"
tfstate_bucket_name = "d87cf08d1d01901c-bucket-tfstate"

The public-ip is the external IP of the global load balancer. Use this to create an A record in your DNS provider for the domain you provided.
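Rather than copying the IP by hand, you can read it back from Terraform’s outputs. A minimal sketch, assuming you are still in the terraform directory; it prints a placeholder instead if terraform is not on your PATH:

```shell
#!/bin/sh
# Read the load balancer IP from Terraform's outputs; fall back to a
# placeholder so the script still runs where terraform is unavailable.
PUBLIC_IP=$(terraform output -raw public-ip 2>/dev/null || echo "<public-ip>")

cat <<EOF
Create this record at your DNS provider:
  Type: A
  Name: <your public-domain value>
  Data: ${PUBLIC_IP}
EOF
```

The -raw flag strips the quotes Terraform normally prints around string outputs, which makes the value safe to use in scripts.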

Reapplying Terraform

If you make a change to the Terraform configuration, there are a few things you need to do before deploying Terraform again.

First, make sure you are using the application project:

gcloud config set project $APPLICATION_PROJECT_ID

Terraform is unable to change the API Gateway configuration, so you will need to delete it and allow Terraform to recreate it.

Also, if Cloud Run has deployed new versions of the services, you will need to remove them and allow Terraform to recreate them, too, as Terraform will have the wrong version.
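Both deletions can be done with gcloud. The commands below are a hedged sketch: CONFIG_ID, API_ID, SERVICE, and REGION are placeholders to be filled in from the corresponding list commands, not names from this book:

```shell
# Placeholders (CONFIG_ID, API_ID, SERVICE, REGION) must be replaced with
# values taken from the output of the list commands.

# Delete the API Gateway config so Terraform can recreate it:
gcloud api-gateway api-configs list
gcloud api-gateway api-configs delete CONFIG_ID --api=API_ID

# Delete Cloud Run services that have drifted from the Terraform state:
gcloud run services list
gcloud run services delete SERVICE --region=REGION
```

Each delete command prompts for confirmation before removing anything.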

When you apply the configuration again, you will notice only a few resources added, changed, or destroyed, as Terraform applies only the differences to what is already there.

Deleting Everything

When you have finished with Skills Mapper, you can also use Terraform to clean up completely using:

terraform destroy

This will remove all the infrastructure that Terraform has created.

At this point, you may also like to unlink the billing accounts from the projects so they can no longer be billed:

gcloud beta billing projects unlink $APPLICATION_PROJECT_ID
gcloud beta billing projects unlink $MANAGETMENT_PROJECT_ID