Summary of Compute services of GCP – Basics of Google Cloud Platform

The summary of compute services of GCP can be seen in Figure 1.32:

Figure 1.32: Summary of Compute Services

Creation of Compute Engine (VM instances)

Compute Engine instances may run Google’s public Linux and Windows Server images as well as private custom images. Docker containers may also be deployed on instances running the Container-Optimized OS public image. Users need to specify the zone, OS, and machine type (number of virtual CPUs and amount of RAM) during creation. Each Compute Engine instance has a small persistent boot disk containing the OS, and more storage can be added. The default time zone of a VM instance is Coordinated Universal Time (UTC). Follow the steps below to create a virtual machine instance:

Step 1: Open Compute Engine:

Follow the steps described in Figure 1.33 to open Compute Engine on the platform:

Figure 1.33: VM Creation

  1. Users can type Compute Engine in the search box.
  2. Alternatively, users can navigate to Compute | Compute Engine | VM instances.

Step 2: Enable the API for Compute Engine:

Users will be prompted to enable APIs when the resources are accessed for the first time as shown in Figure 1.34:

Figure 1.34: VM Creation API enablement

  1. If prompted for API enablement, then enable the API.
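The API can also be enabled from the command line; a minimal sketch using the gcloud CLI in Cloud Shell (assumes a project is already selected):

gcloud services enable compute.googleapis.com   # enables the Compute Engine API for the active project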

Step 3: VM instance selection:

Follow the steps described in Figure 1.35 to select the VM:

Figure 1.35: VM Creation

  1. Select VM instances.
  2. Click on CREATE INSTANCE to begin the creation process.
  3. GCP also provides an IMPORT VM option for migrating existing VMs to Compute Engine.

Step 4: VM instance creation:

Follow the steps described in Figure 1.36 to select location and machine type for the VM instance:

Figure 1.36: Location, machine selection for VM instance

  1. GCP provides multiple options for VM creation; the first is creating a VM from scratch. We will create a New VM instance.
  2. VMs can be created from a template. Users can create a template once and reuse it in the future.
  3. VMs can also be created from machine images; a machine image is a resource that stores the configuration, metadata, and other information needed to create a VM.
  4. GCP also provides the option to deploy a ready-to-go solution onto the VM instance.
  5. The user must provide an instance name.
  6. Labels are optional; they provide key/value pairs to group VMs together.
  7. Select the region and zone.
  8. GCP provides a wide range of machine configurations: GENERAL-PURPOSE, COMPUTE-OPTIMIZED, MEMORY-OPTIMIZED, and GPU-based.
  9. Under each machine family, GCP provides a few machine series and machine types (to balance between CPU and memory).
  10. Users can choose the CPU platform and GPU if they want to select the vCPU-to-core ratio and the visible core count.

Note: Try selecting different machine configurations and observe the variation in the price estimates.
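The available machine configurations can also be explored from the command line; a minimal sketch using the gcloud CLI (the zone is a placeholder assumption):

gcloud compute machine-types list --zones=us-central1-a   # lists the machine types available in the given zone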

Step 5: VM instance disk and container selection:

Follow the steps described in Figure 1.37 to select boot disks for the instance:

Figure 1.37: Boot disk selection for VM instance

  1. Enable display device to allow the use of screen capture and recording tools.
  2. Confidential VM service adds protection to your data in use by keeping the memory of this VM encrypted.
  3. The Deploy Container option is helpful when a container needs to be deployed to the VM instance using a Container-Optimized OS image.
  4. Users can change the operating system, and the size and type of the disk (HDD/SSD), by clicking on CHANGE.
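The public boot disk images on offer can likewise be listed from the command line; a minimal sketch using the gcloud CLI:

gcloud compute images list   # lists the public OS images and the image projects they belong to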

Step 6: VM instance creation access settings:

Follow the steps described in Figure 1.38 to configure access:

Figure 1.38: Access control for VM instance

  1. Choose the default service account associated with the VM instance, or create a service account and use it for the Compute Engine instance.
  2. Users need to select how the VM is to be accessed: allow default access, allow full access to all Cloud APIs, or set access for each API individually.
  3. By default, incoming HTTP/HTTPS traffic from the internet is blocked; enable it if the VM needs to serve such traffic.
  4. Additional options, such as disk protection, reservations, network tags, changing the host name, and deleting or retaining the boot disk when the instance is deleted, are provided under Networking, Disks, Security, and Management.
  5. Click on Create.
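The whole flow above can also be scripted; a minimal sketch using the gcloud CLI, where the instance name, zone, machine type, and image are placeholder assumptions:

gcloud compute instances create my-instance \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud   # creates a VM with a Debian boot disk; defaults apply for access and networking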

Compute services: Kubernetes Engine, App Engine, and Cloud Functions – Basics of Google Cloud Platform

Unique features include auto-upgrades, node auto-repair, and autoscaling (Kubernetes Engine allots additional resources when strain is placed on the applications).

  • Advanced cluster administration features, such as load balancing, node pools, and logging and monitoring, are also made available to users:
    • Load balancing and traffic distribution across Compute Engine instances.
    • Within a cluster, node pools identify subsets of nodes in order to provide greater flexibility.
    • Logging and monitoring integrate with Cloud Monitoring so that you can see inside your cluster.
  • Google App Engine: GCP’s platform as a service for the development and deployment of scalable apps is known as Google App Engine. It is a form of serverless computing that gives users the ability to execute their code without setting up virtual machines or Kubernetes clusters.

It is compatible with a variety of programming languages, including Java, Python, Node.js, and Go; users may develop their apps in any of the supported languages. Google App Engine is equipped with a number of APIs and services that enable developers to create powerful, feature-rich applications. These include:

  • Access to application logs.
  • Blobstore, which serves large data objects.

Other important characteristics include a pay-as-you-go strategy, which means that you only pay for the resources you use. When there is a spike in the number of users of an application, App Engine automatically increases the available resources, and vice versa.

Effective diagnostic services include Cloud Monitoring and Cloud Logging, which assist in scanning the app to locate faults in the application, and error reporting that helps developers fix any faults they find immediately.

Traffic splitting is a feature that lets App Engine automatically direct incoming traffic across different versions of an app, as a component of A/B testing. Users can plan subsequent rollouts depending on which version of the software performs most effectively.

There are two distinct kinds of app engines:

  • Standard App Engine: applications are completely separate from the operating system of the server and from any other applications that may be executing on that server at the same time. No operating system packages or other compiled software need to be installed along with the application code.
  • The second kind is called Flexible App Engine. Here, users’ Docker containers are executed inside the App Engine environment, and it is possible to install libraries or other third-party software needed to execute the application code.
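For illustration, deploying an app to App Engine is a short CLI workflow; a minimal sketch, assuming an app.yaml with a supported runtime (for example, runtime: python312) exists in the current directory and the region is a placeholder:

gcloud app create --region=us-central   # one-time creation of the App Engine application for the project
gcloud app deploy                       # uploads the code in the current directory and deploys a new version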

  • Google Cloud Functions: Google Cloud Functions is a lightweight serverless computing solution used for event-driven processing; it is the function as a service (FAAS) product of GCP. With Cloud Functions, the time-consuming tasks of maintaining servers, installing software, upgrading frameworks, and patching operating systems are eliminated.

The user only has to provide code for Cloud Functions to start running in response to an event, since GCP completely manages both the software and the infrastructure. Cloud events are occurrences that take place inside a cloud computing environment, such as changes to data stored in a database, new files added to a storage system, or the creation of a new virtual machine. A trigger is a declaration of interest in a certain event or set of events; binding a function to a trigger allows users to capture and act on those events. Event data is the data passed to the Cloud Function when the event trigger results in function execution:

Figure 1.31: Cloud Function
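For illustration, a function can be deployed and bound to an HTTP trigger with a single command; a minimal sketch, where the function name, entry point, runtime, and region are placeholder assumptions and the source code is expected in the current directory:

gcloud functions deploy hello-http \
    --runtime=python310 \
    --trigger-http \
    --entry-point=hello \
    --region=us-central1   # deploys the function and prints the HTTPS URL that triggers it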

Deletion of bucket – Basics of Google Cloud Platform

Follow the steps described in Figure 1.28 to delete the bucket:

Figure 1.28: Bucket deletion

  1. Select the bucket which needs to be deleted.
  2. Click on DELETE; you will be prompted with a pop-up where you need to type delete to confirm.
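The same deletion can be performed from the command line; a minimal sketch using gsutil, where the bucket name is a placeholder:

gsutil rm -r gs://my-bucket   # recursively deletes all objects in the bucket and then the bucket itself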

Compute

The provision of compute services is an essential component of any cloud services offering. One of the most important products Google provides to companies that want to host their applications on the cloud is Google Compute Services. The umbrella term computing covers a wide variety of specialized services, configuration choices, and products that are all available for your use.

Users have access to the computing and hosting facilities provided by Google Cloud, where they may choose from the following available options:

  • Compute Engine: Users may construct and utilize virtual machines in the cloud as a server resource via Google Compute Engine, which is an IAAS platform. Users do not need to acquire or manage physical server hardware. Virtual machines are servers hosted in the cloud and operated in Google’s data centers. They provide configurable choices for the central processing unit (CPU), graphics processing unit (GPU), memory, and storage, in addition to several options for operating systems. Both a command-line interface (CLI) and a web console are available for accessing virtual machines. When a Compute Engine instance is decommissioned, all of its local data is erased; the persistence of data may therefore be ensured by using persistent disks backed by either traditional or solid-state drives.

The Hypervisor is what runs Virtual Machines. The hypervisor is a piece of software that is installed on host operating systems and is responsible for the creation and operation of many virtual machines. The hypervisor on the host computer makes it possible to run many virtual machines by allowing them to share the host’s resources. Figure 1.29 shows the relationship between VMs, hypervisor and physical infrastructure.

Figure 1.29: Compute Engine

When it comes to pricing, stability, backups, scalability, and security, Google Compute Engine is a good choice. It is cost-efficient, since consumers only pay for the time that they utilize the resources. It allows live migration of VMs from one host to another, which helps assure the system’s reliability. In addition, it has a reliable, built-in, redundant backup mechanism. Reservations help guarantee that applications have the capacity they need as they expand. Compute Engine also provides additional security for the applications running on it.

Compute Engine is a good solution for migrating established systems or for fine-grained management of the operating system and other operational features.

  • Kubernetes Engine: Google Kubernetes Engine, also known as GKE, is the container-as-a-service offering of GCP. When deploying containerized apps, clusters are managed via the open-source Kubernetes cluster-management system. Kubernetes provides the capabilities that allow users to interact with container clusters. The Kubernetes system comprises a control plane and worker nodes; worker nodes are provisioned as Compute Engine instances.

Figure 1.30: Kubernetes Engine

VMs run guest operating systems on a hypervisor; with Kubernetes Engine, by contrast, compute resources are segregated into containers, and both the container manager and the host operating system are responsible for their management.
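For illustration, a cluster can be created and wired up from the command line; a minimal sketch using the gcloud CLI, where the cluster name, zone, and node count are placeholder assumptions:

gcloud container clusters create my-cluster --zone=us-central1-a --num-nodes=3   # provisions worker nodes as Compute Engine instances
gcloud container clusters get-credentials my-cluster --zone=us-central1-a        # configures kubectl to talk to the new cluster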

Working with Google Cloud Storage – Basics of Google Cloud Platform

The majority of the exercises covered in this book take their input data from Cloud Storage, so let us see how to create a Cloud Storage bucket and upload files to it.

Step 1: Open Cloud Storage:

Follow the steps described in Figure 1.20 to open Cloud Storage:

Figure 1.20: Cloud Storage Creation

  1. Users can type cloud storage in the search box.
  2. Alternatively, users can navigate under STORAGE to Cloud Storage | Browser.

Step 2: Bucket creation:

Users can upload files or folders into cloud storage after creating a bucket. Initiate the bucket creation process as shown in Figure 1.21:

Figure 1.21: New bucket creation

  1. Click on CREATE BUCKET.

Step 3: Provide name for the bucket:

Follow the steps shown in Figure 1.22 to add a name and labels to the bucket:

Figure 1.22: Bucket name

  1. Users need to provide a globally unique name for the bucket.
  2. Labels are optional; they provide key/value pairs to group buckets (or even services) together.
  3. Click on Continue.

Step 4: Choosing location for bucket:

Follow steps in Figure 1.23 to choose the location of the bucket:

Figure 1.23: Location for the bucket

  1. Users can select the location type. Multi-region gives options to choose a large geographic area (America, Europe, or Asia). Dual-region provides options to choose two regions belonging to the same continent (America, Europe, and Asia-Pacific). Region provides options to choose one region from the drop-down list. For the majority of the exercises covered in this book, we will upload the data to a bucket belonging to a single region.
  2. Click Continue.

Step 5: Selecting storage class:

Follow the steps shown in Figure 1.24 to select the class of storage:

Figure 1.24: Storage class for the bucket

  1. Choose Standard (for the exercises covered in this book, we will create Standard buckets).
  2. Click on Continue.

Note: Try providing different options with location and storage class and observe the variation in the price estimates.

Step 6: Access control for buckets:

Follow the steps described in Figure 1.25 to configure access control for the bucket:

Figure 1.25: Access control for the bucket

  1. Selecting this box prevents public access from the internet; if this option is chosen, it will not be possible to grant public access through IAM policies.
  2. Uniform access control applies the same IAM policies to all the folders and files belonging to the bucket (select Uniform).
  3. In contrast to uniform access control, Fine-grained specifies policies for individual files and folders.
  4. Click Continue.

Step 7: Data protection in bucket:

Follow the steps described in Figure 1.26 for data protection:

Figure 1.26: Data protection for bucket

  1. GCP provides additional options for data protection: object versioning helps users with data recovery, and a retention policy helps with compliance (data cannot be deleted for a minimum period of time once it is uploaded).
  2. All data uploaded to GCP is encrypted with a Google-managed encryption key; users can choose customer-managed encryption keys (CMEK) for more control.
  3. Click on Create.
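Steps 2 through 7 can be condensed into a single command; a minimal sketch using gsutil, where the bucket name and region are placeholder assumptions:

gsutil mb -l us-central1 -c standard -b on gs://my-unique-bucket-name   # single region, Standard class, uniform bucket-level access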

Step 8: Uploading files to the bucket:

Follow the steps described in Figure 1.27 to upload files to cloud storage:

Figure 1.27: File upload to bucket

  1. The OBJECTS tab provides users with options to upload data.
  2. CREATE FOLDER creates a folder inside the bucket.
  3. UPLOAD FOLDER uploads a folder directly from the local system.
  4. UPLOAD FILES uploads individual files.
  5. All the options users have chosen during bucket creation are listed under the CONFIGURATION tab.
  6. The PERMISSIONS tab provides options to enable public access prevention, or to switch between uniform and fine-grained access. It also provides options to add users for accessing the bucket.
  7. The PROTECTION tab provides options to enable or disable object versioning and the retention policy.
  8. LIFECYCLE rules allow you to perform operations on the items in a bucket when specific criteria are satisfied.
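Uploads can also be done from the command line; a minimal sketch using gsutil, where the local paths and bucket name are placeholder assumptions:

gsutil cp ./sample.csv gs://my-unique-bucket-name/   # uploads a single file
gsutil cp -r ./data gs://my-unique-bucket-name/      # uploads a folder recursively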

Storage: Filestore and Cloud Storage – Basics of Google Cloud Platform

The primary distinction of network file storage from a Persistent Disk is that, as the name suggests, it provides disk storage over the network. This makes it possible to build systems with many parallel services that read and write files from the same disk storage mounted across the network.

The following are some examples of uses for Filestore:

  • The majority of on-premises applications need a file-system interface; Filestore makes it easy to migrate these kinds of enterprise applications to the cloud.
  • It is used in the rendering of media in order to decrease latency.
  • Cloud Storage: Google Cloud Storage is the object-storage service provided by Google Cloud. It offers a number of very interesting built-in features, such as object versioning and fine-grained permissions (by object or bucket), which can simplify development and cut down on operational overhead. The Google Cloud Storage platform is used as the basis for a variety of other services.

Having this kind of storage is not at all usual in ordinary on-premises systems, which often have more restricted capacity along with fast, dedicated connections. Object storage, on the other hand, has a very user-friendly interface. In layman’s terms, its value proposition is that you can get and put whatever file you want using a REST API, that storage may extend practically forever, with each object growing up to the terabyte scale, and that any amount of data can be stored. Buckets are the namespaces used in Cloud Storage to organize the many items stored there. While a bucket has the capacity to hold a number of items, each individual item belongs to exactly one bucket.

The inexpensive cost of this storage type (cents per GB), along with its serverless approach and its ease of use, has contributed to its widespread adoption in cloud-native system architectures. The cloud service provider is then responsible for handling the laborious tasks of data replication, availability, integrity checks, capacity planning, and so on. APIs make it possible for applications to both save and retrieve items.

Based on factors like cost, availability, and frequency of access, cloud storage has four different storage classes. They are Standard, Nearline, Coldline, and Archive as shown in Figure 1.19:

Figure 1.19: GCP Storage

  • Standard class: This class of storage allows for high frequency access and is the type of storage that is most often used by software developers.
  • Nearline storage class: This class is used for data that is not accessed very regularly, generally no more than once a month.
  • Coldline storage class: This class is used for records that are normally accessed no more than once every three months.
  • Archive storage class: This class is used for data that is accessed with the lowest frequency and is often used for the long-term preservation of data.
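The storage class of an existing object can be changed in place; a minimal sketch using gsutil, where the bucket and object names are placeholder assumptions:

gsutil rewrite -s nearline gs://my-unique-bucket-name/sample.csv   # rewrites the object into the Nearline storage class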

Command Line Interface – Basics of Google Cloud Platform

If you are a developer, you may control your development process and your GCP resources using the gcloud command-line tool provided by the Google Cloud SDK. To get command-line access to Google Cloud’s computational resources, you may use Cloud Shell. Cloud Shell is a Debian-based virtual machine with a 5 GB home directory, from which you can easily manage your GCP projects and resources. Cloud Shell comes pre-installed with the gcloud command-line tool and other necessary tools, allowing you to get up and running fast. To use Cloud Shell, follow the steps below:

Activate Cloud Shell as shown in Figure 1.17:

Figure 1.17: GCP CLI

Click on Activate Cloud Shell. It will take a few minutes to open the Cloud Shell command window.

Once Cloud Shell is activated, a black terminal pane appears at the bottom of the screen where you can type commands, as shown in Figure 1.18:

Figure 1.18: GCP CLI

  1. Type commands:

gcloud projects create project_id – For project creation

gcloud projects delete project_id – For project deletion
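A few further commands that are useful once a project exists (the project ID is a placeholder):

gcloud config set project project_id – To set the active project for subsequent commands

gcloud projects list – To list the projects accessible from the account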

APIs

It is common for apps to communicate with Google Cloud via a Software Development Kit (SDK). Go, Python, and Node.js are a few of the many programming languages for which Google Cloud SDKs are available.
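Under the hood, the SDKs wrap REST APIs, which can also be called directly; a minimal sketch from Cloud Shell that lists the Cloud Storage buckets of a project (the project ID is a placeholder):

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://storage.googleapis.com/storage/v1/b?project=my-project-id"   # GET request against the Cloud Storage JSON API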

Note: We will use this method while getting predictions from the deployed models.

Storage

Along with compute and network, storage is considered one of the fundamental building blocks. Applications benefit from the increased persistence and durability that storage services provide. These services sit deep inside the platform and serve as the foundation for the vast majority of Google Cloud’s services, as well as the systems that you construct on top of it. They are the platform’s central pillars.

Three types of storage options are provided by Google Cloud:

  • Persistent Disks
  • Filestore
  • Cloud Storage (GCS)

They are explained as follows:

  • Persistent Disks: Block storage is provided by Google Cloud Persistent Disks, which are used by virtual machines hosted on Google Cloud (Google Cloud Compute Engine). Think of Persistent Disks as simple USB sticks: they may be attached to virtual machines or detached from them. They allow you to build data persistence for your services whether virtual machines are started, paused, or terminated. Persistent Disks power not just the virtual machines hosted on Google Cloud Compute Engine, but also the Google Kubernetes Engine service.

A Google Cloud Persistent Disk operates similarly to a virtual disc on your local PC. Persistent Disk can either be HDD or SSD, with the latter offering superior I/O performance. In addition, there is the choice of where they are placed as well as the sort of availability that is required, which may be either regional, zonal, or local.

Other capabilities of Google Cloud Persistent Disks that are lesser known but prove useful include automatic encryption, the ability to resize the disk while it is in use, and a snapshot capability that can be used both for backing up data and for creating images for virtual machines. Read and write access can be configured for multiple VMs: one VM can have write access while all other VMs have read access to the same Persistent Disk.
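For illustration, creating a disk and attaching it to a running VM takes two commands; a minimal sketch using the gcloud CLI, where the disk name, instance name, zone, size, and type are placeholder assumptions:

gcloud compute disks create my-data-disk --zone=us-central1-a --size=100GB --type=pd-ssd   # creates a standalone SSD Persistent Disk
gcloud compute instances attach-disk my-instance --disk=my-data-disk --zone=us-central1-a  # attaches it to an existing VM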

  • Filestore: Filestore is a network file storage service that is provided by Google Cloud. The idea of network file storage has been around for quite some time, and similar to block storage, it can also be found in the on-premises data centers that most businesses use. You should be comfortable with the notion if you are used to dealing with NAS, which stands for network-attached storage. In response to the dearth of services that are compatible with network-attached storage (NAS), Google has expanded its offerings to include a cloud file storage service.
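For illustration, a Filestore instance exposing an NFS share can be created with a single command; a minimal sketch using the gcloud CLI, where the instance name, zone, tier, share name, and network are placeholder assumptions:

gcloud filestore instances create my-filestore \
    --zone=us-central1-a \
    --tier=BASIC_HDD \
    --file-share=name=share1,capacity=1TB \
    --network=name=default   # the resulting share can be mounted from VMs over NFS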

New Project Creation – Basics of Google Cloud Platform

Google Cloud projects serve as the foundation for the creation, activation, and use of all Google Cloud services, such as managing APIs, enabling invoicing, adding and deleting collaborators, and managing permissions for Google Cloud resources.

A project can be created by following these steps in the web console:

  1. Navigate to IAM and admin | Manage resources. The sequence can be seen in Figure 1.12:

Figure 1.12: Project Creation

  1. Click on CREATE PROJECT.

The project creation can be seen in the following figure:

Figure 1.13: Project Creation

  1. Users need to provide a name for the project; follow the steps shown in Figure 1.14:

Figure 1.14: Project Creation

  1. Provide the Project name.
  2. The Project ID will be automatically populated; users can edit it during creation of the project. The project ID is needed to access the resources under the project through the SDK or APIs. Once the project is created, users cannot change the project ID.
  3. If users are creating a project under an organization, select the organization. Users with a free account cannot create organizations or folders; all their projects will be created under No organization.

Note: Users accessing GCP through a free account can create only a limited number of projects.

Deletion of Project

To delete any project that is active:

  1. Select the project that needs to be deleted.
  2. Click on DELETE; users will be prompted for confirmation.

This can be seen illustrated in Figure 1.15:

Figure 1.15: Project deletion

Once users confirm the deletion of a project, it is marked for deletion and remains in that state for 30 days. Users can restore the project within that 30-day period; after that, the project and the resources associated with it are deleted permanently and cannot be recovered. A project that is marked for deletion is not usable.

Interacting with GCP services

Having discussed resources, let us see how we can work with them. GCP offers three basic ways to interact with its services and resources:

Google Cloud Platform Console

When working with Google Cloud, the console, or web user interface, is the most common method of interaction. It delivers a wealth of functionality and an easy-to-use interface for users who are just getting started with GCP.

Cloud console can be accessed with the link https://console.cloud.google.com.

The landing page of the Google Cloud console is shown in Figure 1.16:

Figure 1.16: GCP Console

Hierarchy of GCP – Basics of Google Cloud Platform

The term resource is used to describe anything that is put to use on Google Cloud Platform. Everything in the Google cloud has a clear hierarchy that resembles a parent-child connection. Hierarchy followed in Google Cloud Platform is as shown in Figure 1.11:

Figure 1.11: Hierarchy of GCP

The Organization node serves as the root of the GCP hierarchy and may stand for a company or firm. The organization is the parent of folders, projects, and their respective resources. The access-control rules applied to the organization hold for all of the projects and resources affiliated with it.

However, if we establish an account with a personal mail ID, as we did in the previous section, we will not be able to view an organization. On the other hand, if we log in with a Google Workspace account and then start a project, the organization is provided for us immediately. Without an organization, only a small number of the Resource Manager functions are available.

Under the organization we have folders. Folders give us an extra grouping mechanism; we may conceptualize them as a hierarchy of sub-organizations contained inside the larger organization. A folder may contain additional subfolders. Rights to access a project and all of its resources can be granted, completely or partially, at the level of the folder in question.

A project is the entity at the most fundamental level. Many projects can be nested inside organizations and folders. A project is absolutely necessary in order to make use of GCP resources, and it serves as the foundation for using cloud services, managing APIs, and enabling billing. A project has two different IDs connected with it: the project ID, which is a unique identifier for the project chosen by the user, and the project number, which is automatically issued whenever a project is created and cannot be modified.

The term resources refers to the components that make up Google Cloud Platform. Resources include things like cloud storage, databases, virtual machines, and so on. Each time we establish a cloud storage bucket or deploy a virtual machine, we link those resources to the appropriate project.
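The two project identifiers can be inspected from the command line; a minimal sketch using the gcloud CLI, where the project ID is a placeholder:

gcloud projects describe my-project-id   # the output includes projectId (user-chosen) and projectNumber (auto-issued, immutable)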

Services offered by GCP – Basics of Google Cloud Platform

Users may make use of a comprehensive selection of services provided by Google Cloud Platform. Each of these services may be placed into one of the categories shown in Figure 1.10:

Figure 1.10: Services of GCP

  • Google offers Cloud Storage for storing unstructured items, Filestore for sharing files in the traditional manner, and persistent disk for virtual machines in the storage space. Compute Engine, App Engine, Cloud Run, Kubernetes Engine, and Cloud Functions are the core computing services that Google Cloud Platform provides.
  • Cloud SQL, which supports MySQL, PostgreSQL, and Microsoft SQL Server, and Cloud Spanner, which is a massively scalable database capable of running on a global scale, are the relational database services that GCP provides.
  • Bigtable, Firestore, Memorystore, and Firebase Realtime Database are different NoSQL services that Google provides. When dealing with massive amounts of analytical work, Bigtable is the most effective solution. Firestore is well suited for use in the construction of client-side web and mobile apps. Firebase Realtime Database is ideal for real-time data synchronization between users, such as is required for collaborative app development. Memorystore is a kind of datastore that operates entirely inside memory and is generally used to accelerate application performance by caching data that is frequently accessed.
  • BigQuery is the name of the data warehouse service offered by Google.
  • A Virtual Private Cloud (VPC) is a virtual network on GCP, analogous to an on-premises network. By using VPC Network Peering, virtual private clouds may be linked to one another. Users may utilize Cloud VPN, which operates over the internet, to establish a protected connection between a VPC and an on-premises network. Alternatively, users can establish a dedicated, private connection by using either Cloud Interconnect or Peering. To facilitate the migration of applications and data sets to its platform, GCP provides a diverse set of options, and it offers Anthos as an alternative for hybrid cloud deployments.
  • The field of data analytics is one in which Google particularly excels. Pub/Sub is used as a buffer for services that may not be able to deal with large surges in incoming data. Dataproc is a managed Hadoop and Spark implementation. Dataflow is a managed implementation whose underlying technology is Apache Beam. Dataprep lets you do data processing even if you do not know how to write code, and it leverages Dataflow behind the scenes. Users may use Google Looker Studio to visualize or present data using graphs, charts, and other such graphical representations.
  • The platform provides AI and ML services for a diverse group of customers. Vertex AI provides an AutoML option for novices; for more experienced users, it provides trained models that make predictions via an API; and it also provides various options for advanced AI practitioners.
  • Cloud Build enables you to develop continuous integration / continuous deployment pipelines. Private Git repositories that are hosted on GCP are known as Cloud Source Repositories. Artifact Registry expands on the capabilities of Container Registry and is the recommended container registry for Google Cloud. It provides a single location for storing and managing your language packages and Docker container images.
  • IAM stands for Identity and Access Management, and it enables users and apps to have roles assigned to them. Everything you store in the GCP is by default encrypted. Companies now have the ability to control their encryption keys thanks to Cloud Key Management. Your API keys, passwords, certificates, and any other sensitive information may be safely stored in the Secret Manager.
  • The Monitoring, Logging, Error Reporting, Trace, Debugger, and Profiler functions are all included in the Cloud Operations suite. The Security Command Center presents active security threats and vulnerabilities, as well as compliance infractions. The creation of Google Cloud Platform resources may be automated with the help of Cloud Deployment Manager.

Cloud Service Model – Basics of Google Cloud Platform

The cloud platform offers a variety of services, all of which may be roughly placed into one of three distinct categories:

  • Infrastructure as a service (IAAS)
  • Platform as a service (PAAS)
  • Software as a service (SAAS)

The difference between the cloud service models is illustrated in Figure 1.9:

Figure 1.9: Cloud Service Model

Let us imagine we are working on an application and hosting it at the same time on a server that is located on our premises. In this particular circumstance, it is our obligation to own and maintain the appropriate infrastructure, as well as the appropriate platforms, and of course, our application.

  • Infrastructure as a service: In the case of IAAS, it is the cloud provider’s obligation to supply the necessary infrastructure, which may include virtual machines, networking, and storage devices. We are still responsible for ensuring that we have the appropriate platform for development and deployment. We make use of the underlying infrastructure but do not control it. Google’s Compute Engine and Kubernetes Engine are examples of infrastructure as a service.
  • Platform as a service: In the case of PAAS, in addition to the infrastructure, the responsibility for providing the appropriate platform for development and deployment, such as the operating system and the tooling for the programming-language environment, lies with the cloud service provider. One example of a PAAS platform is Google App Engine.
  • Software as a service: In the case of SAAS, a cloud service provider rents out to customers applications that run on the provider’s own infrastructure. The maintenance of the software applications, in addition to the platform and the underlying infrastructure, falls within the purview of the cloud service provider. These applications are accessible to us on whatever device we choose by way of web browsers, mobile apps, and so on. Email (Gmail) and cloud storage (Google Drive) are two excellent instances of SAAS.
  • Data as a service (DAAS): DAAS is a service that is only now starting to gain broad use, in contrast to the three service models mentioned before, which have been popular for more than a decade. This is partly owing to the fact that general cloud computing services were not originally built for the management of enormous data workloads; rather, they catered to the hosting of applications and basic data storage needs (as opposed to data integration, analytics, and processing).

SaaS eliminates the need to install and administer software on a local computer. Similarly, the Data-as-a-Service methodology centers on the on-demand delivery of data from a number of sources using application programming interfaces (APIs). It is intended to make data access more straightforward and to provide curated datasets or streams of data that can be consumed in a variety of formats, often unified via data virtualization. In fact, a DaaS architecture may consist of a wide variety of data management technologies, such as data virtualization, data services, and self-service analytics.

In its most basic form, DaaS enables organizations to tap the ever-growing quantity and sophistication of the data sources at their disposal in order to give consumers the most relevant insights. The democratization of data is absolutely necessary for every company that wants to get actual value from its data. It offers significant potential to monetize an organization’s data and acquire a competitive edge by adopting a more data-centric approach to operations and procedures.