Deploying Azure Kubernetes Service (AKS) with Terraform

VAIBHAV HARIRAMANI
19 min read · Apr 25, 2024


A Comprehensive Guide to Deploying Azure Kubernetes Service (AKS) with Terraform

Introduction

In today’s fast-paced world of cloud computing, managing infrastructure efficiently is paramount. Azure Kubernetes Service (AKS) offers a robust platform for deploying, managing, and scaling containerized applications with ease. In this guide, we’ll explore how to harness the power of Terraform to provision an AKS cluster on Azure, leveraging infrastructure as code principles.

Terraform is an open-source IaC (Infrastructure-as-Code) tool for configuring and deploying cloud infrastructure. It codifies infrastructure in configuration files that describe the desired state for your topology.

Prerequisites

Before we dive into the world of Terraform and AKS, ensure you have the following prerequisites in place:

Azure Account: Make sure you have an active Azure account. If you don’t have one, you can sign up for a free Azure account here.

Terraform: Ensure that you have Terraform installed on your local machine. You can download the latest version of Terraform here.

Azure CLI: The Azure Command-Line Interface (CLI) is required for managing your Azure resources. You can download and install the Azure CLI here.
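Before diving in, it's worth a quick sanity check that both tools are on your PATH (version numbers will vary):

terraform -version
az version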

The following resources will be created using this Terraform configuration:

  • Resource Group
  • Service Principal
  • AKS cluster using the SPN
  • Azure Key Vault to store the client secret
  • Secret uploaded to the Key Vault
  • kubeconfig for AKS

Logging in to Azure and Getting the Subscription ID

# login and follow the prompts
az login --use-device-code
# view and select your subscription account
az account list -o table
# grab the default subscription id (PowerShell; bash equivalent below)
$subscriptionId = az account list --query "[?isDefault].id" --output tsv
Write-Host $subscriptionId
az account set --subscription $subscriptionId
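If you are in bash rather than PowerShell, the subscription lookup is the same az command with the shell syntax swapped (a direct translation of the lines above):

subscriptionId=$(az account list --query "[?isDefault].id" --output tsv)
echo "$subscriptionId"
az account set --subscription "$subscriptionId"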

Setting Up Terraform

# create the configuration file we will work in
touch main.tf
# after declaring the providers below, initialize the working directory
terraform init

Terraform providers for Azure infrastructure

There are several Terraform providers that enable the management of Azure infrastructure:

  • AzureRM: Manage stable Azure resources and functionality such as virtual machines, storage accounts, and networking interfaces.
  • AzureAD: Manage Microsoft Entra resources such as groups, users, service principals, and applications.
  • AzureDevops: Manage Azure DevOps resources such as agents, repositories, projects, pipelines, and queries.
  • AzAPI: Manage Azure resources and functionality using the Azure Resource Manager APIs directly. This provider complements the AzureRM provider by enabling the management of Azure resources that aren’t yet released in AzureRM. For more information about the AzAPI provider, see Terraform AzAPI provider.
  • AzureStack: Manage Azure Stack Hub resources such as virtual machines, DNS, virtual networks, and storage.

In this tutorial, we’ll utilize AzureRM as our provider. Therefore, to initialize our Terraform plan, we need to declare our providers as follows.

# We strongly recommend using the required_providers block to set the
# Azure Provider source and version being used
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

Now, we’ll dive into the process of setting up Azure Kubernetes Service (AKS) using Terraform. But first, let’s understand what a resource group is.

Setting Up Resource group

Creating the resources

In Azure, all infrastructure elements such as virtual machines, storage, and our Kubernetes cluster need to be attached to a resource group. The resource group also defines the region in which resources are created. To create a resource group, we use the azurerm_resource_group stanza. The example below uses variables for the name and the location; variables allow you to create a dynamic configuration and avoid hardcoding values.

In Azure, a resource group is a logical container used to organize and manage related Azure resources. These resources can include virtual machines, storage accounts, databases, networking interfaces, and more.

Resource groups allow you to manage and monitor these resources as a single entity, making it easier to deploy, manage, and track the costs associated with your Azure infrastructure. For more info, check out this article: How to Use Azure Resource Groups for Better VM Management | by Jay Chapel | Medium.

For setting up the resource group with Terraform, we will use azurerm_resource_group along with its required arguments, name and location:

resource "azurerm_resource_group" "name_of_your_rsg" {
name = ""
location = ""
}

Declaring an Input Variable

If you’re familiar with traditional programming languages, it can be useful to compare Terraform modules to function definitions: input variables are like function arguments.

Each input variable accepted by a module must be declared using a variable block:

variable "image_id" {
type = string
}

variable "availability_zone_names" {
type = list(string)
default = ["value_for_variables"]
}

Arguments in Terraform Variables

Terraform CLI defines the following optional arguments for variable declarations:

  • default - A default value which then makes the variable optional.
  • type - This argument specifies what value types are accepted for the variable.
  • description - This specifies the input variable's documentation.

To pass values into our resource group in main.tf, we will create a new file, variables.tf:

variable "rgname" {
type = string
description = "resource group name"
}

variable "location" {
type = string
default = "canadacentral"
}

Now edit main.tf:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg1" {
  name     = var.rgname
  location = var.location
}

As you may have noticed, we haven’t declared a default value for the rgname variable, because we want to keep this Terraform template reusable. To pass the value at runtime, we will create a .tfvars file.

.tfvars

Terraform allows you to define variable files called *.tfvars to create a reusable file for all the variables of a project. You can also set variables through TF_VAR_-prefixed environment variables, although values from terraform.tfvars take precedence over environment variables.

You can create multiple versions of this file, and then, apply and destroy using this file with the -var-file= flag.
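For example, with a second, hypothetical staging.tfvars you could target a different environment without touching the configuration itself:

terraform plan -var-file="staging.tfvars"
terraform apply -var-file="staging.tfvars"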

So for our template we will create terraform.tfvars with the following content:

rgname = "test-AKS-rg"
location = "canadacentral"

terraform init

Command: plan

The terraform plan command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure. By default, when Terraform creates a plan it:

  • Reads the current state of any already-existing remote objects to make sure that the Terraform state is up-to-date.
  • Compares the current configuration to the prior state and notes any differences.
  • Proposes a set of change actions that should, if applied, make the remote objects match the configuration.

terraform plan

After running terraform plan, we can see that Terraform will create the resource group.

Command: apply

The terraform apply command executes the actions proposed in a Terraform plan.

You can pass the -auto-approve option to instruct Terraform to apply the plan without asking for confirmation.

Warning: If you use -auto-approve, we recommend making sure that no one can change your infrastructure outside of your Terraform workflow. This minimizes the risk of unpredictable changes and configuration drift.

terraform apply --auto-approve

The resource group test-AKS-rg has been created on Azure.

Azure Service Principal

What is Azure Service Principal?

An Azure service principal is an identity created for use with applications, hosted services, and automated tools to access Azure resources. This access is restricted by the roles assigned to the service principal, giving you control over which resources can be accessed and at which level.

Think of it as a digital key that grants access to Azure resources. With finely tuned role assignments, you dictate precisely which resources the Service Principal can interact with and the actions it can perform.

Note: For security reasons, it’s always recommended to use service principals with automated tools rather than allowing them to log in with a user identity.

What is Azure Active Directory?

Azure Active Directory (Azure AD) is Microsoft’s enterprise cloud-based identity and access management (IAM) solution.

Why Do We Need Service Principals?

We need a service principal because our Kubernetes cluster needs to interact with Azure resources securely.

Here, the Service Principal acts as the bridge, facilitating authenticated access from the application to Azure AD, ensuring a seamless and secure transactional flow.

Service Principal Creation & Role Assignment

Now, let’s dive into the practical part. Terraform, with its declarative syntax and infrastructure-as-code paradigm, empowers us to orchestrate Azure resources effortlessly.

We are going to use azuread_application and azuread_service_principal.

The depends_on

Use the depends_on meta-argument to handle hidden resource or module dependencies that Terraform cannot automatically infer. You only need to explicitly specify a dependency when a resource or module relies on another resource's behavior but does not access any of that resource's data in its arguments.

Here’s a snippet showcasing the module’s structure and syntax:

main.tf

# Fetching Azure AD client configuration
data "azuread_client_config" "current" {}

# Creating the Azure AD application
resource "azuread_application" "main" {
  display_name = var.service_principal_name
  owners       = [data.azuread_client_config.current.object_id]
  depends_on = [
    azurerm_resource_group.rg1
  ]
}

# Creating the Azure AD service principal
resource "azuread_service_principal" "main" {
  client_id                    = azuread_application.main.client_id
  app_role_assignment_required = true
  owners                       = [data.azuread_client_config.current.object_id]
}

# Creating a password for the service principal created in the previous step,
# to be used by other resources
resource "azuread_service_principal_password" "main" {
  service_principal_id = azuread_service_principal.main.object_id
}

variables.tf

variable "service_principal_name" {
type = string
description = "The Name of the Service Principal to create."
}

terraform.tfvars

service_principal_name = "test-AKSDemo-spn"

output.tf

We’ll craft an output.tf file within our Terraform project to extract the secrets generated during the provisioning process. Here's a snippet demonstrating how we can achieve this:

output "service_principal_name" {
description = "name of service principal"
value = azuread_service_principal.main.display_name
}
output "service_principal_object_id" {
description = "object id of service principal"
value= azuread_service_principal.main.object_id
}
output "service_principal_tenant_id" {
value = azuread_service_principal.main.application_tenant_id
}
output "service_principal_application_id" {
description = "application_id"
value = azuread_service_principal.main.client_id
}
output "client_id" {
description = "The application id of AzureAD application created."
value = azuread_application.main.client_id
}
output "client_secret" {
description = "Password for service principal."
value = azuread_service_principal_password.main.value
}

In this snippet, we define output variables to capture the generated values. These output variables can then be consumed by other Terraform modules or scripts, facilitating seamless integration into your infrastructure.

Role Assignments

Azure role-based access control (Azure RBAC) is the authorization system you use to manage access to Azure resources. To grant access, you assign roles to users, groups, service principals, or managed identities at a particular scope.

azurerm_role_assignment

Assigns a given principal (user, group, or service principal) to a given role. We will use this resource to grant Contributor access to the service principal. Add this to your main.tf file:

# Fetching subscription details
data "azurerm_subscription" "current" {}

# Assigning the Contributor role to the service principal
resource "azurerm_role_assignment" "rolespn" {
  scope                = data.azurerm_subscription.current.id
  role_definition_name = "Contributor"
  principal_id         = azuread_service_principal.main.object_id

  depends_on = [
    azuread_service_principal.main
  ]
}

terraform plan

terraform apply --auto-approve

A new app registration has been created in Azure Active Directory.
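As a quick check from the CLI, you can look up the new service principal and its role assignment (the display name comes from our terraform.tfvars; replace <appId> with the client_id output, since that value isn't printed here):

# list the service principal created above
az ad sp list --display-name "test-AKSDemo-spn" --query "[].{name:displayName, appId:appId}" -o table
# confirm the Contributor assignment
az role assignment list --assignee <appId> --query "[].roleDefinitionName" -o tsv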

Secure Secrets with Terraform and Azure Key Vault

In the ever-evolving landscape of cloud security, safeguarding sensitive information is paramount. Azure Key Vault emerges as a guardian, offering a secure repository for secrets and cryptographic keys.

We want to use the client_id and client_secret generated for the service principal in the previous step, because they are required when provisioning our Kubernetes cluster. But we don’t want to expose them, so we will save those credentials in a key vault.

Understanding Azure Key Vault

Azure Key Vault serves as a centralized cloud service for securely storing and managing sensitive information such as passwords, API keys, and certificates. By leveraging Azure’s robust security measures, Key Vault ensures that secrets remain encrypted both at rest and in transit.

Storing Secrets in Key Vault Using Terraform

Terraform output values let you export structured data about your resources. You can use this data to configure other parts of your infrastructure with automation tools, or as a data source for another Terraform workspace. Outputs are also how you expose data from a child module to a root module.

To streamline our workflow, we’ll employ a Terraform module specifically crafted for Azure Key Vault management. This module abstracts away the complexity, allowing us to focus on defining our secrets and policies effortlessly.

We will use the azurerm_key_vault, azurerm_key_vault_access_policy, and azurerm_key_vault_secret resources from Terraform.

Argument Reference

The following arguments are supported:

  • name - (Required) Specifies the name of the Key Vault. Changing this forces a new resource to be created. The name must be globally unique. If the vault is in a recoverable state then the vault will need to be purged before reusing the name.
  • location - (Required) Specifies the supported Azure location where the resource exists. Changing this forces a new resource to be created.
  • resource_group_name - (Required) The name of the resource group in which to create the Key Vault. Changing this forces a new resource to be created.
  • sku_name - (Required) The Name of the SKU used for this Key Vault. Possible values are standard and premium.
  • tenant_id - (Required) The Azure Active Directory tenant ID that should be used for authenticating requests to the key vault.
  • access_policy - (Optional) A list of access_policy objects (up to 1024) describing access policies, as described below.

# Fetching Azure Resource Manager client configuration
data "azurerm_client_config" "current" {}

# Creating the Key Vault
resource "azurerm_key_vault" "kv" {
  name                = var.keyvault_name
  location            = var.location
  resource_group_name = var.rgname
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "premium"

  depends_on = [
    azuread_service_principal.main
  ]
}
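Note that var.keyvault_name is referenced above but hasn’t been declared yet. Here is a minimal sketch of the missing declaration along with a hypothetical value (Key Vault names must be globally unique, 3-24 alphanumeric characters and dashes):

# variables.tf
variable "keyvault_name" {
  type        = string
  description = "Name of the Azure Key Vault (must be globally unique)"
}

# terraform.tfvars
keyvault_name = "test-aksdemo-kv"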

azurerm_key_vault_access_policy

Manages a Key Vault Access Policy.

resource "azurerm_key_vault_access_policy" "example" {
key_vault_id = azurerm_key_vault.kv.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id


# key_permissions = [
# "Get", "List", "Update", "Create", "Import", "Delete", "Recover", "Backup", "Restore"
# ]

secret_permissions = ["Get", "List", "Set", "Delete"]

}
# Creating Key Vault Secret
resource "azurerm_key_vault_secret" "example" {
name = azuread_service_principal.main.client_id
value = azuread_service_principal_password.main.value
key_vault_id = azurerm_key_vault.kv.id

depends_on = [
azurerm_key_vault.kv
]
}

terraform plan

terraform apply --auto-approve

The Key Vault and credentials have been created.
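To confirm the upload from the CLI (using the hypothetical vault name from the sketch above):

az keyvault secret list --vault-name "test-aksdemo-kv" -o table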

So far we have configured

  • Resource Group
  • Service Principal
  • Role Assignments
  • Azure Key Vault to store the client secret
  • Secret uploaded to the Key Vault

Now we are going to create the Azure Kubernetes Service (AKS) cluster.

Azure Kubernetes Service

Creating AKS using the azurerm_kubernetes_cluster resource from Terraform

Argument Reference

The following arguments are supported:

  • name - (Required) The name of the Managed Kubernetes Cluster to create. Changing this forces a new resource to be created.
  • location - (Required) The location where the Managed Kubernetes Cluster should be created. Changing this forces a new resource to be created.
  • resource_group_name - (Required) Specifies the Resource Group where the Managed Kubernetes Cluster should exist. Changing this forces a new resource to be created.
  • default_node_pool - (Required) A default_node_pool block as defined below.
  • dns_prefix - (Optional) DNS prefix specified when creating the managed cluster. Possible values must begin and end with a letter or number, contain only letters, numbers, and hyphens and be between 1 and 54 characters in length. Changing this forces a new resource to be created.
  • service_principal - (Optional) A service_principal block as documented below. One of either identity or service_principal must be specified.
  • linux_profile - (Optional) A linux_profile block, which supports the following:
      • admin_username - (Required) The Admin Username for the Cluster. Changing this forces a new resource to be created.
      • ssh_key - (Required) An ssh_key block as defined below. Only one is currently allowed. Changing this will update the key on all node pools. More information can be found in the documentation.

Note that one of either an identity or a service_principal block must be specified.
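For completeness, the managed-identity alternative is a small block inside azurerm_kubernetes_cluster (shown only as a sketch; this guide sticks with the service principal):

identity {
  type = "SystemAssigned"
}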

So before jumping into the syntax, we have to set up an SSH key to connect to our cluster.

Generate SSH key

ssh-keygen -t rsa -b 4096 -N "VeryStrongSecret123!" -C "your_email@example.com" -q -f  ~/.ssh/id_rsa
SSH_KEY=$(cat ~/.ssh/id_rsa.pub)

Using our SSH Keys

To use your SSH keys, copy your public SSH key to the system you want to connect to, and use your private SSH key on your own system. Your private key will match up with the public key and grant access. (Note: the AKS module below actually generates its own key pair with the tls_private_key resource, so this ssh-keygen step is an alternative if you prefer to bring your own key.)

Setting Up AKS Resources

Okay, we know the resources we need; to organize them, we are going to use child modules.

Modules in Terraform

Modules are containers for multiple resources that are used together. A module consists of a collection of .tf and/or .tf.json files kept together in a directory.

Modules are the main way to package and reuse resource configurations with Terraform.

The Root Module

Every Terraform configuration has at least one module, known as its root module, which consists of the resources defined in the .tf files in the main working directory.

Child Modules

A Terraform module (usually the root module of a configuration) can call other modules to include their resources into the configuration. A module that has been called by another module is often referred to as a child module.

Child modules can be called multiple times within the same configuration, and multiple configurations can use the same child module.

# For creating the AKS module, create a folder called aks inside modules.
# The directory will look like this:
.
├── main.tf
└── modules/
    └── aks/
        └── main.tf

Inside modules/aks/main.tf, start with the following:

# Datasource to get the latest AKS version
data "azurerm_kubernetes_service_versions" "current" {
  location        = var.location
  include_preview = false
}

# Generates an SSH key pair for the cluster's Linux nodes
resource "tls_private_key" "pk" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

We are setting the standard attributes: the name of the cluster, the location, and the resource_group_name. Then we set the dns_prefix; the dns_prefix forms part of the fully qualified domain name used to access the cluster.

The linux_profile stanza allows us to configure the settings that enable logging into the worker nodes using SSH.

With AKS you only pay for the worker nodes, and in the next block, default_node_pool, we configure the details for these. This block includes the number of workers we would like to create and the type of workers. Should we need to scale the cluster up or down at a later date, we can change the min and max counts defined in this block.

The service_principal block allows us to set the client_id and client_secret that Kubernetes uses when creating Azure load balancers. For this example we can set these to the main client_id and secret which were used to create the resources. When running in a production environment, we would usually set this to a specific restricted account. We would never want to hard-code this information, so we get these values from variables passed in by the root module.



# Setting up the cluster with its first (default) node pool
resource "azurerm_kubernetes_cluster" "aks-cluster" {
  name                = var.aks_cluster_name
  location            = var.location
  resource_group_name = var.resource_group_name
  dns_prefix          = "${var.resource_group_name}-cluster"
  kubernetes_version  = data.azurerm_kubernetes_service_versions.current.latest_version
  node_resource_group = "${var.resource_group_name}-nrg"

  default_node_pool {
    name                = "defaultpool"
    vm_size             = "Standard_DS2_v2"
    zones               = [1, 2, 3]
    enable_auto_scaling = true
    max_count           = 3
    min_count           = 1
    os_disk_size_gb     = 30
    type                = "VirtualMachineScaleSets"
    node_labels = {
      "nodepool-type" = "system"
      "environment"   = "staging"
      "nodepoolos"    = "linux"
    }
    tags = {
      "nodepool-type" = "system"
      "environment"   = "staging"
      "nodepoolos"    = "linux"
    }
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }

  linux_profile {
    admin_username = "ubuntu"
    ssh_key {
      key_data = tls_private_key.pk.public_key_openssh
    }
  }

  provisioner "local-exec" { # writes "myKey.pem" to your working directory!
    command = "echo '${tls_private_key.pk.private_key_pem}' > ./myKey.pem"
  }

  network_profile {
    network_plugin    = "azure"
    load_balancer_sku = "standard"
  }
}

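The local-exec provisioner above writes the generated private key to myKey.pem in your working directory. If you plan to SSH into nodes with it, tighten its permissions and consider keeping it out of version control (a convenience step, not required by the cluster itself):

chmod 600 ./myKey.pem
echo "myKey.pem" >> .gitignore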

To pass values into the module, we will create a new file, variables.tf, inside modules/aks:

variable "location" {
description = "The location where the resources will be created."
}
variable "aks_cluster_name" {
type = string
description = "Name of your AKS cluster"
}
variable "resource_group_name" {
type = string
description = "Resource group name that the AKS cluster is located in"
}
variable "service_principal_name" {
type = string
description = "value for service principal application Id"
}
variable "ssh_public_key" {
description = "value"
default = "~/.ssh/id_rsa.pub"
}
variable "client_id" {
type=string
description="Azure Service Principal client id"
}
variable "client_secret" {
type = string
description = "value for Azure Service Principal password"
sensitive = true
}

Now, to call this module from our root configuration:

main.tf

# Create Azure Kubernetes Service
module "aks" {
  source                 = "./modules/aks"
  service_principal_name = var.service_principal_name
  client_id              = azuread_application.main.client_id
  client_secret          = azuread_service_principal_password.main.value
  aks_cluster_name       = var.cluster-name
  location               = var.location
  resource_group_name    = var.rgname

  depends_on = [
    azuread_service_principal.main
  ]
}

variables.tf

variable "cluster-name" {
type = string
description = "AKS cluster-name"

}

terraform.tfvars

cluster-name           = "test-aks-demo-cluster"

output.tf

output "kube_config" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config_raw}"
}

output "host" {
value = "${azurerm_kubernetes_cluster.k8s.kube_config.0.host}"
}

terraform plan

terraform apply --auto-approve

A new AKS cluster has been created.

Alright, our cluster is created. Let's create one more node pool using azurerm_kubernetes_cluster_node_pool, a Terraform resource that manages a node pool within a Kubernetes cluster. Add this to modules/aks/main.tf:

resource "azurerm_kubernetes_cluster_node_pool" "monitoring" {
name = "monitoring"
kubernetes_cluster_id = azurerm_kubernetes_cluster.aks-cluster.id
vm_size = "Standard_DS2_v2"
node_count = 1
os_disk_size_gb = 250
os_type = "Linux"
}
it has provisioned 2 node pool

Access Your AKS Cluster with kubectl

After the Terraform run completed, I connected to my AKS cluster using the Azure CLI. I ran the following command:

az aks get-credentials --resource-group MyAKSResourceGroup --name MyAKSCluster

This command configured kubectl to connect to my AKS cluster.

Note: Replace MyAKSResourceGroup and MyAKSCluster with your actual Azure resource group and AKS cluster name.

Verify Your AKS Cluster

I ran the following commands to make sure my AKS cluster was up and running:

# -raw avoids the quoting Terraform adds around string outputs
terraform output -raw kube_config > azurek8s
export KUBECONFIG=./azurek8s   # on Windows PowerShell: $env:KUBECONFIG = ".\azurek8s"
kubectl get nodes

Example output (assuming cluster is healthy):

NAME                                 STATUS   ROLES    AGE   VERSION
aks-defaultpool-12512361-vmss000000  Ready    <none>   53m   v1.29.2
aks-monitoring-33570760-vmss000000   Ready    <none>   51m   v1.29.2

You should see the nodes in your AKS cluster, indicating a successful setup.

For those familiar with Kubernetes, setting up a cluster is just the beginning. Beyond the basic cluster setup, there’s often a need to provision additional infrastructure to make the Kubernetes cluster fully operational. This includes essential components like Ingress controllers, monitoring tools such as Prometheus, and logging services like Loki, Grafana, and Fluentd.

Setting Up Services in Kubernetes

Traditionally, configuring these components involved manually applying YAML files to the Kubernetes cluster, a process prone to errors and inconsistencies. However, Terraform offers a compelling alternative with its Kubernetes provider. This provider enables users to define and manage Kubernetes resources using Terraform configuration files, streamlining the provisioning process and ensuring consistency across deployments.

Let’s delve into how we can leverage Terraform to deploy a Kubernetes workload consisting of two pods and a load balancer. To achieve this, we’ll create a Terraform module dedicated to Kubernetes resource management. Within this module, we’ll define the desired Kubernetes resources using Terraform’s declarative syntax.

While our example focuses on deploying a basic workload, Terraform’s flexibility allows users to extend these modules to provision various components such as monitoring systems, Ingress controllers, or service meshes. By modularizing infrastructure provisioning, Terraform empowers users to efficiently manage Kubernetes environments while adhering to best practices and maintaining scalability.

# For creating the k8s module, create a folder called k8s inside modules.
# The directory will now look like this:
.
├── main.tf
└── modules/
    ├── aks/
    │   └── main.tf
    └── k8s/
        └── k8.tf

We are going to use the kubernetes provider, which needs authentication details. There are a few ways for Terraform to authenticate with Kubernetes, but the easiest is to tell it not to load a config file and instead pass the host of the AKS API server, the client certificate, the client key, and the cluster CA certificate. As noted, we get these from variables. Inside the k8.tf file, put this:

provider "kubernetes" {
host = var.host
client_certificate = var.client_certificate
client_key = var.client_key
cluster_ca_certificate = var.cluster_ca_certificate
}

Create a variables.tf file in this module folder:

variable "host" {}
variable "client_certificate" {}variable "client_key" {}variable "cluster_ca_certificate" {}

Once we have declared the provider configuration, we will set up the deployment configuration. For that we will use kubernetes_deployment:

resource "kubernetes_deployment" "example" {
metadata {
name = "terraform-example"
labels = {
test = "MyExampleApp"
}
}
  spec {
replicas = 3
selector {
match_labels = {
test = "MyExampleApp"
}
}
template {
metadata {
labels = {
test = "MyExampleApp"
}
}
spec {
container {
image = "nginx:1.7.8"
name = "example"
resources {
limits = {
cpu = "0.5"
memory = "512Mi"
}
requests = {
cpu = "250m"
memory = "50Mi"
}
}
liveness_probe {
http_get {
path = "/nginx_status"
port = 80
http_header {
name = "X-Custom-Header"
value = "Awesome"
}
}
initial_delay_seconds = 3
period_seconds = 3
}
}
}
}
}
}
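The workload described earlier also calls for a load balancer in front of the pods. Here is a minimal sketch of a kubernetes_service of type LoadBalancer, assuming the same test = "MyExampleApp" labels used by the deployment above:

resource "kubernetes_service" "example" {
  metadata {
    name = "terraform-example"
  }

  spec {
    selector = {
      test = "MyExampleApp"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

On AKS, applying this makes the cluster provision an Azure load balancer with a public IP; kubectl get service terraform-example shows the external IP once it is ready.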

Hooray!

Congratulations! 🎉 Together, we’ve successfully built a simple AKS cluster using Terraform. I hope you enjoyed this hands-on journey into the world of Azure Kubernetes Service and Infrastructure as Code.

Github Repo

But wait, there’s more to explore! In upcoming articles, I’ll delve into exciting topics like Ansible, Helm charts, and beyond. Your feedback and questions are invaluable, so feel free to share as I continue this learning adventure. Stay curious, and let’s keep building amazing things! 🚀

Thank You for reading

Please give 👏🏻 Claps if you like the blog.

GEEKY BAWA

just a silly geek who loves to seek out new technologies and experience cool stuff.

Do Checkout My other Blogs

Do Checkout My Youtube channel

If you want to get in touch (and, by the way, if you know a good joke), you can connect with me on Twitter or LinkedIn.

Thanks for reading!😄 🙌


Do find time to check out my other articles and the further readings in the reference section. Kindly remember to follow me so as to get notified of my publications.

Made with ❤️by Vaibhav Hariramani

Don’t forget to tag us

If you find this blog beneficial, don't forget to share it with your friends and mention us as well. And don't forget to share us on LinkedIn, Instagram, Facebook, Twitter, and GitHub.

More Resources

To learn more about these resources, you can refer to some of these articles written by me:

Download THE VAIBHAV HARIRAMANI APP

The Vaibhav Hariramani App (Latest Version)

THE VAIBHAV HARIRAMANI APP consists of tutorials, projects, blogs, and vlogs from our site, developed using Android Studio with WebView. Try installing it on your Android device.

Follow me

on Linkedin, instagram, facebook , twitter, Github

Happy coding ❤️ .

