Wednesday, April 23, 2025

Terraform [HCL] Language - Writing Terraform code


 

HashiCorp Configuration Language (HCL):


Expressions:
  * Expressions work with values in the configuration.
  * They can be simple values such as text or numbers.
  * They can also be complex, such as data references, loops and conditions.
  Example:
 list(tuple) - ["us-east-1", "us-east-2"]
 map - { name = "user1", department = "devops" }
 bool - true or false
 
version = "~> 4.16"
~> pins the major version: it will allow minor/patch releases up to 4.99, but it will not change the major number, which in our case is "4".
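For example, the constraint is typically set inside the required_providers block (a minimal sketch):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16" # any 4.x release from 4.16 up, never 5.0
    }
  }
}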

"*" operation:
It will allow the number in the loop and avoid overloading a variable into memory.
Example:
output "ebs_block_device" {
  description = "block device volume IDs"
  value = aws_instance.splat_lab_labs.ebs_block_device[*}.volume_id
}
Functions:
  * A function is one or more instructions that perform a specific task.
  * Terraform functions are used to add functionality or to transform and combine values.
  Example:
 resource "aws_iam_user" "web_user" {
   count = 5
   name  = "user-${count.index}"
   tags = {
     time_created = timestamp()
     department   = "OPS"
   }
 }
Example 2:
resource "aws_iam_user" "functional_user" {
  name = "functional-user"
  tags = {
    department   = "OPS"
    time_created = timestamp()
    time2        = formatdate("MM DD YYYY hh:mm ZZZ", timestamp())
  }
}

Meta Arguments:
count - It allows creating multiple instances of a resource from a single block.
Example:
resource "aws_instance" "count_test" {
  count         = 2
  ami           = "ami-0c7c4e3c6b4941f0f"
  instance_type = "t2.micro"
  tags = {
    Name = "Count-Test-${count.index}"
  }
}
for_each meta-argument:
for_each iterates over a set or map, creating one resource instance per element of the sequence.
Example:
Creating four IAM users, one per element of the set.

resource "aws_iam_user" "Accounts" {
  for_each = toset(["Shiva", "Dev", "John", "Abdul"])
  name     = each.key
}
Local Values:
  * A local value assigns a name to an expression so that it can be reused easily.
  * Use cases include various lists [ports, usernames] & references to other values.
Example:
resource "aws_iam_user" "accounts" {
  for_each = local.accounts
  name     = each.key
}
We define the local value within a locals block:
locals {
  accounts = toset(["James", "Don"])
}

Dynamic block:
A dynamic block generates repeatable nested blocks inside a resource block, making the code reusable instead of writing the same nested block out by hand for each element.
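A minimal sketch, assuming a hypothetical security group that opens a list of ports:

variable "ingress_ports" {
  type    = list(number)
  default = [22, 80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  # One ingress block is generated per port in the list.
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}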

Version Constraints:
  * Version constraints are configurable strings that manage the version of software to be used with Terraform; this covers providers as well as the Terraform version itself.
  * Terraform versions follow semantic versioning (Major.Minor.Patch).

=  constraint - Allows the exact version only
!= constraint - Excludes the exact version number
< >   - Greater than, less than a version number
>= <= - Greater than or equal to, or less than or equal to, that version
~>    - Only the rightmost number increments [minor or patch number]
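Constraints can also be combined with commas; a small sketch:

terraform {
  # Any release from 1.4.0 up to, but not including, 2.0.0.
  required_version = ">= 1.4.0, < 2.0.0"
}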

Storing the state file in a remote object store through Terraform code:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.0.1"
    }
  }
  required_version = "<= 1.4.6"
}

module "s3_bucket" {
  source= "terraform-aws-modules/s3-bucket/aws"
  version "3.14.0"
  bucket =""
  acl = "private"
  force_destroy = true
  
  control_object_ownership = true
  object_ownership = "ObjectWriter"
  
  versioning = {
    enabled =true
  }
|

The Terraform state file is then saved in this bucket by configuring the bucket as a backend.
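A minimal backend sketch, assuming a hypothetical bucket name:

terraform {
  backend "s3" {
    bucket = "my-tf-state-bucket"      # hypothetical bucket name
    key    = "prod/terraform.tfstate"  # path of the state object in the bucket
    region = "us-east-1"
  }
}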

Lifecycle management:
create_before_destroy : It creates the replacement resource before destroying the old one.
prevent_destroy : It prevents the resource from being destroyed.
ignore_changes : It ignores changes to the listed attributes, so they do not trigger an update.
replace_triggered_by : It replaces the resource when any of the referenced items change.
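A minimal sketch combining two of these arguments on a hypothetical instance:

resource "aws_instance" "web" {
  ami           = "ami-0c7c4e3c6b4941f0f"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
    ignore_changes        = [tags] # tag edits will not trigger an update
  }
}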

Saturday, April 12, 2025

Use case study of CloudFormation and Terraform


 

Scope: CloudFormation is very powerful because it is developed and supported directly by AWS, but Terraform has a great community that always works at a fast pace to ensure new resources, and features are implemented for providers quickly.
Type: CloudFormation is a managed service by AWS, but Terraform has a CLI tool that can run from your workstation, a server, or a CI/CD system (such as Jenkins, GitHub Actions, etc.) or Terraform Cloud (a SaaS automation solution from HashiCorp).
License and support: CloudFormation is a native AWS service, and AWS Support plans cover it as well. Terraform is an enterprise product and an open source project. HashiCorp offers 24/7 support, but at the same time, the huge Terraform community and provider developers are always helpful.
Syntax/language: CloudFormation supports both JSON and YAML formats. Terraform uses HashiCorp Configuration Language (HCL), which is human-readable as well as machine-friendly.
Architecture: CloudFormation is an AWS-managed service to which you send/upload your templates for provisioning; on the other hand, Terraform is a decentralized system with which you can provision infrastructure from any workstation or server.
Modularization: In CloudFormation, nested stacks and cross-stack references can be used to achieve modularization, while Terraform is capable of creating reusable and reproducible modules.
User experience/ease of use: In contrast to CloudFormation, which is limited to AWS services, Terraform spans multiple cloud service providers such as AWS, Azure, and Google Cloud Platform, among others. This flexibility allows Terraform to provide a unified approach to managing cloud infrastructure across multiple providers, making it a popular choice for organizations that use more than one cloud provider.
Life cycle and state management: CloudFormation stores the state and manages it with the use of stacks. Terraform stores the state on disk in JSON format and allows you to use a remote state system, such as an AWS S3 bucket, that gives you the capability of tracking versions.
Import from existing infrastructure: It is possible to import resources into CloudFormation, but only a few resources are supported. It is possible to import all resources into Terraform state, but it does not generate configuration in the process; you need to handle that. But there are third-party tools that can generate configuration, too.
Verification steps: CloudFormation uses change sets to verify the required changes. Terraform has a powerful plan for identifying changes and allows you to verify your changes to existing infrastructure before applying them.
Rolling updates and rollbacks: CloudFormation automatically rolls back to the last working state. Terraform has no feature for rolling updates or rollbacks, but you can build a rollback system using a CI/CD system.
Multi-cloud management: CloudFormation is AWS-only, but Terraform supports multiple cloud providers and many more services.
Compliance integration: CloudFormation is built by AWS, so compliance is already assured, but for Terraform, you need to implement third-party tools yourself to achieve compliance.
Deployment type: CloudFormation has a built-in CI/CD system that takes care of everything concerning deployment and rollbacks. Terraform can be deployed from any system, but you need to build your CI/CD workflow or adopt a service that can fill the gaps.
Drift detection: Both tools have drift detection by default.
Cost: Using AWS CloudFormation does not incur any additional charges beyond the cost of the AWS resources that are created, such as Amazon EC2 instances or Elastic Load Balancing load balancers. In contrast, Terraform is an open source project that can be used free of charge. However, to obtain enterprise-level features such as CI/CD automation and state management, you may need to consider using additional services and systems provided by HashiCorp or third-party service providers. These additional services may come with their own costs.

Terraform - Part 2


 

Terraform Workflow:
Terraform workflows consist of five fundamental steps:


Write - Author your configuration code, typically organized into modules.
Init - Initialize the working directory and download the required provider plugins.
Plan - Review and predict the changes, and determine whether to accept these changes.
Apply - Implement the changes in the real environment.
Destroy - Tear down the infrastructure that was created.
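The first step happens in your editor; the remaining steps map to these CLI commands:

terraform init      # download provider plugins and set up the backend
terraform plan      # preview the changes before applying them
terraform apply     # implement the changes in the real environment
terraform destroy   # tear down the infrastructure that was created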




We can validate and format the file through the terraform fmt command.

[root@thiru project]# terraform fmt main.tf
│ Error: Invalid multi-line string
│   on main.tf line 15:
│   15: resource "aws_instance" "Web_server {
│   16:   ami =
│ Quoted strings may not be split over multiple lines. To produce a multi-line string, either use the \n escape to represent a newline character or use the
│ "heredoc" multi-line template syntax.

[root@thiru project]# terraform fmt main.tf
[root@thiru project]#














Friday, April 4, 2025

Ansible - Ansible Tower


 

Ansible Tower:
Ansible Tower is a web-based platform that makes working with Ansible easier in large-scale environments. Ansible Tower has been renamed the Ansible Automation Platform in the latest version.
The Ansible Automation Platform is classified as below:
  • Event-Driven Ansible controller - It triggers playbooks on specific events or reacts to specific events.
  • Ansible Automation Hub - An integrated platform to manage Ansible Content Collections.
  • Ansible Lightspeed AI [requires a separate subscription]

* Installation is controlled by an inventory file. Inventory files define the hosts and containers to be created and the variables on them.
Managing machines with Tower
Managing machines with Tower is similar to managing machines with Ansible from the command line.
Identify the managed machines from Tower:
* Set up /etc/hosts to resolve the DNS names of the managed machines.
We need to ensure that the setups below are in place on the managed machines (see the key-based login sketch after this list):
* Ensure sshd is running and accepts incoming connections through the firewall.
* A user account with sudo privileges is needed.
* Passwordless (key-based) SSH between Tower and the managed servers needs to be enabled.
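A minimal sketch of enabling key-based login from the Tower/AWX host (hypothetical user and hostname):

ssh-keygen -t rsa -b 4096            # generate a key pair on the Tower host
ssh-copy-id devops@managed-node01    # install the public key on the managed machine
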
Ansible Tower components:
* Organization - A collection of managed devices.
* Users - Administrative users that can be granted access to specific tasks.
* Inventories - Managed servers; these can be created statically or dynamically.
* Credentials - Credentials that are used to log in to managed machines (such as AWS or cloud credentials).
* Project - A collection of playbooks obtained from a certain location (e.g., Git).
* Template - The job definition with all of its parameters. It must be launched or scheduled.
Set up the project in AWX:
We need to follow the steps below to create our first project under AWX.
1) Create an organization
2) Create an inventory
3) Configure credentials
4) Set up the project
5) Define a job template
6) Run the job
Create an inventory:
Log in to the AWX console and navigate to Inventories under Resources.


Create a host entry under the inventory:


Create credentials:
Navigate to Credentials under Resources in the AWX GUI.

Click on Create Credential and define a username and password, which are used to maintain resources and templates.
Create a project:
Navigate to Projects under Resources.
Define a project name and select the AWX environment and Git repo.

Create a workflow template for more than one job.


Submit a job and monitor it. We can also schedule a job for a particular time.





Wednesday, April 2, 2025

GCP - VPC - part 2


 

VPC - Virtual Private Cloud

VPC is classified into two types in GCP.

Auto Mode : This is the default VPC in GCP. The network is configured automatically and the firewall is pre-configured as well. We should not use this mode in a production environment.

Custom Mode : The IP allocation and firewall setup need to be taken care of by us. It is the safe and secure setup for a production environment.
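As a hedged CLI sketch (hypothetical names), a custom-mode VPC and one subnet can be created with gcloud:

gcloud compute networks create prod-vpc --subnet-mode=custom
gcloud compute networks subnets create prod-subnet \
    --network=prod-vpc --region=us-east1 --range=10.10.1.0/24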

Subnets play a vital role in the VPC network.



Within the project, Subnets A & B can communicate across regions through the internal network. C & D need to communicate through the external network even though both belong to the same region.



Firewall Rule Configuration:
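As a hedged CLI sketch (hypothetical rule and network names), a basic allow-SSH ingress rule looks like this:

gcloud compute firewall-rules create allow-ssh \
    --network=prod-vpc --direction=INGRESS --allow=tcp:22 --source-ranges=0.0.0.0/0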


Load Balancing:


Application Load Balancer:

Proxy Load Balancer:


Tuesday, April 1, 2025

GCP - Compute Engine - part 1


 

Compute Engine:
Compute Engine is a computing and hosting service that lets you create and manage infrastructure. Google Compute Engine is the Infrastructure as a Service [IaaS] component of Google Cloud.
Machine series of Compute Engine:


E series is used for dev and test environments. It is very efficient, with the lowest cost per core.
* Virtual desktops
* Web and app servers with low latency
N series is for balanced price/performance workloads.
* CRM, BI or back office
* Data pipelines
* Databases
C series is used for high-performance applications.
* Game servers
* Ad servers
* Data analytics
* Media streaming and transcoding
* CPU-based AI/ML
H series has the highest compute per core.
* Game servers
* Media streaming and transcoding
* High-performance computing (HPC)
* CPU-based AI/ML
M series has the highest memory per core.
* Large databases
* Modeling and simulation
Z series has the highest storage per core.
* Data analytics
* Large horizontal database scale-out
G series is for inference and visualization with GPUs.
* Video transcoding
A series is for the highest-performance GPUs.
* Deep learning
* Recommendation models
Preemptible Instances
  * Instances offered at a discount (60 to 91%) in periods of excess Compute Engine capacity
  * Compute Engine might stop these instances when it needs the capacity back
  * Run for a maximum of 24 hours
  * No SLA
Spot instances are similar to preemptible instances, but they can run beyond 24 hours.
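As a hedged CLI sketch (hypothetical instance name), a preemptible instance is requested with the --preemptible flag:

gcloud compute instances create my-batch-vm --preemptible --zone=us-central1-a
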
Cloud Functions [Platform as a Service]
  * Serverless, lightweight compute service
  * It supports standalone functions that respond to events
  * Functions can be written using the JavaScript, Python 3 or Java runtimes
Cloud Run: [Platform as a Service]
  * Container-based serverless platform
  * Request-based autoscaling and scale to zero
  * Built-in traffic management
Cloud Storage:
Cloud storage in GCP is classified as follows:

Persistent Disks
* Durable, high-performance block storage for virtual machines
* Performance scales with the size of the disk and with the number of vCPUs on the VM
* Data is stored redundantly
Local SSD
* High-performance block storage for virtual machines
* Physically attached to the server
* Higher throughput and lower latency than persistent disks
* Each local SSD is 375 GB

Choose the storage depending upon the database:


Use case study of storage:
Cloud Storage - Unstructured data [videos, images, backups and archives]
Persistent Disk - Disks for virtual machines
Local SSD - Flash-optimized databases, hot caching layer for analytics
Filestore - Web content management, rendering and media processing
Bigtable - High-throughput applications such as big data, IoT
BigQuery - Big data analytics, business intelligence
Cloud Firestore - User profiles, cross-device data synchronization








Sunday, March 30, 2025

GCP - Introduction


 

GCP is a public cloud vendor like its competitors Azure and AWS. Customers are able to access server resources housed in Google's data centers around the world on a pay-per-use basis.

GCP offers a suite of computing services to do everything from cost management to data management, from delivering web and video over the web to AI and machine learning tools.

Google's global infrastructure provides 24x7 services around the world with high speed and reliability. GCP starts with a region, and within a region are availability zones. These availability zones are isolated from a single point of failure. Some resources, such as the HTTP global load balancer, are global and can receive requests from any of the Google Cloud edge locations and regions. Other resources, like storage, can be regional. The storage is distributed across multiple zones within a region for redundancy.
We need to select locations depending on the performance, reliability, scalability and security needs of the organization.

Plan to create a GCP setup:





Policies are inherited from the organization root folder, which acts as the parent of the policies within the organization.



Setting up the billing account is very important before starting the project. We need a Billing Administrator role to perform this task. We can set a budget at the project level or the billing account level.

Cloud Shell:
    GCP includes command line tools for Google Cloud products and services:
        gcloud - Main CLI for Google Cloud
        gsutil - Cloud Storage
        bq     - BigQuery

Syntax of gcloud:
gcloud + component + entity + operation + positional args + flags
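A hedged example mapping onto this pattern (hypothetical instance name): component = compute, entity = instances, operation = create, positional arg = my-vm, flag = --zone.

gcloud compute instances create my-vm --zone=us-central1-a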


Cloud Identity:
Role - Defines the permissions of each entity within the group/principal. To make a permission available to principals, including users, groups and service accounts, we need to assign the proper roles to those principals.

  • Policies are inherited in a top-down approach. There is no way to remove, at the resource level, a permission that was granted at the top level.
Different types of roles in GCP:
  • Basic Role - Owner, Editor and Viewer
  • Predefined Role - Service-specific roles [e.g., Pub/Sub Subscriber]
  • Custom Role - Based on a user-specified list of permissions

Service account:

We can create a service account to automate manual tasks. We can create a service account through the GUI or the gcloud CLI.
#gcloud compute instances create myinstance --service-account servicename

Create a Pub/Sub subscription that authenticates pushes with a service account:
#gcloud pubsub subscriptions create [subscription_name] --topic [Topic_name] --push-endpoint=[Cloud_Run_Service_URL] --push-auth-service-account=[serviceaccountname]@projectid.iam.gserviceaccount.com

Best practices of access management:
  •    Do not grant basic roles [Owner, Editor, Viewer]
  •    Have more than one organization admin
  •    Grant roles to Google groups instead of individuals
  •    Be cautious when granting the Service Account User role
  •    Regularly check Cloud Audit Logs and audit IAM policy changes.