Wednesday, April 2, 2025

GCP - VPC - part 2


VPC - Virtual Private Cloud

GCP classifies VPC networks into two modes.

Auto Mode: This is the default VPC mode in GCP. The network is configured automatically and the firewall rules are pre-configured as well. This mode should not be used in a production environment.

Custom Mode: IP allocation and firewall setup need to be taken care of by us. This is the safe and secure setup for a production environment.

Subnets play a vital role in a VPC network.



Instances in Subnets A and B can communicate across regions through the internal network because both subnets belong to the same VPC. Subnets C and D need to communicate through the external network even though both belong to the same region, since they are in different VPC networks.



Firewall Rule Configuration:


Load Balancing:


Application Load Balancer:

Proxy Load Balancer:


Tuesday, April 1, 2025

GCP - Compute Engine - part 1


Compute Engine:
Compute Engine is a computing and hosting service that lets you create and manage infrastructure. Google Compute Engine is the Infrastructure as a Service [IaaS] component of Google Cloud.
Compute Engine machine series and their typical use cases:


E series is used for dev and test environments. It is very efficient, with the lowest cost per core.
* Virtual desktops
* Web and app servers with low latency
N series is for balanced price/performance workloads.
* CRM, BI or back office
* Data pipelines
* Databases
C series is used for high-performance applications.
* Game servers
* Ad servers
* Data analytics
* Media streaming and transcoding
* CPU-based AI/ML
H series offers the highest compute per core.
* Game servers
* Media streaming and transcoding
* High performance computing (HPC)
* CPU-based AI/ML
M series offers the highest memory per core.
* Large databases
* Modeling and simulation
Z series offers the highest storage per core.
* Data analytics
* Large horizontal database scale-out
G series is for inference and visualization with GPUs.
* Video transcoding
A series offers the highest GPU performance.
* Deep learning
* Recommendation models
Preemptible Instances
  * Instances offered at a discount (60 to 91%) during periods of excess Compute Engine capacity
  * Compute Engine might stop these instances when it needs the capacity back
  * Run for a maximum of 24 hours
  * No SLA
Spot instances are similar to preemptible instances, but they are not limited to a 24-hour runtime.
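Because a preemptible or Spot VM can be stopped at any time, an application can poll the GCE metadata server to learn whether the instance has been preempted. A minimal sketch, with the HTTP call abstracted behind a callable so the logic can be exercised off-GCE (`fetch_fn` is a hypothetical helper, not a Google API; the URL is the metadata server's `preempted` endpoint):

```python
# Sketch: detect preemption on a GCE VM via the metadata server.
# fetch_fn is a hypothetical callable that performs an HTTP GET with the
# required "Metadata-Flavor: Google" header and returns the body as a string.
PREEMPTED_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                 "instance/preempted")

def is_preempted(fetch_fn):
    """Return True if the metadata server reports the VM was preempted."""
    return fetch_fn(PREEMPTED_URL).strip() == "TRUE"

# Off-GCE demo with stubbed fetchers:
print(is_preempted(lambda url: "TRUE"))   # instance was preempted
print(is_preempted(lambda url: "FALSE"))  # instance still running
```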
Cloud Functions [Platform as a Service]
  * Serverless, lightweight compute service
  * Supports standalone functions that respond to events
  * Functions can be written using JavaScript (Node.js), Python 3 or Java runtimes
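A function deployed to such a service is just a plain function with a known signature. A minimal sketch of an HTTP-style handler in Python 3 (`FakeRequest` is a hypothetical local stand-in for the request object the platform would pass in):

```python
# Sketch of a Cloud Functions-style HTTP handler in Python.
# The platform would pass in a request object; here we only rely on an
# `args` mapping so the function can be exercised locally with a stub.
def hello_http(request):
    """Respond to an HTTP request with a greeting."""
    name = request.args.get("name", "World")
    return f"Hello, {name}!"

# Local stub standing in for the real request object:
class FakeRequest:
    def __init__(self, args):
        self.args = args

print(hello_http(FakeRequest({"name": "GCP"})))  # Hello, GCP!
print(hello_http(FakeRequest({})))               # Hello, World!
```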
Cloud Run: [Platform as a Service]
  * Container-based serverless platform
  * Request-based autoscaling, including scale to zero
  * Built-in traffic management
Cloud Storage:
Storage in GCP is classified as follows:

Persistent Disks
  * Durable, high-performance block storage for virtual machines
  * Performance scales with the size of the disk and the number of vCPUs on the VM
  * Data is stored redundantly
Local SSD
  * High-performance block storage for virtual machines
  * Physically attached to the server
  * Higher throughput and lower latency than persistent disks
  * Each local SSD is 375 GB

Choosing storage based on the database:


Use cases by storage service:
Cloud Storage - Unstructured data (videos, images, backups and archives)
Persistent Disk - Disks for virtual machines
Local SSD - Flash-optimized databases, hot caching layer for analytics
Filestore - Web content management, rendering and media processing
Bigtable - High-throughput applications such as big data and IoT
BigQuery - Big data analytics, business intelligence
Cloud Firestore - User profiles, cross-device data synchronization








Sunday, March 30, 2025

GCP - Introduction


GCP is a public cloud vendor that competes with Azure and AWS. Customers can access server resources housed in Google's data centers around the world on a pay-per-use basis.

GCP offers a suite of computing services covering everything from cost management to data management, delivering web and video content over the internet, and AI and machine learning tools.

Google's global infrastructure provides 24x7 service around the world with high speed and reliability. GCP starts with a region, and within a region are availability zones. These availability zones are isolated from single points of failure. Some resources, such as the HTTP global load balancer, are global and can receive requests from any of the Google Cloud edge locations and regions. Other resources, like storage, can be regional; regional storage is distributed across multiple zones within a region for redundancy.
We need to select locations depending on the performance, reliability, scalability and security needs of your organization.

Plan to create a GCP setup:





Policies are inherited from the organization root node, which acts as the parent of all policies within the organization.



Setting up the billing account is very important before starting a project. We need the Billing Administrator role to perform this task. We can set a budget at the project level or at the billing account level.

Cloud Shell:
    GCP includes command line tools for Google Cloud products and services:
        gcloud - main CLI for Google Cloud
        gsutil - Cloud Storage
        bq - BigQuery

Syntax of gcloud:
gcloud + component + entity + operation + positional args + flags
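To make the pattern concrete, a small Python sketch that assembles a command string from those parts (`build_gcloud_command` is a hypothetical helper for illustration, not a Google API; the instance name and zone are placeholders):

```python
# Sketch: assemble a gcloud invocation from its parts:
# gcloud + component + entity + operation + positional args + flags
def build_gcloud_command(component, entity, operation,
                         positional=(), flags=()):
    parts = ["gcloud", component, entity, operation,
             *positional, *flags]
    return " ".join(parts)

cmd = build_gcloud_command("compute", "instances", "create",
                           positional=["myinstance"],
                           flags=["--zone=us-central1-a"])
print(cmd)  # gcloud compute instances create myinstance --zone=us-central1-a
```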


Cloud Identity:
Role - A role defines a set of permissions. To make permissions available to principals, including users, groups and service accounts, we need to assign the proper roles to those principals.

  • Policies are inherited top-down; there is no way to remove at the resource level a permission that was granted at a higher level.
Different types of Roles in GCP:
  • Basic Role - Owner, Editor and Viewer
  • Predefined Role - Service specific role [pub/sub subscriber]
  • Custom Role - Based on user specified list of permission

Service account:

We can create a service account to automate manual tasks, either through the GUI or the gcloud CLI.
#gcloud compute instances create myinstance --service-account servicename

Create a Pub/Sub push subscription that authenticates with a service account:
#gcloud pubsub subscriptions create [subscription_name] --topic [topic_name] --push-endpoint=[Cloud_Run_Service_URL] --push-auth-service-account=[service_account_name]@[project_id].iam.gserviceaccount.com

Best practices for access management:
  •    Do not grant basic roles [Owner, Editor, Viewer]
  •    Have more than one organization admin
  •    Grant roles to Google groups instead of individuals
  •    Be cautious when granting the Service Account user role
  •    Regularly check Cloud Audit logs and audit IAM policy changes.








Friday, March 28, 2025

Large Language Model [LLM] - Introduction



LLM stands for Large Language Model. It is a deep learning model trained on massive amounts of text data to understand and generate human language, enabling tasks like text generation and translation. LLMs are often built on "Transformer" models, neural networks that can process relationships within language.


Reasoning LLMs


Traditional LLM workflow



In the traditional LLM workflow, a refined dataset feeds the pretraining stage. The pretrained model is then fine-tuned to produce more precise outputs, and human feedback is collected to correct any mismatches with the fine-tuned model.

Traditional LLMs
  • Direct pattern based prediction
  • Quick but less reliable on complex tasks
  • No explicit reasoning steps

Reasoning LLMs:
  • Language models designed for complex, multi-step problems
  • Break down tasks into logical sub-tasks
  • Generate intermediate reasoning steps, or "thought processes"
Key Capabilities of Reasoning LLMs:
1) Chain-of-Thought Reasoning
        Internal dialogue approach
        step-by-step problem solving
2) Self-Consistency
        Verifies its own answers
        Revisits problematic solutions
3) Structured Outputs
        Organized reasoning steps
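Self-consistency is commonly implemented by sampling several independent reasoning chains and keeping the majority answer. A minimal sketch of that voting step (the sampled answers below are made-up illustrations, not real model output):

```python
from collections import Counter

# Sketch: the voting step of self-consistency. In practice each answer
# would be the final answer of an independently sampled chain-of-thought
# completion; here the samples are hard-coded for illustration.
def majority_answer(answers):
    """Return the most common final answer among sampled reasoning chains."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

samples = ["42", "42", "41", "42", "40"]  # final answers from 5 sampled chains
print(majority_answer(samples))  # 42
```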

Practical Applications of Reasoning LLMs
Data Analysis
  * Medical diagnostics
  * Complex data interpretation
  * Anomaly detection
Background Processing
  * Batch processing workflows
  * Overnight analysis jobs
Evaluation Tasks
  * LLM as judge
  * Quality assessment
  * Verification workflows
Limitations of Reasoning LLMs
Performance Trade-offs
* Increased latency: the extended thinking process leads to significantly longer response times
* Higher resource requirements: reasoning models often require more computational resources
* Cost implications: more tokens and processing time translate to higher operational costs
DeepSeek:

    DeepSeek applied supervised fine-tuning to refine the models' capabilities. This involved training on datasets containing reasoning and non-reasoning tasks. Notably, reasoning data was generated by specialized "expert models" trained for specific domains such as mathematics, programming, and logic. These expert models were developed through supervised fine-tuning on both original responses and synthetic data generated by internal models like DeepSeek-R1-Lite. The use of expert models allowed DeepSeek to generate high-quality synthetic reasoning data to enhance the primary model's performance.






Sunday, March 9, 2025

Terraform - Part 1


Terraform Installation

yum install -y yum-utils shadow-utils
yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
yum -y install terraform
terraform version
terraform -help
terraform -help plan

Create AWS user for the terraform setup

Create a user in AWS:

1) Login into Aws console

2) Navigate into IAM

 

3) Click on create user button



4) Set a permission of new user

 


5) Click on the created user; we can see the option to create an access key for that specific user.

 



6) It will prompt for the use case of your requirement.

 


7) Choose the Command Line Interface option 

8) Take note of the access key and secret access key. We need to provide these parameters to Terraform when automating the infrastructure.

 



Friday, March 7, 2025

Terraform Introduction



Terraform Introduction:
Terraform helps users build, manage and change infrastructure through code.

Terraform Vs Ansible

IaC [Infrastructure as Code]
  • Manage infrastructure with the help of code
  • The code is used to provision resources, including virtual machines such as instances on AWS, and network infrastructure including gateways, etc.
  • You write and execute the code to define, deploy, update and destroy your infrastructure
  • Code is tracked in a SCM repository
  • Automation makes the provisioning process consistent, repeatable and updates fast and reliable.
  • Ability to programmatically deploy and configure resources
  • IAC standardize your deployment workflow
  • IAC can be shared, reused and versioned.
• IaC Tools:
    1. Terraform
    2. CloudFormation
    3. Azure Resource Manager
    4. Google Cloud Deployment Manager
Terraform Overview:
Terraform is an Infrastructure Building Tool (Provisioning Infrastructure)
Written in Go Language
Integrates with configuration management and provisioning tools like Chef, Puppet and Ansible.
The extension of the configuration file is .tf or .tf.json (JSON-based)
Terraform maintains state in a file with the .tfstate extension
Deployment of infrastructure happens with a push-based approach (no agent needs to be installed on remote machines)
Terraform is immutable: a resource can't be changed after it's created, and destroying and recreating it is the only option.
Terraform uses a declarative method: a declarative language describes what you're trying to achieve without instructing how to do it.
Terraform is idempotent: if what you ask for is already present, it applies no changes and exits without modifying anything.
Providers are services or systems that Terraform interacts with to build infrastructure on.
Current Terraform Version is 1.11
Terraform is cloud-agnostic but requires a specific provider for the cloud platform
Single Terraform configuration file can be used to manage multiple providers
Terraform can simplify both management and orchestration of deploying large-scale, multi-cloud infrastructure
Terraform is designed to work with both public cloud platforms and on-premises infrastructure (private cloud)
Terraform Workflow
        1. Scope 2. Author 3. Initialize 4. Plan 5. Apply
 
Configuration file of Terraform:
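As an illustration of what such a file looks like, a minimal sketch of an AWS provider block and a single instance (the region, AMI ID, instance type and tag values are placeholder assumptions, not values from this setup):

```hcl
# main.tf - minimal sketch; region, AMI ID and tag values are placeholders.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0abcdef1234567890" # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-example"
  }
}
```

Running `terraform init`, `terraform plan` and `terraform apply` against a file like this follows the workflow steps listed above.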










Monday, March 3, 2025

Troubleshooting of sendmail issues in Linux machine



Troubleshooting Sendmail issues:

Sendmail servers can produce the same wide range of problems as any Unix server, but most day-to-day Sendmail issues fall into just a few categories related to mail connectivity, Sendmail relay configuration and SMTP auth issues.

1) Email not deliverable:

We can validate whether a user's email ID or the local domain is deliverable from the system.

#sendmail -bv usernameEmailID
#sendmail -bv root@localdomain

2) Check the status of the sendmail service. The mail server may go down if the server is under a high workload.
#systemctl status sendmail
Start sendmail if the service is down:
#systemctl start sendmail

3) Check the sendmail relay server details in the sendmail.cf configuration file.
#grep ^DS /etc/mail/sendmail.cf
The relay server must be resolvable from the host; otherwise the mail request cannot be resolved by DNS and sendmail throws the "Transient parse error -- message queued for future delivery" error.

4) The mqueue fills up when mail cannot be delivered and remains in the queue.
We have seen the /var file system reach 100% utilization because of this problem.
    i) Check the current status of the mail queue
        #mailq
    ii) Try to deliver the pending mail queue
        #sendmail -v -q
    iii) Stop the sendmail service
        #systemctl stop sendmail
    iv) Move or delete the mqueue contents
        #mv /var/spool/mqueue/* /temporary_location
      v) Start the sendmail service
        #systemctl start sendmail

5) Validate sendmail functionality
       i) Send a test mail from the system
            # echo "This is test email" | mailx -v -s "Test mail subject" -S smtp="smtpserver:port" "usermail_ID"
      ii) Open another terminal and monitor the mail thread
            #tail -f /var/log/maillog
    
Sendmail Log location : /var/log/maillog

    iii) Check sendmail connectivity from the system
            #nc -vz sendmailserver 25
    iv) Check that sendmail is accepting connections
            #ps auxw | grep [a]ccepting
    v) Check that the system is listening on the sendmail port
            #netstat -antp | grep sendmail
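When working through these checks, it helps to summarize delivery results from the maillog. A small Python sketch that tallies sendmail `stat=` values from log lines (the sample lines are made up for illustration; real maillog entries carry more fields):

```python
import re
from collections import Counter

# Sketch: tally delivery status ("stat=...") values from sendmail log lines.
# The sample lines below are illustrative, not real maillog output.
STAT_RE = re.compile(r"stat=([A-Za-z]+)")

def count_delivery_stats(log_lines):
    """Count the leading word of each stat= field across maillog lines."""
    counts = Counter()
    for line in log_lines:
        m = STAT_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    "Apr  2 10:00:01 host sendmail[123]: x1: to=<a@example.com>, stat=Sent",
    "Apr  2 10:00:05 host sendmail[124]: x2: to=<b@example.com>, stat=Deferred: Connection refused",
    "Apr  2 10:00:09 host sendmail[125]: x3: to=<c@example.com>, stat=Sent",
]
print(count_delivery_stats(sample))  # Counter({'Sent': 2, 'Deferred': 1})
```

A spike in Deferred counts points back to the relay and DNS checks in steps 3 and 4 above.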







