Posts

Day 17 - AWS Elastic Beanstalk Blue-Green Deployment with Terraform

Today I worked on Blue-Green Deployment using AWS Elastic Beanstalk and Terraform. The main goal was to understand how to release a new version of an application without taking production down. In Azure App Service, this is commonly done using deployment slots; in AWS Elastic Beanstalk, the same idea can be implemented by running two separate environments and swapping their environment URLs.

What I Built

In this project, I created two Elastic Beanstalk environments:

- The Blue environment represents production and runs application version 1.0.
- The Green environment represents staging and runs application version 2.0.

Both environments are created using Terraform. Each environment has its own Elastic Beanstalk setup with load balancing, auto scaling, health checks, and application version deployment.

Architecture

Why Blue-Green Deployment Matters

In a normal deployment, we update the existing application in place. If something goes wrong during the deployment, users may see downtime or er...
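The two-environment setup described above can be sketched in Terraform roughly like this. This is a minimal, hypothetical sketch: the application name, solution stack string, and version labels are placeholders, and the `aws_elastic_beanstalk_application_version` resources they reference are omitted.

```hcl
resource "aws_elastic_beanstalk_application" "app" {
  name = "demo-app"
}

# Blue = production, running application version 1.0.
# version_label would point at an aws_elastic_beanstalk_application_version
# resource (not shown here).
resource "aws_elastic_beanstalk_environment" "blue" {
  name                = "demo-app-blue"
  application         = aws_elastic_beanstalk_application.app.name
  solution_stack_name = "64bit Amazon Linux 2023 v4.3.0 running Python 3.11" # placeholder
  version_label       = "v1-0"
}

# Green = staging, running application version 2.0.
resource "aws_elastic_beanstalk_environment" "green" {
  name                = "demo-app-green"
  application         = aws_elastic_beanstalk_application.app.name
  solution_stack_name = "64bit Amazon Linux 2023 v4.3.0 running Python 3.11" # placeholder
  version_label       = "v2-0"
}
```

Once Green is verified healthy, the release is completed by swapping the two environments' CNAMEs (for example with `aws elasticbeanstalk swap-environment-cnames`), so traffic moves to the new version without an in-place redeploy.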

Day 16 - Managing AWS IAM Users with Terraform using CSV

Introduction

Today I worked on managing AWS IAM users using Terraform with a CSV-driven approach. Instead of creating users manually in the AWS Console, I treated the CSV file as the source of truth. Terraform reads this file, creates users, assigns tags, and dynamically places them into groups. This felt very similar to database thinking: each row in the CSV behaves like a table row, and Terraform applies logic on top of it.

Architecture Overview

What I Built

- IAM users created from a CSV file
- IAM groups for logical organization
- Dynamic group membership using filters
- Tags used as metadata to drive logic

Step 1: Using CSV as a Data Source

The users.csv file acts as a structured dataset. Example:

```
first_name,last_name,department,job_title
Michael,Scott,Education,Regional Manager
Dwight,Schrute,Sales,Assistant to the Regional Manager
```

Each row represents one user.

Step 2: Reading CSV in Terraform

```hcl
locals {
  users = csvdecode(file("${path.module}/users.csv"))
}
```

csvdecode() converts...
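The decoded rows can then drive user creation with `for_each`, and a tag-style filter can build group membership dynamically. This is a minimal sketch assuming the CSV columns shown above; the derived user names, group name, and filter condition are illustrative, not the post's actual code.

```hcl
locals {
  users = csvdecode(file("${path.module}/users.csv"))
}

# One IAM user per CSV row, keyed by a unique name derived from the row.
resource "aws_iam_user" "this" {
  for_each = { for u in local.users : lower("${u.first_name}.${u.last_name}") => u }

  name = each.key
  tags = {
    Department = each.value.department
    JobTitle   = each.value.job_title
  }
}

resource "aws_iam_group" "sales" {
  name = "sales"
}

# Dynamic membership: filter users into the group by a CSV column value.
resource "aws_iam_group_membership" "sales" {
  name  = "sales-membership"
  group = aws_iam_group.sales.name
  users = [
    for u in local.users : lower("${u.first_name}.${u.last_name}")
    if u.department == "Sales"
  ]
}
```

Because `for_each` is keyed by a stable derived name rather than a list index, adding or removing a row in the CSV only touches that one user.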

Day 15 - Cross Region VPC Peering with Terraform

There’s something powerful about watching two completely separate networks start talking to each other… quietly, privately, without the internet even noticing. Today’s build was exactly that. I created two VPCs in different AWS regions and connected them using VPC peering, allowing EC2 instances to communicate using private IP addresses.

Architecture

Here is the architecture I implemented:

Simple Flow

User → SSH → EC2 (Primary VPC) → Private Network → EC2 (Secondary VPC)

What I Built

I created:

- Two VPCs in different regions
- One public subnet in each VPC
- Internet gateways for both VPCs
- Route tables with peering routes
- A cross-region VPC peering connection
- Two EC2 instances with Apache installed
- Security groups allowing SSH, ICMP, and TCP

Step 1: Initialize Terraform

I started by initializing Terraform.

```
terraform init
```

Terminal showing "Terraform has been successfully initialized".

Step 2: Review the Execution Plan

```
terraform plan
```

This step shows everything Terraform is going to create. Plan outp...
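The cross-region part of this build can be sketched with two aliased AWS providers, a requester/accepter pair, and a peering route. This is a hedged sketch: the regions, CIDR block, and the referenced `aws_vpc`/`aws_route_table` resources are placeholders for the post's actual config.

```hcl
# Two providers, one per region (aliases and regions are illustrative).
provider "aws" {
  alias  = "primary"
  region = "us-east-1"
}

provider "aws" {
  alias  = "secondary"
  region = "eu-west-1"
}

# Requester side, created in the primary region.
resource "aws_vpc_peering_connection" "this" {
  provider    = aws.primary
  vpc_id      = aws_vpc.primary.id
  peer_vpc_id = aws_vpc.secondary.id
  peer_region = "eu-west-1"
}

# Accepter side, created in the secondary region.
resource "aws_vpc_peering_connection_accepter" "this" {
  provider                  = aws.secondary
  vpc_peering_connection_id = aws_vpc_peering_connection.this.id
  auto_accept               = true
}

# Route from the primary VPC to the secondary VPC's CIDR via the peering connection.
resource "aws_route" "to_secondary" {
  provider                  = aws.primary
  route_table_id            = aws_route_table.primary.id
  destination_cidr_block    = "10.1.0.0/16" # secondary VPC CIDR (illustrative)
  vpc_peering_connection_id = aws_vpc_peering_connection.this.id
}
```

A matching return route is needed in the secondary VPC's route table; without routes on both sides, the peering connection exists but private traffic never flows.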

Day 14 - Static Website Hosting using Terraform

Today I worked on my first mini project in my AWS Terraform journey. The goal was to deploy a static website using S3 and CloudFront. Instead of manually creating resources in AWS, I used Terraform to automate the entire setup.

Architecture

User requests first hit CloudFront. CloudFront securely fetches content from a private S3 bucket using Origin Access Control and delivers it globally.

User → CloudFront → Private S3 Bucket

Project Setup

I organized my Terraform code into multiple files for clarity. Variables, provider configuration, and main resources are separated.

S3 Bucket

The S3 bucket stores the website files. Public access is completely blocked. This ensures that the bucket is not exposed directly to the internet.

Uploading Files

Terraform automatically uploads all files from the local www folder. This includes index.html, style.css, and script.js. I used a loop with fileset to avoid writing multiple resource blocks.

CloudFront Distribution

CloudFront acts ...
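The fileset-based upload loop mentioned above can be sketched like this. A minimal sketch: the bucket resource, the `www` path, and the MIME-type map are assumptions standing in for the post's actual code.

```hcl
locals {
  site_files = fileset("${path.module}/www", "**")
  mime_types = {
    html = "text/html"
    css  = "text/css"
    js   = "application/javascript"
  }
}

# One S3 object per local file, instead of one resource block per file.
resource "aws_s3_object" "site" {
  for_each = local.site_files

  bucket = aws_s3_bucket.site.id
  key    = each.value
  source = "${path.module}/www/${each.value}"

  # etag forces re-upload when the file content changes.
  etag = filemd5("${path.module}/www/${each.value}")

  # Map the file extension to a Content-Type so browsers render it correctly.
  content_type = lookup(
    local.mime_types,
    reverse(split(".", each.value))[0],
    "application/octet-stream"
  )
}
```

Setting `content_type` matters here: without it, S3 serves everything as `binary/octet-stream` and CloudFront would prompt a download instead of rendering the page.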

Day 13 - Terraform Data Sources

Today’s learning felt like a shift from “building everything” to working intelligently with what already exists. Until now, most of my Terraform work was about creating infrastructure. But in real-world cloud environments, things are rarely that simple: networks, security layers, and shared resources are often already in place, managed by different teams. This is where Terraform data sources come in.

What Are Terraform Data Sources

Terraform data sources allow us to read existing infrastructure instead of creating it. A simple way to think about it:

- Resources → create and manage infrastructure
- Data sources → read and reference existing infrastructure

This distinction is small in syntax, but huge in real-world usage.

Scenario for This Demo

In this lab, I simulated a real-world setup:

- A shared VPC already exists
- A shared subnet already exists
- My job is to launch an EC2 instance inside that network

The key rule: I should NOT recreate the VPC or subnet. I sho...
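The scenario above can be sketched with `data` blocks for the shared network and a `resource` block only for the instance. This is a hedged sketch: the tag values, AMI filter, and instance type are illustrative assumptions, not the lab's actual values.

```hcl
# Look up the shared network by tag instead of recreating it.
data "aws_vpc" "shared" {
  tags = {
    Name = "shared-vpc"
  }
}

data "aws_subnet" "shared" {
  vpc_id = data.aws_vpc.shared.id
  tags = {
    Name = "shared-subnet"
  }
}

# Resolve a current Amazon Linux 2023 AMI (filter is illustrative).
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

# The only thing Terraform manages here is the instance; the VPC and
# subnet are read, referenced, and left untouched.
resource "aws_instance" "app" {
  ami           = data.aws_ami.al2023.id
  instance_type = "t3.micro"
  subnet_id     = data.aws_subnet.shared.id
}
```

Because the VPC and subnet appear only as `data` blocks, `terraform destroy` on this configuration removes the instance but never the shared network.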