Posts

Day 20 - Deploying an Amazon EKS Cluster Using Custom Terraform Modules

Introduction

In this project, I deployed a complete Amazon EKS environment using custom Terraform modules. The goal of this implementation was to understand how production-style Kubernetes infrastructure is organized using reusable Terraform modules instead of a single monolithic configuration file.

The deployment included:
- Custom VPC across 3 Availability Zones
- Public and private subnets
- NAT Gateway
- IAM roles for EKS
- Amazon EKS cluster
- Managed node groups
- Spot and On-Demand worker nodes
- IRSA and OIDC provider
- Kubernetes add-ons
- NGINX sample application deployment
- AWS Load Balancer integration

This project helped me better understand how Kubernetes networking, IAM, Terraform modules, and AWS managed services work together in real-world environments.

Architecture Diagram

Project Structure

day20-eks-custom-modules/
├── main.tf
├── variables.tf
├── outputs.tf
├── provider.tf
├── backend.tf
├── modules/
│   ├── vpc/
│   ├── iam/
│   ├── eks/
│   └── secrets-...
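To illustrate how the root configuration ties the custom modules together, here is a minimal sketch of a main.tf that calls the vpc, iam, and eks modules from the tree above. The input and output names (vpc_cidr, cluster_role_arn, private_subnet_ids, and so on) are illustrative assumptions about the module interfaces, not the exact variables used in my code.

# Root main.tf (sketch): each concern lives in its own reusable module
module "vpc" {
  source = "./modules/vpc"

  # Illustrative inputs: one /16 network spread across 3 Availability Zones
  vpc_cidr           = "10.0.0.0/16"
  availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

module "iam" {
  source = "./modules/iam"

  cluster_name = "day20-eks"
}

module "eks" {
  source = "./modules/eks"

  cluster_name     = "day20-eks"
  cluster_role_arn = module.iam.cluster_role_arn   # output assumed to be exposed by the iam module
  subnet_ids       = module.vpc.private_subnet_ids # worker nodes live in the private subnets
}

The point of this layout is that each module can be reused or swapped independently, while the root module only wires their inputs and outputs together.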

Day 19 - Understanding Terraform Provisioners with AWS EC2 and Nginx

Introduction

For Day 19 of my challenge, I explored Terraform provisioners using AWS EC2. In this demo, I used:
- local-exec
- file provisioner
- remote-exec

The goal was to:
- Deploy an EC2 instance
- Copy a shell script onto the server
- Install nginx automatically
- Validate the deployment from the browser

This exercise also helped me understand why HashiCorp considers provisioners a “last resort” approach in production environments.

Architecture Diagram

Types of Terraform Provisioners

local-exec

provisioner "local-exec" {
  command = "echo ${self.public_ip} >> inventory.txt"
}

file Provisioner

provisioner "file" {
  source      = "welcome.sh"
  destination = "/tmp/welcome.sh"
}

remote-exec

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/welcome.sh",
    "sudo /tmp/welcome.sh"
  ]
}

Terraform Deployment

Initialize Terraform

terraform init

Validate Terraform Con...
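For context, the file and remote-exec provisioners only work when Terraform can open an SSH session to the instance, so in practice they sit inside the aws_instance resource together with a connection block. The sketch below shows that wiring under stated assumptions: the AMI ID, key pair name, and private key path are placeholders, not my actual values.

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder Ubuntu AMI
  instance_type = "t2.micro"
  key_name      = "my-key"                # placeholder key pair name

  # How Terraform connects before running file/remote-exec
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/my-key.pem")
    host        = self.public_ip
  }

  # Copy the install script to the instance
  provisioner "file" {
    source      = "welcome.sh"
    destination = "/tmp/welcome.sh"
  }

  # Run the script to install nginx
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/welcome.sh",
      "sudo /tmp/welcome.sh"
    ]
  }

  # Runs on the machine executing Terraform, not on the EC2 instance
  provisioner "local-exec" {
    command = "echo ${self.public_ip} >> inventory.txt"
  }
}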

Day 18 - Serverless Image Processing with AWS Lambda, S3, and Terraform

For Day 18 of my AWS Terraform learning journey, I built a backend-only serverless image processing pipeline using Amazon S3, AWS Lambda, Lambda Layers, and Terraform. The idea was simple: I upload one image to an S3 bucket, and AWS automatically creates multiple processed versions of that image in another S3 bucket. There is no frontend in this project. There is no EC2 server. The workflow is completely event driven.

Architecture

The flow works like this:
1. I upload a sample image to the upload S3 bucket.
2. S3 sends an ObjectCreated event.
3. Lambda is triggered automatically.
4. Lambda uses the Pillow library to process the image.
5. Five generated image variants are saved into the processed S3 bucket.

Project Structure

The project has a simple structure.

day-18-image-processor/
├── sample.jpg
├── deploy.sh
├── destroy.sh
├── lambda/
├── scripts/
└── terraform/

The sample.jpg file is placed directly in the project root. This makes the test script simple because it always knows where to find the ...
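The event-driven trigger is the heart of this pipeline, so here is a minimal Terraform sketch of how the upload bucket can invoke the function on every ObjectCreated event. The resource names (upload, processor) are illustrative assumptions about what lives in the terraform/ directory, not the exact code I used.

# Allow S3 to invoke the image-processing Lambda
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.processor.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.upload.arn
}

# Fire the Lambda whenever a new object lands in the upload bucket
resource "aws_s3_bucket_notification" "upload_events" {
  bucket = aws_s3_bucket.upload.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.processor.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3]
}

The permission resource matters because the bucket notification silently fails to deliver events if S3 is not allowed to invoke the function.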