Sipencari Cloud
Project Sipencari: Cloud and Infrastructure
Sipencari is a discussion forum platform that connects users who post about lost or missing items, helping each other recover belongings such as pets, personal items, and other goods. In this post, I'll dive into the cloud architecture and infrastructure that powers Sipencari.
Note: This post focuses on the Cloud design & implementation.
Tech Stack Overview
Before diving into the architecture, let's briefly go over each component of our tech stack:
- Amazon Web Services (AWS): Our primary cloud provider, offering a suite of services for building and deploying scalable applications.
- EC2 (Elastic Compute Cloud): AWS's virtual server service, where I host our main application.
- RDS (Relational Database Service): AWS's managed database service, which I use for our PostgreSQL database.
- GitHub Actions: Our chosen CI/CD platform, automating our build, test, and deployment processes.
- S3 (Simple Storage Service): AWS's object storage service, used for storing static assets and user-generated content.
- NGINX: A high-performance web server and reverse proxy, handling incoming requests to our application.
- Docker: A platform for developing, shipping, and running applications in containers, ensuring consistency across different environments.
- Certbot: An automated tool for obtaining and renewing SSL/TLS certificates from Let's Encrypt, securing our HTTPS connections.
Cloud Architecture Overview
Our cloud infrastructure is built on Amazon Web Services (AWS), leveraging several key services to ensure scalability, reliability, and security.
Key Components
- GitHub: Our source code repository and version control system.
- Amazon EC2: Hosts our main application server.
- Amazon S3: Stores static assets and user-generated content.
- Amazon RDS: Manages our PostgreSQL database.
- Docker: Containerizes our application for consistent deployment.
- Certbot: Ensures our SSL certificates stay up to date.
Security Groups
I've implemented strict security groups to control inbound and outbound traffic to our EC2 instances. The sipencari-group security group is configured with the following rules (see the CLI sketch after this list):
- Inbound rules for HTTP (80), HTTPS (443), and SSH (22) ports
- Outbound rules as needed for application functionality
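For illustration, here is roughly how these inbound rules could be created with the AWS CLI; the security group ID and CIDR ranges are placeholders rather than values from our actual setup:

```bash
# Allow HTTP and HTTPS from anywhere (placeholder group ID)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# Allow SSH only from a trusted address range (placeholder CIDR)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 --cidr 203.0.113.0/24
```

Restricting SSH to a known address range instead of 0.0.0.0/0 keeps the management surface of the instance as small as possible.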
Amazon S3 Configuration
Our S3 bucket, named "sipencari", is set up with the following considerations:
- Block public access is enabled to ensure data privacy
- A bucket policy is in place to manage access to objects (a sketch follows this list)
- Folders are organized for different types of content (e.g., comments, uploads)
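As a sketch of the kind of bucket policy involved, the snippet below grants an application IAM role read/write access to the content folders; the account ID and role name are placeholders, not our real identifiers:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSipencariAppAccess",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/sipencari-app-role" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::sipencari/comments/*",
        "arn:aws:s3:::sipencari/uploads/*"
      ]
    }
  ]
}
```

Because block public access is enabled, objects are only reachable through explicitly granted principals like this one.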
Database Configuration
I use Amazon RDS with PostgreSQL as our database engine. The sipencaridb instance is configured with high availability and recoverability in mind (a provisioning sketch follows the list):
- Instance class: db.t3.micro
- Multi-AZ deployment for high availability
- Automated backups enabled
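A provisioning sketch with the AWS CLI; the credentials and storage size are placeholders:

```bash
# Create the sipencaridb PostgreSQL instance (placeholder credentials and storage)
aws rds create-db-instance \
  --db-instance-identifier sipencaridb \
  --db-instance-class db.t3.micro \
  --engine postgres \
  --allocated-storage 20 \
  --master-username sipencari_admin \
  --master-user-password '<strong-password>' \
  --multi-az \
  --backup-retention-period 7
```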
EC2 Instance Details
Our main application server runs on an EC2 instance with the following specifications:
- Instance ID: i-0895689564c674426
- AMI: Ubuntu 22.04
- Instance Type: t3.micro
- VPC: vpc-0539fcc5e49ffeec6
The server runs Nginx as the web server and reverse proxy in front of our Docker containers.
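Here is a sketch of the reverse-proxy configuration on that instance; the domain name is a placeholder and the certificate paths assume Certbot's default layout:

```nginx
# Redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name api.sipencari.example;
    return 301 https://$host$request_uri;
}

# Terminate TLS and proxy to the Dockerized Golang API
server {
    listen 443 ssl;
    server_name api.sipencari.example;

    # Certificates obtained and renewed by Certbot
    ssl_certificate     /etc/letsencrypt/live/api.sipencari.example/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.sipencari.example/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;   # The application container listens on 8080
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Because Certbot renews these certificates automatically, the HTTPS setup stays valid without manual intervention.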
Continuous Integration and Deployment
I use a CI/CD pipeline that integrates with our GitHub repository:
- Code is pushed to the develop branch
- GitHub Actions triggers our automated workflow
- Docker containers are updated with the latest changes
Application Containerization
I use Docker to containerize our Golang application.
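A minimal sketch of that Dockerfile, reconstructed from the steps described below; the binary name, build flags, and module file layout are assumptions:

```dockerfile
# Start from the official Golang 1.19 image
FROM golang:1.19

# Create and set the working directory
WORKDIR /app

# Copy the module files and download dependencies first to leverage layer caching
COPY go.mod go.sum ./
RUN go mod download

# Add the application source code
COPY . .

# Build the application
RUN go build -o sipencari-api .

# Run the compiled application
CMD ["./sipencari-api"]
```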
This Dockerfile:
- Starts from the official Golang 1.19 image
- Creates and sets the working directory to /app
- Copies and downloads the Go module dependencies
- Adds the application source code
- Builds the application
- Sets the command to run the compiled application
Docker Compose Configuration
I use Docker Compose to define and run our multi-container Docker application.
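A sketch of the API service definition from docker-compose.yml, matching the settings described below; the service name is an assumption:

```yaml
services:
  app:
    build: .                # Build using the Dockerfile in the current directory
    environment:
      - HTTP_PORT=8080      # Port the Golang application listens on
    ports:
      - "8080:8080"         # Map container port 8080 to host port 8080
```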
This configuration:
- Builds the application using the Dockerfile in the current directory
- Sets the HTTP_PORT environment variable to 8080
- Maps port 8080 from the container to port 8080 on the host
Continuous Deployment
I use GitHub Actions for continuous deployment to production.
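A sketch of such a workflow; the SSH action, secret names, and remote directory are assumptions rather than our exact configuration:

```yaml
name: Deploy to Production

on:
  push:
    branches: [master]      # Covers both direct pushes and merged pull requests
    paths-ignore:
      - '**.md'             # Skip deployments for documentation-only changes

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@master   # Pin to a released tag in practice
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/sipencari
            git pull origin master
            docker compose up -d --build
```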
This GitHub Actions workflow:
- Triggers on pull request merges to the master branch or direct pushes to master
- Ignores changes to markdown files
- Uses SSH to connect to our deployment server
- Pulls the latest changes from the master branch
- Rebuilds and restarts our Docker containers
Conclusion
This cloud infrastructure setup provides Sipencari with a robust, scalable, and secure environment. By leveraging AWS services and following best practices in cloud architecture, I ensure that our forum discussion platform can grow and adapt to user needs while maintaining high performance and reliability. Our carefully chosen tech stack, including NGINX, Docker, and Certbot, further enhances our application's performance, consistency, and security.