Professional Frontend Deployment Pipelines: From Code to Production



Introduction

When building frontend applications, many developers stop at pushing code to GitHub or deploying via platforms like Vercel or Netlify. While these tools are powerful, they abstract away critical engineering practices used in professional and enterprise environments.

In real-world software companies, deploying a frontend application involves much more than a single click. It requires a well-structured CI/CD pipeline, robust source control management, reliable artifact storage, and intelligent deployment strategies that ensure reliability, scalability, and minimal downtime.

This article breaks down the full lifecycle of a frontend application, from code commit to production deployment, with a focus on TypeScript-based React apps, and explains how mature engineering teams handle deployments at scale.

Source Control & Git Workflows

Every deployment starts with code in a version-controlled repository. Most companies use Git, hosted on platforms like GitHub, GitLab, or Azure DevOps.

How teams organize their branching strategy depends on the size and maturity of the organization:

Trunk-Based Development

[Diagram: trunk-based development with short-lived branches Feature A, Feature B, and Feature C, each merged directly into main]

Simple, fast, and ideal for CI/CD, but requires strong automation.


Feature Branch Workflow

[Diagram: from main, create feature/user-auth (commits: Login UI, API Integration), open a pull request, pass code review + CI tests, and merge into main; then create feature/payment (commits: Stripe Setup, Checkout Flow), open a pull request, pass code review + QA, and merge into main]

Each feature is isolated and reviewed before merging into main.

GitFlow (Enterprise)

[Diagram: develop branches into feature/a and feature/b; releases v1.0 and v1.1 are cut from develop and merged into main; a hotfix branches from main and merges back into both main and develop]

From develop we branch into feature/a, which is merged back via a PR and shipped in release v1.0; the release then merges into main and is tagged v1.0. When something goes wrong, a hotfix branch is cut from main and merged back into both main and develop. Work then continues on feature/b, which ships in release v1.1.

Complex but structured; supports parallel development and release management.

A pull request (PR) is not just a code review mechanism; it’s a gate that ensures code quality, test coverage, and security checks before integration. Automated pipelines are typically triggered on PR creation, running linters, type checks, and unit tests.

Key takeaway: Your code must pass automated checks before being merged; this is the foundation of Continuous Integration (CI).
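As a sketch, these PR gates are usually wired up in the CI platform's config. A minimal GitHub Actions workflow, assuming an npm-based project with `lint`, `type-check`, and `test` scripts defined in package.json (script names are illustrative), might look like:

```yaml
name: pr-checks
on:
  pull_request:
    branches: [main]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint        # ESLint
      - run: npm run type-check  # tsc --noEmit
      - run: npm test            # Jest
```

Because the job is tied to `pull_request`, a red check blocks the merge button, which is exactly the gate described above.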

Build Pipeline: From TypeScript to Artifact

Once code is merged into the main branch, the build pipeline kicks in. This is where your human-readable TypeScript and JSX are transformed into optimized, production-ready assets.

🔧 Build Pipeline Steps (in order)

[Diagram: Source Code → Lint → pass? → Unit Tests → pass? → TS Compile → Bundle → Optimize Assets → Artifact; a failed check at any gate 🛑 fails the pipeline]

For a modern TypeScript + React application, the build process includes:

  1. Linting (ESLint)
    → Catch syntax errors and enforce code style.

  2. Unit Testing (Jest)
    → Run tests; fail fast if any fail.

  3. TypeScript Compilation (tsc)
    → Transpile .ts files to JavaScript.

  4. Bundling & Optimization (Webpack/Vite)
    → Minify, split chunks, compress images.

  5. Generate Artifact
    → Output: dist/ folder with:

    • index.html
    • main.js, vendor.js
    • styles.css
    • assets/ (images, fonts)

Best Practice: Run linting and unit tests before compilation. Fail fast to avoid wasting time on expensive build steps.
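The fail-fast ordering can be sketched as a tiny pipeline runner; the step names are hypothetical, and each step stands in for a real command like `eslint` or `jest`:

```typescript
type Step = { name: string; run: () => boolean };

// Runs steps in order and stops at the first failure, so cheap checks
// (lint, unit tests) gate the expensive ones (compile, bundle).
function runPipeline(steps: Step[]): { ok: boolean; failedAt?: string } {
  for (const step of steps) {
    if (!step.run()) return { ok: false, failedAt: step.name };
  }
  return { ok: true };
}

// Example: unit tests fail, so compile and bundle never run.
const result = runPipeline([
  { name: "lint", run: () => true },
  { name: "unit-tests", run: () => false },
  { name: "ts-compile", run: () => true },
  { name: "bundle", run: () => true },
]);
// result: { ok: false, failedAt: "unit-tests" }
```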

This entire sequence runs inside a CI platform such as GitHub Actions, GitLab CI/CD, CircleCI, or Jenkins. These systems execute the pipeline in isolated environments, ensuring consistency across builds.

Artifact Repositories: Why They Matter

After a successful build, the output artifact should be stored in an artifact repository, a versioned storage system for deployable packages.

Artifact Repository Flow

[Diagram: CI Pipeline → Build Artifact (dist/ folder, bundle) → Store in Artifact Repository (GitHub Packages, S3, Nexus) → Versioned Artifact (tag: v1.0.0-commit-a1b2c3d) → Available for Deployment: staging environment, production environment, or rollback target]

Examples include GitHub Packages, an S3 bucket, and Sonatype Nexus.
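A version tag like the `v1.0.0-commit-a1b2c3d` above can be derived from the package version plus the commit SHA; a minimal sketch (the function name is illustrative):

```typescript
// Builds an immutable artifact tag from the release version and commit SHA,
// e.g. "1.0.0" + "a1b2c3d9f0e8" -> "v1.0.0-commit-a1b2c3d".
function buildArtifactTag(version: string, commitSha: string): string {
  if (!/^\d+\.\d+\.\d+$/.test(version)) {
    throw new Error(`Invalid semver: ${version}`);
  }
  return `v${version}-commit-${commitSha.slice(0, 7)}`;
}

console.log(buildArtifactTag("1.0.0", "a1b2c3d9f0e8"));
// → v1.0.0-commit-a1b2c3d
```

Embedding the commit SHA makes every artifact traceable back to the exact source that produced it.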

🔄 Rollback Without an Artifact Repo?

🔴 Broken Deploy → Revert Commit in Git → Trigger Rebuild Pipeline → Run Lint, Test, Compile, Bundle → Generate New Artifact → Redeploy to Production → Service Restored (⏱️ takes 5–10+ minutes)

🔄 Rollback With an Artifact Repo?

🔴 Broken Deploy → Deploy Previous Version → Service Restored in Seconds

Without an artifact repository, rolling back means re-running the entire build pipeline, causing delays and potential downtime.
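With versioned artifacts, rollback reduces to redeploying the previous entry in the deploy history. A minimal sketch, assuming deploys are recorded oldest-first:

```typescript
// Given the deploy history (oldest first), returns the artifact tag to
// roll back to: the one deployed immediately before the current version.
function rollbackTarget(history: string[]): string {
  if (history.length < 2) {
    throw new Error("No previous version to roll back to");
  }
  return history[history.length - 2];
}

const history = [
  "v1.0.0-commit-aaa1111",
  "v1.1.0-commit-bbb2222",
  "v1.2.0-commit-ccc3333", // currently live, and broken
];
console.log(rollbackTarget(history)); // → v1.1.0-commit-bbb2222
```

No rebuild happens here: the returned tag is fetched from the artifact repository and deployed as-is.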

Deployment Pipeline: Staging and Production

With the artifact stored, the deployment pipeline takes over. This stage moves the application to its target environment.

Deployment Pipeline Diagram

[Diagram: Artifact Repository → Retrieve Version (latest or tagged, e.g., v1.2.3-commit-a1b2c3d) → Deploy to Staging → Run E2E Tests (Cypress / Playwright) → Manual QA / Product Review → Approve for Production? If yes, Deploy to Production; if no, Fix & Rebuild. Artifacts can also be deployed to per-PR feature environments (e.g., PR-123.app.com) for preview]

Common destinations include CDN-backed static hosting for SPAs and, for server-rendered apps, the compute options covered below (VMs, containers, serverless, PaaS).

Static Hosting Architecture (SPA)

[Diagram: User → CDN (CloudFront / Cloudflare); a cache hit is served from the edge, while a cache miss falls through to the S3 bucket holding the static files (HTML, JS, CSS, images), which is where CI/CD uploads the artifact]

Fast, scalable, cacheable. Ideal for SPAs.

End-to-End Testing

Run tools like Cypress or Playwright after deployment to staging:

npx playwright test

Tip: Run E2E tests on staging after deployment to catch integration issues early.

Static vs. Dynamic Hosting

Understanding the difference between static and dynamic content is essential for choosing the right hosting model.

| Static Assets | Dynamic Content |
| --- | --- |
| HTML, CSS, JS, images, fonts | Server-rendered pages, API responses |
| No server-side computation | Requires backend processing |
| Served directly from CDN | Needs compute resources (CPU, memory) |
| Highly cacheable and scalable | Harder to scale due to stateful logic |

Most SPAs (Single Page Applications) are static: they download once and run entirely in the browser. These can be hosted efficiently on CDNs.

But when server-side rendering (SSR) enters the picture (e.g., Next.js, Nuxt), the app becomes dynamic: pages are rendered on demand by a Node.js server.

Server-Side Rendering & Compute Requirements

With SSR, your artifact now includes server-side code, typically a Node.js application that listens for HTTP requests and renders HTML on the fly.
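To make that concrete, here is a minimal sketch of what such a server-side artifact does, using only Node's built-in `http` module (a real app would use Next.js or Express; the markup is illustrative):

```typescript
import { createServer } from "node:http";

// Renders HTML per request — the reason an SSR app needs a running
// Node.js process rather than just static files on a CDN.
export function renderPage(path: string): string {
  return `<!doctype html><html><body><h1>SSR demo</h1><p>You requested: ${path}</p></body></html>`;
}

export function createSsrServer() {
  return createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(renderPage(req.url ?? "/"));
  });
}

// createSsrServer().listen(3000); // uncomment to serve on port 3000
```

Unlike a static bundle, this process must stay running, which is why the hosting options below all revolve around provisioning and scaling compute.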

This changes the deployment model completely:

SSR Hosting Options

1. Virtual Machines (EC2)

[Diagram: 👤 User → Load Balancer (ALB / NGINX) → 🖥️ EC2 instance running the Node.js server → 🗄️ database or external API]

Where:

| Component | Role |
| --- | --- |
| Load Balancer | Distributes incoming traffic across multiple instances for high availability and scalability. Can be implemented with AWS ALB, NGINX, HAProxy, or cloud providers (e.g., Cloudflare Load Balancing). Often handles SSL termination, health checks, and routing rules. |
| EC2 | Virtual server running your Node.js application (e.g., Express, Next.js in SSR mode). Handles server-side rendering (SSR), API routes, authentication, and session management. Scales vertically (bigger instance) or horizontally (more instances). |
| Database / API | Source of dynamic data (e.g., PostgreSQL, MongoDB, REST API). Typically not exposed directly to the internet; accessed only via backend servers. Ensures data integrity, persistence, and secure access control. |

Full control, but manual scaling and maintenance.

2. Containers (Docker + Kubernetes)

[Diagram: 👤 User → Ingress Controller (nginx, ALB, Traefik) → Kubernetes Pod running the Node.js server, exposed through a Kubernetes Service (cluster-internal endpoint)]

Scalable, consistent, but complex setup.

Where:

| Component | Role |
| --- | --- |
| Ingress | Entry point for external traffic; handles routing, TLS, load balancing |
| Pod | Runs your Node.js app (one or more containers); ephemeral and scalable |
| Service | Stable internal endpoint that exposes Pods; enables discovery and load balancing inside the cluster |

πŸ” Example:
A request to app.example.com hits the Ingress, which routes it to a Pod running your Next.js app, which then calls an internal Service (e.g., user-service) to fetch data.

You can also scale out by running multiple Pods behind the Service:

[Diagram: User → Ingress → Service load-balancing across multiple Pods (two running v1, one running v2)]
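The scaled setup above corresponds to a Deployment plus a Service. A minimal manifest sketch, with a hypothetical image name and labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # multiple Pods behind one Service
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: registry.example.com/web:v1.2.3   # hypothetical image tag
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: { app: web }
  ports:
    - port: 80
      targetPort: 3000
```

The Service selects Pods by label, so scaling is just a matter of changing `replicas`; the internal endpoint stays stable.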

3. Serverless (Vercel, AWS Lambda)

[Diagram: 👤 User → API Gateway (REST/WebSocket, AWS or Azure) → Lambda function running the Next.js SSR / API route]

Auto-scaling, pay-per-use, minimal ops.

Where:

| Component | Role |
| --- | --- |
| API Gateway | Entry point for HTTP requests; routes to backend functions. Handles authentication, rate limiting, CORS, and SSL. Examples: AWS API Gateway, Azure API Management, Vercel’s router. |
| Lambda Function | Runs your Next.js SSR code (e.g., getServerSideProps) in a stateless, ephemeral environment. Auto-scales to zero; billed per request and execution time. Fully managed; no server maintenance required. |
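A Lambda-style SSR handler is just a stateless function from request to response. A minimal sketch, with the event shape simplified from AWS's API Gateway proxy format:

```typescript
// Simplified API Gateway proxy types — real events carry many more fields.
type ProxyEvent = { path: string };
type ProxyResult = {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
};

// Stateless, per-request rendering: the platform spins this function up
// on demand, scales it to zero when idle, and bills per invocation.
export async function handler(event: ProxyEvent): Promise<ProxyResult> {
  return {
    statusCode: 200,
    headers: { "Content-Type": "text/html" },
    body: `<html><body><h1>Rendered for ${event.path}</h1></body></html>`,
  };
}
```

Because no state survives between invocations, anything persistent (sessions, caches) has to live in an external store.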

4. PaaS (Render, Heroku)

[Diagram: 👤 User → Render PaaS → App instance, auto-deployed from GitHub]

Developer-friendly, good for mid-sized apps.

Where:

| Component | Role |
| --- | --- |
| User | End user accessing the application via a browser or client. Makes HTTP requests to the public domain (e.g., myapp.onrender.com). |
| Render (PaaS) | Platform-as-a-Service that hosts and runs your app. Automatically pulls code from GitHub, builds it, and deploys on every git push. Handles scaling, SSL, domains, and CI/CD; no infrastructure management needed. |
| App Instance | Your running application (e.g., Next.js, Express, Flask) in a managed environment. Render spins up containers or VMs automatically. Zero-downtime deploys, logs, and monitoring built in. |

Comparison between models

| Model | You Manage | Platform Manages | Best For |
| --- | --- | --- | --- |
| EC2 | OS, Server, App | Hardware only | Full control |
| Kubernetes | Pods, Cluster | Nodes (if EKS) | Scalable microservices |
| Serverless | Code only | Runtime, Scaling | Spiky traffic |
| PaaS (Render) | App Code | Build, Deploy, Scale | Simplicity & speed |

Secrets Management

Never hardcode credentials. Use a dedicated secrets store such as AWS Secrets Manager, HashiCorp Vault, or your CI platform’s encrypted secrets.

Example in deployment script:

export DB_PASSWORD=$(secrets-manager get db-pass --env production)
npm start
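In application code, the matching pattern is to read secrets from the environment and fail fast at startup if one is missing. A small sketch (the variable name is illustrative):

```typescript
// Reads a required secret from the environment, failing fast at boot
// instead of crashing later in the middle of a request.
function requireEnv(
  name: string,
  env: Record<string, string | undefined>,
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At startup:
// const dbPassword = requireEnv("DB_PASSWORD", process.env);
```

The secret itself is injected by the platform or deployment script, never committed to the repository.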

Manual server restarts cause downtime. Avoid in-flight updates in production.

Zero-Downtime Deployments: Blue-Green Strategy

To eliminate downtime during deployments, enterprises use advanced strategies like blue-green deployment.

🔵🟢 Blue-Green Deployment Diagram

[Diagram: User → Load Balancer → 🔵 BLUE v1.0.0 serving traffic; deploy v1.1.0 to 🟢 GREEN, test GREEN, and on pass switch the load balancer to GREEN (🟢 GREEN live); keep BLUE for 5–10 minutes, then 🗑️ terminate BLUE]

Step by step:

  1. Before deployment
    → Traffic goes to BLUE (v1.0.0)
    → GREEN is idle or outdated

  2. Deploy v1.1.0 to GREEN
    → No user impact

  3. Run automated tests on GREEN
    → Only proceed if healthy

  4. Flip the load balancer to GREEN
    → Users now see the new version

  5. Keep BLUE alive for 5–10 minutes
    → Instant rollback possible

  6. Terminate BLUE
    → Cleanup complete

If issues arise, the load balancer can immediately switch back to BLUE.
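The cut-over logic can be sketched as: health-check the candidate color, then flip the router only on success. The health check is injected so the sketch stays self-contained; a real implementation would hit the green environment over HTTP:

```typescript
type Color = "blue" | "green";

interface Router {
  live: Color;
}

// Flips traffic to the candidate color only if it passes its health check;
// otherwise the router keeps pointing at the current, known-good color.
async function cutOver(
  router: Router,
  candidate: Color,
  healthCheck: (c: Color) => Promise<boolean>,
): Promise<boolean> {
  if (await healthCheck(candidate)) {
    router.live = candidate; // instant switch; old color stays warm
    return true;
  }
  return false; // no user impact: traffic never moved
}
```

Rollback is the same operation in reverse: as long as the old color is still running, flipping `router.live` back restores service immediately.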

Conclusion

Deploying a frontend application in a professional setting goes far beyond git push && deploy. It involves disciplined Git workflows and PR gates, automated build pipelines, versioned artifacts in a repository, staged deployments with E2E testing, and zero-downtime release strategies like blue-green deployment.

Whether you’re working on a small startup project or aiming for enterprise-grade reliability, understanding this full pipeline makes you a stronger, more valuable engineer.

Remember: Tools like Vercel and Netlify automate much of this, but knowing what happens under the hood separates juniors from seniors.

Master these concepts, and you’ll not only deploy better applications but also ace technical interviews where CI/CD knowledge is expected.

See you on the next post.

Sincerely,

Eng. Adrian Beria.