Edge Computing and Serverless in 2025: How Cloudflare, Vercel and AWS Are Redefining Infrastructure
Hello HaWkers! The way we deploy applications is undergoing a quiet revolution. Edge computing has gone from buzzword to standard strategy for companies that take performance seriously.
Have you ever wondered why some websites load instantly while others take seconds? The answer often lies in where the code runs. And in 2025, that "where" is increasingly closer to the user.
What is Edge Computing
The Fundamental Concept
Edge computing is the practice of executing code on geographically distributed servers, as close as possible to end users.
Traditional model (centralized):
```text
User (Brazil) --> Server (USA) --> Response (latency: 200ms+)
```

Edge model (distributed):

```text
User (Brazil) --> Edge (Sao Paulo) --> Response (latency: 20ms)
```

Why This Matters
According to IDC data, edge computing usage grew 14% globally in 2024, and the trend continues strong in 2025.
Main benefits:
- Latency reduced by up to 90%
- Better user experience
- Optimized costs (pay per use)
- Automatic scaling
- Global resilience
💡 Context: In e-commerce, every 100ms of latency can reduce conversions by about 1%. At scale that is millions in revenue: a store doing $100M a year gives up roughly $1M for each extra 100ms.
The 2025 Ecosystem
Cloudflare Workers
Cloudflare Workers has become the reference platform for edge computing, with presence in over 300 cities globally.
```js
// worker.js - Edge function on Cloudflare
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    // Routing logic at the edge
    if (url.pathname === '/api/user') {
      const userId = url.searchParams.get('id');

      // Fetch from KV Storage (globally distributed)
      const userData = await env.USERS_KV.get(userId, 'json');

      if (!userData) {
        return new Response('User not found', { status: 404 });
      }

      return Response.json(userData);
    }

    // Proxy to origin
    return fetch(request);
  }
};
```

Cloudflare Workers resources:
- KV Storage: Distributed key-value store
- Durable Objects: Persistent state at the edge
- R2: S3-compatible object storage
- D1: SQLite database at the edge (see the sketch after this list)
- Queues: Message queues for async processing
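To make one of these concrete, here is a minimal sketch of querying D1 from a Worker. The `DB` binding name and the `users` table are illustrative assumptions, not part of the example above (types come from `@cloudflare/workers-types`):

```ts
// Minimal sketch of a D1 query at the edge - binding and schema are assumed
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    const id = new URL(request.url).searchParams.get('id');
    if (!id) return new Response('Missing id', { status: 400 });

    // Parameterized query runs against the edge-hosted SQLite database
    const user = await env.DB.prepare('SELECT * FROM users WHERE id = ?')
      .bind(id)
      .first();

    return user ? Response.json(user) : new Response('Not found', { status: 404 });
  },
};
```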
Vercel Edge Functions
Vercel popularized edge functions for frontend developers, especially with Next.js:
```ts
// app/api/hello/route.ts - Edge function in Next.js
export const runtime = 'edge';

export async function GET(request: Request) {
  const { searchParams } = new URL(request.url);
  const name = searchParams.get('name') || 'World';

  // Detect user location from Vercel's geo headers
  const country = request.headers.get('x-vercel-ip-country') || 'Unknown';
  const city = request.headers.get('x-vercel-ip-city') || 'Unknown';

  return Response.json({
    message: `Hello, ${name}!`,
    location: { country, city },
    timestamp: new Date().toISOString(),
  });
}
```

Vercel differentiators:
- Native integration with Next.js
- Edge middleware by default
- Edge Config for feature flags (see the sketch after this list)
- Real-time analytics
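As a taste of the feature-flag workflow, here is a minimal sketch of reading a flag from Edge Config in an edge route. The `checkout-v2` key is an illustrative assumption; the store itself is configured in the Vercel dashboard:

```ts
// Reading a feature flag from Vercel Edge Config (key name is illustrative)
import { get } from '@vercel/edge-config';

export const runtime = 'edge';

export async function GET(): Promise<Response> {
  // Reads are served from Vercel's edge network - no origin round trip
  const checkoutV2 = await get<boolean>('checkout-v2');
  return Response.json({ checkoutV2: checkoutV2 ?? false });
}
```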
AWS Lambda@Edge
AWS offers Lambda@Edge to execute code at CloudFront points of presence:
```js
// Lambda@Edge for content personalization
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;

  // Detect device from the User-Agent header
  const userAgent = headers['user-agent'][0].value;
  const isMobile = /Mobile|Android|iPhone/i.test(userAgent);

  // Modify URI based on device
  if (isMobile && !request.uri.includes('/m/')) {
    request.uri = '/m' + request.uri;
  }

  return request;
};
```
Practical Use Cases
1. Real-Time Personalization
```ts
// Edge function for personalization
export default async function middleware(request: Request) {
  const country = request.headers.get('x-vercel-ip-country');

  // Redirect to localized version
  if (country === 'BR' && !request.url.includes('/pt')) {
    return Response.redirect(new URL('/pt' + new URL(request.url).pathname, request.url));
  }

  // A/B testing at the edge (in production, persist the bucket in a
  // cookie so a user stays in the same variant across requests)
  const bucket = Math.random() < 0.5 ? 'A' : 'B';
  const response = await fetch(request);

  const newResponse = new Response(response.body, response);
  newResponse.headers.set('X-Experiment-Bucket', bucket);

  return newResponse;
}
```

2. API Gateway at the Edge
```js
// Distributed rate limiting
export default {
  async fetch(request, env) {
    const ip = request.headers.get('CF-Connecting-IP');

    // Atomic counter in a Durable Object keyed by client IP
    const id = env.RATE_LIMITER.idFromName(ip);
    const rateLimiter = env.RATE_LIMITER.get(id);

    const allowed = await rateLimiter.fetch(request);

    if (!allowed.ok) {
      return new Response('Too Many Requests', {
        status: 429,
        headers: { 'Retry-After': '60' }
      });
    }

    // Continue to backend
    return fetch(request);
  }
};
```
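The example above delegates counting to a Durable Object. Here is a minimal sketch of what that object might look like, using a fixed 60-second window; the class shape is the real Workers API, but the `RateLimiter` internals and the 100-request threshold are illustrative assumptions:

```ts
// Hypothetical RateLimiter Durable Object behind the env.RATE_LIMITER binding
// above (types from @cloudflare/workers-types)
export class RateLimiter {
  constructor(private state: DurableObjectState) {}

  async fetch(_request: Request): Promise<Response> {
    const now = Date.now();

    // One object per IP, so storage reads/writes here are strongly consistent
    const stored = await this.state.storage.get<{ windowStart: number; count: number }>('window');
    let { windowStart, count } = stored ?? { windowStart: now, count: 0 };

    // Reset the fixed 60-second window when it expires
    if (now - windowStart > 60_000) {
      windowStart = now;
      count = 0;
    }

    count++;
    await this.state.storage.put('window', { windowStart, count });

    // Allow up to 100 requests per window (threshold is illustrative)
    return count <= 100
      ? new Response('OK')
      : new Response('limit exceeded', { status: 429 });
  }
}
```

Because each IP maps to exactly one object, the counter never races, which is what makes the limit reliable across hundreds of edge locations.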
3. Smart Caching

```ts
// Cache with revalidation at the edge (caches.default is the
// Cloudflare Workers Cache API)
export async function GET(request: Request) {
  const url = new URL(request.url);

  // Try the edge cache first
  const cache = caches.default;
  let response = await cache.match(request);

  if (response) {
    // Revalidate in the background if stale
    const age = parseInt(response.headers.get('age') || '0');
    if (age > 60) {
      // Stale-while-revalidate (in a Worker, wrap this in ctx.waitUntil
      // so the runtime keeps the refresh alive after the response is sent)
      fetch(request).then(freshResponse => {
        cache.put(request, freshResponse.clone());
      });
    }
    return response;
  }

  // Fetch from origin
  response = await fetch(`https://api.example.com${url.pathname}`);

  // Cache at the edge
  const cachedResponse = new Response(response.body, response);
  cachedResponse.headers.set('Cache-Control', 'public, max-age=300');
  cache.put(request, cachedResponse.clone());

  return cachedResponse;
}
```
Platform Comparison
Cold Start Time
| Platform | Cold Start |
|---|---|
| Cloudflare Workers | ~0ms (always warm) |
| Vercel Edge Functions | ~5ms |
| AWS Lambda@Edge | ~50-200ms |
| AWS Lambda (Node) | ~100-500ms |
Limits and Resources
| Resource | Cloudflare | Vercel | AWS Lambda@Edge |
|---|---|---|---|
| CPU Time | 50ms (free) / 30s (paid) | 30s | 5s (viewer) / 30s (origin) |
| Memory | 128MB | 128MB | 128MB |
| Bundle Size | 10MB | 4MB | 10MB |
| Locations | 300+ | 35+ | 400+ |
Pricing (approximate)
Cloudflare Workers:
- 100k requests/day: Free
- After: $0.50/million requests
Vercel Edge:
- 100k executions/month: Free (Hobby)
- After: $0.65/million executions
AWS Lambda@Edge:
- $0.60/million requests
- $0.00005001 per GB-second of compute

At these list prices, 10 million requests a month comes to roughly $5 on Cloudflare, $6.50 on Vercel, and $6 plus compute time on Lambda@Edge.
Best Practices For Edge
1. Minimize Dependencies
```js
// Bad - pulls the whole library into the bundle
import lodash from 'lodash';
const result = lodash.groupBy(data, 'category');

// Good - only what is needed
const groupBy = (arr, key) => arr.reduce((acc, item) => {
  (acc[item[key]] = acc[item[key]] || []).push(item);
  return acc;
}, {});
```

2. Use Streaming For Large Responses
```ts
export async function GET() {
  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    async start(controller) {
      for (let i = 0; i < 100; i++) {
        const chunk = encoder.encode(`data: ${JSON.stringify({ count: i })}\n\n`);
        controller.enqueue(chunk);
        await new Promise(r => setTimeout(r, 100));
      }
      controller.close();
    }
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
    }
  });
}
```
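On the client side, a `text/event-stream` response like this can be consumed with the standard EventSource API. A minimal sketch, assuming the route above is served at `/api/stream` (the path is illustrative):

```ts
// Consuming the server-sent event stream from the browser
const source = new EventSource('/api/stream');

source.onmessage = (event: MessageEvent<string>) => {
  // Each `data: {...}` chunk arrives already stripped of the SSE prefix
  const { count } = JSON.parse(event.data);
  console.log('received chunk', count);
};

source.onerror = () => source.close();
```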
3. Leverage Cache Strategically

```ts
export async function GET(request: Request) {
  const url = new URL(request.url);

  // Cache headers based on content type
  const isStaticAsset = /\.(js|css|png|jpg)$/.test(url.pathname);

  const response = await fetch(request);
  const newResponse = new Response(response.body, response);

  if (isStaticAsset) {
    // Static assets - long cache
    newResponse.headers.set('Cache-Control', 'public, max-age=31536000, immutable');
  } else {
    // Dynamic content - stale-while-revalidate
    newResponse.headers.set('Cache-Control', 'public, max-age=60, stale-while-revalidate=600');
  }

  return newResponse;
}
```
Integration with JavaScript Runtimes
Bun at the Edge
Bun is gaining ground on the server, and because it speaks the same Request/Response fetch-handler shape as the edge platforms, handler code often ports between a Bun server and an edge function with little change:

```ts
// Bun HTTP server - the same fetch-handler shape as an edge function
Bun.serve({
  async fetch(_request: Request): Promise<Response> {
    // Bun-specific file API; on an edge platform you would read from KV/R2 instead
    const file = Bun.file('./data.json');
    const data = await file.json();

    return Response.json(data);
  },
});
```

Deno Deploy
Deno Deploy offers a security-focused alternative:
```ts
// Deno Deploy edge function
Deno.serve((request) => {
  const url = new URL(request.url);

  if (url.pathname === '/api/time') {
    return Response.json({
      time: new Date().toISOString(),
      region: Deno.env.get('DENO_REGION'),
    });
  }

  return new Response('Not Found', { status: 404 });
});
```

The Future of Edge Computing
Trends For 2026
What to expect:
- Edge databases becoming mainstream
- AI inference at the edge (see the sketch after this list)
- WebAssembly expanding possibilities
- Mesh of interconnected edges
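The AI trend is already partly here: Cloudflare's Workers AI, for example, exposes model inference to Workers through a binding. A minimal sketch, where the binding name, model and prompt are illustrative assumptions:

```ts
// Running inference from an edge function via Workers AI
// (Ai type from @cloudflare/workers-types)
export default {
  async fetch(_request: Request, env: { AI: Ai }): Promise<Response> {
    const answer = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', {
      prompt: 'Summarize edge computing in one sentence.',
    });
    return Response.json(answer);
  },
};
```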
Challenges to overcome:
- Distributed data consistency
- Debugging in distributed environment
- Vendor lock-in
Conclusion
Edge computing and serverless have gone from being trends to becoming the standard way to deploy modern applications. With virtually zero cold starts, global presence and usage-based costs, there is no longer any excuse for slow applications.
For JavaScript developers, the timing is ideal: the same skills you already have work at the edge. Cloudflare Workers, Vercel Edge Functions and AWS Lambda@Edge use standard web APIs, making migration easier.
The most important thing is to start. Move a single function to the edge, measure the difference, and expand from there. Your users will thank you.
If you want to understand more about new web development approaches, I recommend checking out the article about Server-First Development with Astro and Remix where you will discover how new frameworks are leveraging the edge to deliver incredible experiences.

