Serverless and Edge Functions: Why 75% of New Applications Use This Architecture

Hello HaWkers, the way we deploy applications has changed drastically in recent years. Serverless is no longer new, but Edge Functions added a new dimension by placing your code geographically close to users. Today, 75% of new web applications are born with some serverless or edge component.

But when to use each approach? What are the real trade-offs? And how do JavaScript and Node.js fit into this scenario?

The Current Landscape

In 2025, major platforms significantly expanded their edge and serverless offerings:

Main Players

Cloudflare Workers:

  • 320+ global points of presence
  • Startup in < 5ms (cold start)
  • WebSockets, Durable Objects support
  • Aggressive pricing (100k free requests/day)

Vercel Edge Functions:

  • Integrated with Next.js, Nuxt, SvelteKit
  • Automatic deploy via Git
  • Edge Middleware for smart routing
  • Granular edge cache

AWS Lambda@Edge / CloudFront Functions:

  • Integrated with AWS ecosystem
  • Largest number of regions
  • CDN event triggers
  • Proven massive scale

Deno Deploy:

  • Native TypeScript
  • Instant deploy
  • Ultra-low latency
  • Framework-agnostic

Adoption Numbers

2025 statistics:

  • 75% of new web applications use serverless
  • 45% already use edge functions
  • Average 60% latency reduction with edge
  • 40-70% lower cost than traditional servers

Understanding the Differences

Serverless and Edge are related but distinct concepts.

Traditional Serverless (Lambda, Cloud Functions)

Your code runs in a data center, but you don't manage servers:

// AWS Lambda - Classic Handler (AWS SDK v3)
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';
import { processData } from './process-data'; // app-specific helper

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async (event) => {
  const data = JSON.parse(event.body);

  // Process data
  const result = await processData(data);

  // Save to DynamoDB
  await docClient.send(new PutCommand({
    TableName: 'orders',
    Item: result
  }));

  return {
    statusCode: 200,
    body: JSON.stringify({ success: true, id: result.id })
  };
};

Characteristics:

  • Cold starts can reach 1-5 seconds
  • Access to entire cloud ecosystem (databases, queues, etc.)
  • Generous, configurable memory and timeout limits (up to 10 GB and 15 minutes on Lambda)
  • Ideal for heavy processing

Edge Functions

Your code runs at points of presence close to the user:

// Cloudflare Workers - Edge Function
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Routing logic at the edge
    if (url.pathname.startsWith('/api/')) {
      return handleApi(request, env);
    }

    // Geolocation personalization
    const country = request.cf?.country || 'US';
    const content = await getLocalizedContent(country, env);

    return new Response(content, {
      headers: { 'Content-Type': 'text/html' }
    });
  }
};

async function getLocalizedContent(country: string, env: Env) {
  // Fetch from KV Storage at edge
  const cached = await env.CONTENT_KV.get(`home-${country}`);
  if (cached) return cached;

  // Fallback
  return env.CONTENT_KV.get('home-default');
}

Characteristics:

  • Cold starts in < 50ms (usually < 5ms)
  • Runtime limitations (CPU time, memory)
  • Limited APIs (no filesystem, some Node APIs)
  • Ideal for light logic and personalization

Practical Use Cases

When to Use Edge Functions

A/B Testing and Feature Flags:

// Vercel Edge Middleware
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  // Determine A/B test bucket at edge
  const bucket = request.cookies.get('ab-bucket')?.value
    || (Math.random() < 0.5 ? 'A' : 'B');

  const response = NextResponse.next();

  // Set cookie if new user
  if (!request.cookies.get('ab-bucket')) {
    response.cookies.set('ab-bucket', bucket, {
      maxAge: 60 * 60 * 24 * 30 // 30 days
    });
  }

  // Rewrite to correct version
  if (bucket === 'B' && request.nextUrl.pathname === '/checkout') {
    return NextResponse.rewrite(new URL('/checkout-new', request.url));
  }

  return response;
}

export const config = {
  matcher: ['/checkout', '/product/:path*']
};

Authentication and Protection:

// JWT validation at edge
import { jwtVerify } from 'jose';

export default {
  async fetch(request: Request, env: Env) {
    const token = request.headers.get('Authorization')?.replace('Bearer ', '');

    if (!token) {
      return new Response('Unauthorized', { status: 401 });
    }

    try {
      const secret = new TextEncoder().encode(env.JWT_SECRET);
      const { payload } = await jwtVerify(token, secret);

      // Add user info to request
      const newHeaders = new Headers(request.headers);
      newHeaders.set('X-User-Id', payload.sub as string);

      return fetch(request, { headers: newHeaders });
    } catch {
      return new Response('Invalid token', { status: 401 });
    }
  }
};

When to Use Traditional Serverless

Heavy Processing:

// AWS Lambda for image processing
import sharp from 'sharp';
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';

export const handler = async (event) => {
  const { bucket, key } = event;

  // Fetch original image
  const s3 = new S3Client({});
  const original = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));

  // Process with Sharp (not available at edge)
  const processed = await sharp(await original.Body.transformToByteArray())
    .resize(800, 600, { fit: 'inside' })
    .webp({ quality: 80 })
    .toBuffer();

  // Save optimized version
  await s3.send(new PutObjectCommand({
    Bucket: bucket,
    Key: key.replace(/\.\w+$/, '.webp'),
    Body: processed,
    ContentType: 'image/webp'
  }));

  return { processed: true };
};

Database Integration:

// Lambda with database connection
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10
});

export const handler = async (event) => {
  const { userId, action, data } = JSON.parse(event.body);

  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    // Complex transaction
    const result = await client.query(
      'INSERT INTO orders (user_id, data) VALUES ($1, $2) RETURNING id',
      [userId, data]
    );

    await client.query(
      'UPDATE users SET last_order = NOW() WHERE id = $1',
      [userId]
    );

    await client.query('COMMIT');

    return {
      statusCode: 200,
      body: JSON.stringify({ orderId: result.rows[0].id })
    };
  } catch (e) {
    await client.query('ROLLBACK');
    throw e;
  } finally {
    client.release();
  }
};

Hybrid Architecture

The most common approach in 2025 is combining edge and serverless:

Recommended Pattern

User
   |
   v
[Edge Function] ─────────────────┐
   │                             │
   │ - Auth validation           │
   │ - A/B testing               │
   │ - Geo routing               │
   │ - Cache check               │
   │                             │
   v                             v
[CDN Cache] <──── or ────> [Origin/Lambda]
   │                             │
   │ - Static assets             │ - Business logic
   │ - Cached responses          │ - Database ops
   │                             │ - Heavy processing
   v                             v
[User]                     [Database/Storage]

Example with Next.js

// middleware.ts (Edge)
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  // Light logic at edge
  const country = request.geo?.country || 'US';
  const response = NextResponse.next();
  response.headers.set('X-Country', country);
  return response;
}

// app/api/orders/route.ts (Serverless)
import { prisma } from '@/lib/prisma'; // shared Prisma client instance

export async function POST(request: Request) {
  // Heavy logic in serverless
  const data = await request.json();

  const order = await prisma.order.create({
    data: {
      ...data,
      country: request.headers.get('X-Country')
    }
  });

  return Response.json(order);
}

Trade-offs and Considerations

Edge Limitations

Common restrictions:

  • Limited CPU time (usually 50-100ms)
  • Limited memory (128MB typical)
  • No filesystem access
  • Subset of Node.js APIs
  • More complex database connections
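To make the last point concrete: edge runtimes generally cannot hold a pg/mysql TCP connection pool open, so database access from the edge usually goes over HTTP (serverless drivers, data APIs). The endpoint shape below (`DB_HTTP_URL`, `DB_TOKEN`, a `{ sql, params }` body) is hypothetical, not any real provider's API — it only illustrates the pattern.

```typescript
// Edge-side SQL over HTTP — a sketch, since TCP pooling is unavailable.
// DB_HTTP_URL, DB_TOKEN, and the request body shape are hypothetical.
interface Env {
  DB_HTTP_URL: string; // e.g. a serverless driver's HTTP endpoint
  DB_TOKEN: string;
}

// Build the request body separately so the shape is easy to test.
export function buildQueryPayload(sql: string, params: unknown[]): string {
  return JSON.stringify({ sql, params });
}

export async function queryFromEdge(env: Env, sql: string, params: unknown[]) {
  const res = await fetch(env.DB_HTTP_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${env.DB_TOKEN}`
    },
    body: buildQueryPayload(sql, params)
  });

  if (!res.ok) {
    throw new Error(`Query failed: ${res.status}`);
  }
  return res.json();
}
```

Each query pays an extra HTTP round-trip, which is why complex, multi-statement transactions belong in traditional serverless instead.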

When NOT to Use Edge

Avoid edge for:

  • Complex database queries
  • Large file processing
  • Operations that take > 100ms
  • Heavy Node.js dependencies (sharp, puppeteer)

Costs

Approach         Typical Cost         Cold Start     Latency
Edge Function    $0.50/million req    < 5ms          10-50ms
Lambda           $0.20/million req    100-5000ms     50-200ms
Container        $50-200/month        0ms            20-100ms
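As a back-of-the-envelope sanity check on the table: the comparison below uses only the per-request prices shown above; real bills also include CPU time, duration, and data transfer, so treat it as an assumption-laden sketch rather than a pricing calculator.

```typescript
// Monthly request fees from the table's per-million prices only —
// duration and data-transfer charges are deliberately ignored.
const PRICE_PER_MILLION = { edge: 0.5, lambda: 0.2 } as const;

export function monthlyRequestCost(
  kind: keyof typeof PRICE_PER_MILLION,
  requests: number
): number {
  return (requests / 1_000_000) * PRICE_PER_MILLION[kind];
}

// At 10M requests/month: edge ≈ $5, Lambda ≈ $2 in request fees,
// versus a fixed $50-200/month for an always-on container.
```

The crossover point matters: at low or spiky traffic, pay-per-request wins easily; at sustained high volume, a flat-rate container can come out cheaper.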

Conclusion

Serverless and Edge Functions are not exclusive approaches, but complementary. Edge is ideal for light logic, personalization, and latency reduction. Traditional serverless remains the best option for heavy processing and database integration.

In 2025, most well-architected applications use both: edge for the first processing layer and serverless for more complex operations.

If you want to understand more about modern JavaScript architectures, I recommend checking out the article on Async/Await in JavaScript where we explore patterns that work well in these architectures.

Let's go! 🦅
