
How to expose localhost to the internet — five ways (2026)

You're running a dev server on localhost:3000 and you need the rest of the internet to reach it. A client wants a preview URL, Stripe is trying to POST a webhook, a colleague wants to test on a real phone. Here are your options, honestly ranked.

There are five genuinely different ways to do this, and the answer to “which is best” depends on three things: how long you need the exposure, who needs to reach you, and whether HTTPS is required. We'll cover each option concretely with the exact command to run, then a safety checklist at the end that applies regardless of which path you pick.

Why you'd do this

The usual reasons, from most common to least:

  • Webhook testing. Stripe, GitHub, Shopify, Twilio, and dozens of other SaaS providers POST to a URL when something happens. Your laptop isn't a URL.
  • Sharing work-in-progress. Show a client a feature without deploying first. Real URL beats screen share.
  • Testing on real devices. Emulators lie. Real iPhones hitting your dev server catch bugs simulators don't.
  • Agents and LLM tools. MCP servers and custom tools for AI assistants usually need a public URL.
  • Reaching a device behind NAT. SSH into a home server, access a Raspberry Pi from outside, connect to an IoT gateway on residential Wi-Fi.

A word before you expose anything. Your localhost probably has dev-only shortcuts enabled: hard-coded admin accounts, debug routes, unsafe CORS, verbose error messages that leak stack traces. Any URL you expose is exposed to the whole internet, including bots scanning for exactly these shortcuts. We'll come back to this in the checklist — don't skip it.

Your five options

Option 1 — a tunneling service

You run an agent on your laptop; it opens an outbound connection to a public relay; the relay forwards inbound traffic back through the tunnel. This is what ngrok invented and what almost everyone means today when they say “expose localhost.”

How it looks:

# ngrok
ngrok http 3000

# 21tunnel
tunnel21 http 3000

# Cloudflare Tunnel
cloudflared tunnel --url http://localhost:3000

# All three print a public HTTPS URL you can paste anywhere.

Pros: zero setup. HTTPS by default. Works through any network that lets you reach the internet (even hotel Wi-Fi, even corporate networks). No router config. Most services include a request inspector so you can see what's coming in.

Cons: a third party sees your traffic (on most SaaS options — one of the reasons our product is self-hostable). Free tiers often have rate caps. The URL changes every time you restart, unless you pay for a reserved subdomain.

Pick this when: the exposure is temporary (minutes to days), you want HTTPS automatically, and you'd rather not run any infrastructure. See our roundup of tunneling tools for seven specific options.
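
For webhook testing, it helps to exercise the handler by hand before pointing the provider at the tunnel. A minimal sketch; the /webhook path and the tunnel hostname are placeholders, so substitute whatever your app and tunnel actually use:

```shell
# Sanity-check the handler directly first (no tunnel involved):
curl -s -X POST http://localhost:3000/webhook \
  -H 'Content-Type: application/json' \
  -d '{"type":"ping"}'

# Then repeat through the public URL the tunnel printed
# (hostname is a placeholder; yours will differ):
curl -s -X POST https://abc123.tunnel.example/webhook \
  -H 'Content-Type: application/json' \
  -d '{"type":"ping"}'
```

If the first curl fails, the problem is your app, not the tunnel.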

Option 2 — router port-forward + dyndns

Open a port on your router, point it at your machine's LAN IP, and register a domain with a dynamic-DNS service so people can type a name instead of an ever-changing public IP.

How it looks:

  1. Log into your router's admin page (usually 192.168.1.1). Find “port forwarding” or “virtual server.”
  2. Forward public port 80 (or 443, ideally) to your machine's LAN IP on port 3000.
  3. Sign up for a dyndns provider (No-IP, DuckDNS, Dynu). Install their client on your machine so it keeps your public IP up to date under yourname.duckdns.org.
  4. Terminate TLS yourself: run Caddy or Traefik in front, pointed at localhost:3000; they'll get a Let's Encrypt cert automatically.
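
Step 4 is less work than it sounds. With Caddy, a single command gives you a TLS-terminating proxy in front of the dev server (yourname.duckdns.org stands in for the domain from step 3):

```shell
# Caddy obtains and renews the Let's Encrypt cert automatically.
# Requires ports 80/443 forwarded to this machine (steps 1-2).
caddy reverse-proxy --from yourname.duckdns.org --to localhost:3000
```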

Pros: no middleman. Your traffic goes directly from the client to your box. Works forever, no per-request limits, no SaaS dependency.

Cons: you need control over your router. Doesn't work on corporate or university networks. Many residential ISPs use CGNAT and won't give you a real public IP. Exposes your actual IP address, which you usually don't want. If your machine is off, the URL stops working.

Pick this when: you own a home lab, have a static public IP, and want permanent always-on exposure without paying monthly.

Option 3 — a mesh VPN (Tailscale / Nebula / WireGuard)

Install a small client on every device that needs to reach localhost. They join an overlay network; the local port becomes accessible to anyone on the same mesh. For public ingress, products like Tailscale Funnel let you expose a mesh service to the real internet.

How it looks:

# Tailscale
tailscale up
tailscale serve 3000

# Publicly:
tailscale funnel 3000

Pros: strongest identity story in the list — every peer is authenticated. Works in any network that allows outbound UDP or HTTPS. The mesh model is genuinely powerful for internal tools.

Cons: your audience must install something. Great for “my team reaches my laptop,” bad for “Stripe posts a webhook to my laptop.” Funnel covers the public case but adds a SaaS dependency you were trying to avoid.

Pick this when: your users are teammates, not the public. If you already run Tailscale, Funnel is one command away.

Option 4 — run a tiny cloud VM as a reverse proxy

Spin up a $5/mo VPS. Set up nginx or Caddy to listen on public ports and forward to your laptop over an SSH reverse tunnel. Your laptop dials out to the VPS; the VPS holds the public IP.

How it looks:

# On your laptop:
ssh -N -R 127.0.0.1:3000:localhost:3000 user@my-vps.com

# On the VPS, nginx or Caddy listens on 443 and proxies to 127.0.0.1:3000
# (which is now your laptop, thanks to the reverse tunnel).
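
The VPS side can be a single nginx server block. A sketch, assuming certbot has already issued a cert for my-vps.com:

```nginx
server {
    listen 443 ssl;
    server_name my-vps.com;

    ssl_certificate     /etc/letsencrypt/live/my-vps.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/my-vps.com/privkey.pem;

    location / {
        # 127.0.0.1:3000 here is the laptop's dev server,
        # reachable through the reverse SSH tunnel above.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```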

Pros: you own the whole pipeline. No SaaS, no per-request limits, your choice of cert + auth. Cheap at $5-10/mo for hobby traffic.

Cons: you have to maintain a VPS. You have to keep the SSH tunnel alive (autossh or systemd). You're reinventing a tunneling service from parts.
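
"Keep the SSH tunnel alive" in practice means a systemd unit on the laptop. A sketch, assuming key-based auth to my-vps.com is already configured; Restart=always re-dials after any drop:

```ini
# ~/.config/systemd/user/reverse-tunnel.service
[Unit]
Description=Reverse SSH tunnel to my-vps.com
After=network-online.target

[Service]
ExecStart=/usr/bin/ssh -N \
    -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
    -o ExitOnForwardFailure=yes \
    -R 127.0.0.1:3000:localhost:3000 user@my-vps.com
Restart=always
RestartSec=10

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now reverse-tunnel.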

Pick this when: you genuinely enjoy maintaining one more thing, OR you have regulatory reasons to keep your users' traffic on infrastructure you own. This is essentially what self-hosting a tunneling service looks like with 100% homegrown parts — by the time you have it working well, you've rebuilt something close to what self-host 21tunnel gives you for free.

Option 5 — WebRTC / peer-to-peer

For interactive use cases (pair programming, screen sharing, real-time collab), a peer-to-peer approach skips the relay entirely. Tools built on WebRTC handle NAT traversal and encryption for you.

Pros: no server involved in the data path once connected; lowest latency. Good for streaming video or audio.

Cons: doesn't work for anything that needs a public URL (webhooks, link-sharing). The receiver has to be an active participant running compatible software.

Pick this when: you need real-time collaboration, not public exposure.

Short version:

  • Anything under an hour → a tunneling service (ngrok, tunnel21, cloudflared). The command is one line.
  • Permanent webhook endpoint from your laptop → tunneling service with a reserved domain. ngrok Personal, or our free Hobby tier (reserved domain included on free).
  • Long-lived home-lab service on your own IP → router port-forward + Caddy for TLS, if your ISP cooperates. If not, cloud VPS + reverse SSH tunnel.
  • Company-internal tools for teammates → Tailscale. Best-in-class ergonomics for this specific shape.
  • Production ingress for a product → something purpose-built. At that point you're not really exposing “localhost” — you're running a service, and you probably want a real load balancer (or a self-hosted tunnel service that does it for you).

Framework-specific notes

A quick cheat sheet for the common dev servers:

# Next.js — defaults to localhost:3000
next dev
tunnel21 http 3000

# Vite — defaults to localhost:5173
vite
tunnel21 http 5173

# Rails — default localhost:3000, may need --binding 0.0.0.0
rails server -b 0.0.0.0
tunnel21 http 3000

# Django
python manage.py runserver 0.0.0.0:8000
tunnel21 http 8000

# Flask
flask run --host=0.0.0.0
tunnel21 http 5000

# Express
node server.js  # (listens on your chosen port)
tunnel21 http 3000

The --host 0.0.0.0 flag (or framework equivalent) matters when your tunnel agent runs in a container or under WSL: by default most frameworks bind only to 127.0.0.1, which other network namespaces can't reach.
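
A quick way to confirm what your dev server actually bound to (Linux; ss ships with iproute2):

```shell
# List TCP listeners on port 3000 with their bind address:
ss -ltn 'sport = :3000'
# A Local Address of 127.0.0.1:3000 means loopback-only;
# 0.0.0.0:3000 means reachable from other namespaces and containers.
```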

A safety checklist

Before you hit Enter on any of these, run through this. None of it is overkill. Bots start scanning exposed URLs within minutes.

  1. Disable debug mode. Rails config.consider_all_requests_local = false, Django DEBUG = False, Flask debug=False. Debug error pages leak environment variables, source paths, stack traces.
  2. Don't expose admin routes. /admin, /api/internal, /docs — either auth-gate them or block them at the tunnel layer before exposing. Many tunneling services (ours, Cloudflare Tunnel, ngrok paid) let you require OAuth at the edge.
  3. Turn on HTTPS-only cookies. If you're testing login flows, your session cookies need Secure; HttpOnly; SameSite=Lax or better.
  4. Disable CORS wildcards. Access-Control-Allow-Origin: * plus a public URL plus a browser-based attack = bad day. Restrict to the domains you actually need.
  5. Rate-limit at the tunnel. Don't let a bot hammer your Postgres; set a reasonable RPS cap on the tunnel. Most tunneling services in this list support this; just configure it.
  6. Log everything you care about. Traffic inspection is default-on in most tunneling services and it's worth reviewing — if something looks off, you want the receipts.
  7. Take it down when you're done. The #1 source of accidentally-public dev servers is leaving tunnels up overnight. Ctrl-C when you're finished, or use a timeout on long-running tunnels.
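
Item 7 can be automated with timeout(1) from coreutils, which kills the tunnel process after a fixed window so a forgotten terminal can't stay exposed overnight (tunnel21 shown; the same wrapper works with ngrok or cloudflared):

```shell
# Hard-stop the tunnel after 4 hours no matter what:
timeout 4h tunnel21 http 3000
```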

The short version: for almost everyone, the right answer is “run a tunneling service, maybe a self-hosted one if compliance cares.” The other options have specific niches, but the 80% case is solved by one command. Our comparison page walks through the decision between SaaS tunneling vendors.

If you want to try us: 21tunnel is free on Hobby (3 tunnels, 50 GB/mo, custom domain on signup, no request rate cap). Or run the whole thing on your own VM.