
The "Silent 502": Solving Internal Proxy Conflicts in Self-Hosted Postiz

Self-hosting Postiz behind Caddy and Cloudflare? Learn how to fix 502 Bad Gateway errors caused by internal loopback issues with Docker networking.


Self-hosting complex applications like Postiz—an open-source social media scheduling tool—behind a reverse proxy (like Caddy) and a CDN (like Cloudflare) adds layers of networking that can lead to the dreaded 502 Bad Gateway.

If your frontend loads but your registration or login forms fail, you are likely hitting the "Internal Loopback" problem. Here is how we diagnosed and fixed it.

The Challenge: The Monorepo Networking Trap

Postiz isn't just one service; it's a monorepo. Inside the Docker container, a process manager (PM2) runs the Frontend (Next.js) on port 4200 and the Backend (NestJS API) on port 3000.
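
You can verify this split from inside the running container. A quick probe (assuming curl is available in the image; substitute wget -qO- if it isn't):

```bash
# Both services should answer on the container's loopback interface.
docker exec postiz curl -sf -o /dev/null http://localhost:4200 && echo "frontend (Next.js) up"
docker exec postiz curl -sf -o /dev/null http://localhost:3000 && echo "backend (NestJS) up"
```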

The breakdown happened here:

  1. The user clicks "Register."
  2. The Frontend tries to send that data to the API.
  3. Because the app is configured with its public URL (NEXT_PUBLIC_BACKEND_URL), the Frontend tries to exit the container, go out to the internet, pass through Cloudflare, and come back in through Caddy.
  4. Docker's default bridge networking often blocks this "hairpin" (NAT loopback) turn, or the API rejects the request because it doesn't recognize the proxy headers.
  5. Caddy sees the Frontend hang while waiting for the API, times out, and serves the user a 502 Bad Gateway.
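
You can reproduce the failure from inside the container itself. This sketch contrasts the hairpin route with the direct loopback route (again assuming curl is in the image; the domain is the placeholder used later in this post):

```bash
# Hairpin: container -> internet -> Cloudflare -> Caddy -> container.
# Under default bridge networking this often hangs until the timeout.
docker exec postiz curl -m 10 -s -o /dev/null -w "%{http_code}\n" https://www.mydomain.com/api

# Direct loopback: never leaves the container, answers immediately.
docker exec postiz curl -m 10 -s -o /dev/null -w "%{http_code}\n" http://localhost:3000
```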

The Solution: Three Steps to Stability

To solve this, we had to move from a "standard" install to a "proxy-aware" configuration.

1. Manual Database Synchronization

Even with the correct networking, the backend will crash if the database tables haven't been created. In some production Docker images, the automatic migration fails. We solved this by manually reaching into the container and forcing a Prisma sync:

```bash
docker exec postiz npx prisma db push --schema /app/libraries/nestjs-libraries/src/database/prisma/schema.prisma
```
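
To confirm the sync actually created the tables, you can list them straight from the Postgres container. The names here mirror the DATABASE_URL in the final command below:

```bash
docker exec postiz-db psql -U user -d postiz -c '\dt'
```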

2. Defining the Internal Bridge

We had to tell the Frontend exactly where the Backend was located inside the same container. This bypasses the internet entirely for API calls.

Fix: Added BACKEND_INTERNAL_URL="http://localhost:3000"
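
Once the container is restarted with the new variable, a quick sanity check confirms it is visible to the app's processes:

```bash
docker exec postiz printenv BACKEND_INTERNAL_URL
# Expected output: http://localhost:3000
```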

3. Establishing Proxy Trust

Because the request passes through Cloudflare → Caddy → Postiz, the app needs to know it can trust the forwarded client IP and protocol headers (such as X-Forwarded-For) set by those proxies. Without this, the app's security layer drops the connection.

Fix: Added TRUST_PROXY="true"
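
For reference, here is a minimal, hypothetical Caddyfile for this chain. It assumes Caddy runs on the Docker host and forwards to the published port 5001 from the final command below:

```caddy
www.mydomain.com {
    # Caddy's reverse_proxy sets X-Forwarded-For and X-Forwarded-Proto
    # automatically; TRUST_PROXY tells Postiz to honor those headers.
    reverse_proxy localhost:5001
}
```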

The Final Configuration

The working solution required a robust docker run command that explicitly defined these internal relationships:

```bash
docker run -d --name postiz --network postiz-network -p 5001:5000 \
  -e DATABASE_URL="postgresql://user:pass@postiz-db:5432/postiz" \
  -e MAIN_URL="https://www.mydomain.com" \
  -e NEXT_PUBLIC_BACKEND_URL="https://www.mydomain.com/api" \
  -e BACKEND_INTERNAL_URL="http://localhost:3000" \
  -e TRUST_PROXY="true" \
  -e STORAGE_PROVIDER="local" \
  -e UPLOAD_DIRECTORY="/uploads" \
  ghcr.io/gitroomhq/postiz-app:latest
```
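
For completeness, a sketch of the companion network and database container this command assumes (values mirror the DATABASE_URL above; harden the credentials in a real deployment):

```bash
docker network create postiz-network

docker run -d --name postiz-db --network postiz-network \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=pass \
  -e POSTGRES_DB=postiz \
  postgres:16-alpine
```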

Summary

When self-hosting monorepo apps, localhost is your friend. By pointing the frontend to the backend's internal port and ensuring the database schema is manually synced, we cleared the path for stable operation behind multiple proxy layers.

The key takeaway: internal container communication should never leave the host. Configure your applications to use localhost for inter-process communication, and reserve your public URLs for external client access only.

Joel Hansen

Joel Hansen is a full-stack problem-solver who spends his days crafting Angular front ends, taming complex Node backends, and bending C# to his will. By night, he moonlights as an amateur sleuth, known for unraveling mysteries from puzzling codebases to actual real-world oddities.