The architectural difference in one paragraph
Puppeteer drives a real Chromium binary. Every PDF render starts by booting a 200–800 MB browser process, loading the page, running the page’s JavaScript, executing the layout engine, then capturing the printed output. That pipeline is brilliantly general — anything the browser can render, Puppeteer can turn into a PDF — but it pays for that generality every time, on every render.
gPdf takes a different shape. The input is a structured DocumentRequest JSON: a list of pages, each with positioned elements, tables, layers, watermarks. There’s no HTML, no CSS cascade, no JavaScript engine, no browser. A Rust core compiled to WebAssembly turns the JSON directly into a PDF byte stream. The whole renderer fits in a Cloudflare Workers isolate, with a ~12 ms cold start and a ~3 ms p50 render time.
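To make that input shape concrete, here is a rough sketch of what a DocumentRequest might look like. Every field name below is an illustrative guess based on the description above (pages, positioned elements, tables, watermarks), not the actual gPdf schema — the API reference has the real one.

```typescript
// Hypothetical DocumentRequest shape — field names are illustrative,
// not the real gPdf schema.
const documentRequest = {
  pages: [
    {
      size: 'A4',
      elements: [
        // Positioned elements: coordinates instead of a CSS cascade.
        { type: 'text', x: 40, y: 60, value: 'Invoice #1042' },
        {
          type: 'table',
          x: 40,
          y: 120,
          rows: [
            ['Item', 'Qty', 'Price'],
            ['Widget', '3', '12.00'],
          ],
        },
      ],
      watermark: { text: 'PAID', opacity: 0.1 },
    },
  ],
};

// The renderer consumes plain JSON, so serialising is the whole job:
const body = JSON.stringify(documentRequest);
```

The point of the shape is that layout is fully determined by the data: no script has to run before the renderer knows where everything goes.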
The result is two products that nominally produce the same artefact but have almost no overlap in the workloads they’re good at.
When the architectural cost actually shows up
Puppeteer is fine when:
- You render a few hundred PDFs a day.
- Your latency budget is north of 500 ms anyway (a download link, a backend job).
- You already have HTML that renders correctly and reauthoring as JSON is expensive.
The architectural cost compounds once you cross roughly 10K renders per day:
- Cold-start latency under spike. A 3× traffic burst spins up new containers; each cold-starts in 1.5–2.5 s. Your p99 follows.
- Per-render compute. At 300 ms of Chromium time per render, on a host costing $0.40+/hour, you pay for a full browser boot-and-layout cycle no matter how simple the document is.
- Memory pressure. Chromium leaks. Long-running Puppeteer workers reliably OOM after ~24 hours unless you recycle them.
- Region distribution. A centralised deploy means a Sydney user waits 200 ms each direction over the Pacific. Edge rendering cuts that to ~5 ms.
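The memory-pressure point is usually mitigated by recycling the browser after a fixed number of renders rather than letting it run until it OOMs. A minimal count-based recycler looks like this — the create/destroy calls are stubbed here so the sketch is self-contained; in a real Puppeteer worker they would be `puppeteer.launch()` and `browser.close()`:

```typescript
// Recycle a long-lived resource after maxUses uses — the standard
// mitigation for Chromium's slow memory leak under Puppeteer.
class RecyclingPool<T> {
  private current: T | null = null;
  private uses = 0;

  constructor(
    private readonly create: () => Promise<T>,
    private readonly destroy: (resource: T) => Promise<void>,
    private readonly maxUses: number,
  ) {}

  async acquire(): Promise<T> {
    let resource = this.current;
    if (resource === null || this.uses >= this.maxUses) {
      if (resource !== null) await this.destroy(resource); // proactive recycle, not OOM
      resource = await this.create(); // pay the launch cost here, on our schedule
      this.current = resource;
      this.uses = 0;
    }
    this.uses++;
    return resource;
  }
}

// Stub usage — swap the stubs for puppeteer.launch() / browser.close().
let launches = 0;
const pool = new RecyclingPool(
  async () => ({ id: ++launches }),
  async () => {},
  100, // recycle every 100 renders
);

async function demo() {
  for (let i = 0; i < 250; i++) await pool.acquire();
  return launches;
}
```

Recycling trades a predictable, scheduled launch cost for an unpredictable OOM; it does not remove the cost, which is the broader point of this section.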
These problems aren’t hypothetical. They’re why every team scaling Puppeteer past low five figures of daily renders eventually does at least one of the following: add a cache layer, add a background-render queue, switch to a different runtime, or move to a JSON-native renderer.
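Of those mitigations, the cache layer is usually the cheapest to add: key the rendered PDF by a content hash of the render input, so identical requests never hit the renderer twice. A sketch using Node's `crypto` — the store is a plain `Map` here for illustration; in production it would be Redis, object storage, or similar:

```typescript
import { createHash } from 'node:crypto';

// Recursively sort object keys so the hash is insensitive to
// property order in the request object.
function canonical(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(canonical);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.keys(value as object)
        .sort()
        .map((k) => [k, canonical((value as Record<string, unknown>)[k])] as [string, unknown]),
    );
  }
  return value;
}

function cacheKey(input: Record<string, unknown>): string {
  return createHash('sha256').update(JSON.stringify(canonical(input))).digest('hex');
}

const cache = new Map<string, Uint8Array>();

async function renderCached(
  input: Record<string, unknown>,
  render: (input: Record<string, unknown>) => Promise<Uint8Array>,
): Promise<Uint8Array> {
  const key = cacheKey(input);
  const hit = cache.get(key);
  if (hit) return hit; // cache hit: no browser boot at all
  const pdf = await render(input); // cache miss: pay for exactly one render
  cache.set(key, pdf);
  return pdf;
}
```

A cache only helps when inputs repeat, which is why it buys time for invoice-like workloads but does nothing for fully personalised documents.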
When Puppeteer is still the right answer
There’s a category gPdf doesn’t compete in: arbitrary HTML→PDF conversion. If your document is already rendered, your design source of truth is the HTML, and you have no incentive to model the page structurally as JSON, Puppeteer remains the correct tool. The same applies to client-side-rendered visualisations (charts, dashboards) that need a JS runtime to produce their final look.
If you’re doing either of those things at small scale, the latency and cost arguments above don’t bite hard enough to justify rewriting your authoring model.
Migration shape
For teams moving an invoice or label workload from Puppeteer to gPdf, the migration usually looks like:
```diff
- // Before: render an HTML template through Chromium
- const browser = await puppeteer.launch({ headless: 'new' });
- const page = await browser.newPage();
- await page.setContent(invoiceHtml);
- const pdf = await page.pdf({ format: 'A4' });
- await browser.close();
+ // After: POST the structured DocumentRequest
+ const res = await fetch('https://api.gpdf.com/api/v1/template-render', {
+   method: 'POST',
+   headers: { Authorization: `Bearer ${KEY}`, 'Content-Type': 'application/json' },
+   body: JSON.stringify({ template_id: 'invoice-v2', data }),
+ });
+ const pdf = Buffer.from(await res.arrayBuffer());
```
The work isn’t the API call — it’s authoring the template once. After that, every render call is a single HTTPS POST.
See also
- The full gPdf API reference — endpoints, request shape, errors.
- Why edge-deployed PDF rendering matters once you cross 10K invoices/day — the long-form latency math.
- PDF/A and Factur-X explained for engineers — relevant if EU e-invoice mandates apply to your workload.