How to Take Screenshots with Puppeteer in 2026: Complete Guide

Learn how to capture website screenshots with Puppeteer in Node.js. Covers full-page capture, device emulation, PDF generation, and a simpler API alternative.

SnapRender Team

Puppeteer is Google's official Node.js library for controlling headless Chrome. It's the most popular tool for capturing website screenshots programmatically because it renders pages exactly like a real browser, including JavaScript, CSS animations, and dynamic content.

In this guide, we cover everything you need to know about taking screenshots with Puppeteer, from basic captures to advanced techniques like full-page screenshots, device emulation, and handling cookie banners. We also show a simpler alternative if you don't want to manage a headless browser yourself.

Prerequisites

You need Node.js 18+ installed. Then install Puppeteer:

npm install puppeteer

This downloads Chromium automatically (~170MB). If you want to use your system's Chrome installation instead:

npm install puppeteer-core
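
If you go the puppeteer-core route, no Chromium is downloaded, so you have to tell Puppeteer where your browser lives. A minimal sketch — the executablePath below is an assumption, adjust it for your OS and installation:

```javascript
import puppeteer from 'puppeteer-core';

// puppeteer-core ships no browser; point it at an existing Chrome binary.
// Typical paths: /usr/bin/google-chrome (Linux),
// /Applications/Google Chrome.app/Contents/MacOS/Google Chrome (macOS)
const browser = await puppeteer.launch({
  executablePath: '/usr/bin/google-chrome',
});
```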

Basic Screenshot

The simplest Puppeteer screenshot captures whatever is visible in the viewport:

import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.goto('https://example.com', {
  waitUntil: 'networkidle2'
});

await page.screenshot({ path: 'screenshot.png' });

await browser.close();

waitUntil: 'networkidle2' tells Puppeteer to wait until there are no more than 2 active network connections for 500ms. This ensures most dynamic content has finished loading.
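
If networkidle2 fires too early or too late for a particular site, goto accepts other waitUntil strategies. Roughly, in increasing strictness:

```javascript
// 'domcontentloaded' — DOM parsed; fastest, but dynamic content may be missing
// 'load'             — the load event fired (images and stylesheets done)
// 'networkidle2'     — at most 2 connections for 500ms (good default)
// 'networkidle0'     — zero connections for 500ms (strictest; can hang on
//                      pages with long-polling or analytics beacons)
await page.goto('https://example.com', { waitUntil: 'networkidle0' });
```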

Full-Page Screenshots

By default, Puppeteer only captures the visible viewport. To capture the entire scrollable page:

await page.screenshot({
  path: 'fullpage.png',
  fullPage: true
});

This stitches together the full page into a single image. Be cautious with very long pages: the resulting file can be several megabytes, and pages taller than Chromium's maximum texture size (around 16,384 pixels) may come out clipped or partially blank.

Setting Viewport Size

Control the browser window dimensions before capturing:

await page.setViewport({
  width: 1920,
  height: 1080,
  deviceScaleFactor: 2  // Retina-quality capture
});

await page.goto('https://example.com', { waitUntil: 'networkidle2' });
await page.screenshot({ path: 'hd-screenshot.png' });

deviceScaleFactor: 2 renders at 2x resolution, giving you crisp Retina-quality images. The output image will be 3840x2160 pixels.

Output Formats: PNG, JPEG, and WebP

Puppeteer supports three image formats:

// PNG — lossless, larger file size, supports transparency
await page.screenshot({ path: 'output.png', type: 'png' });

// JPEG — lossy, smaller file size, configurable quality
await page.screenshot({ path: 'output.jpg', type: 'jpeg', quality: 85 });

// WebP — best compression, good quality
await page.screenshot({ path: 'output.webp', type: 'webp', quality: 80 });

When to use which:

  • PNG: When you need pixel-perfect quality or transparency (logos, UI captures)
  • JPEG: When file size matters and the content is photographic
  • WebP: Best balance of quality and size for web display
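
You also don't have to write to disk at all. If you omit path, screenshot() resolves with the image data, which is convenient when the image goes straight into an HTTP response or an upload. A sketch:

```javascript
// Omitting `path` returns the image data instead of writing a file
// (a Buffer in older Puppeteer versions, a Uint8Array in newer ones)
const data = await page.screenshot({ type: 'jpeg', quality: 85 });

// Or request a base64 string directly — handy for JSON APIs and data URIs
const b64 = await page.screenshot({ encoding: 'base64' });
const dataUri = `data:image/png;base64,${b64}`;
```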

Generating PDFs

Puppeteer can also generate PDF documents from web pages:

await page.pdf({
  path: 'page.pdf',
  format: 'A4',
  printBackground: true,
  margin: {
    top: '1cm',
    right: '1cm',
    bottom: '1cm',
    left: '1cm'
  }
});

printBackground: true is important. Without it, background colors and images are stripped out.
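
page.pdf() also supports printed headers and footers via HTML templates, with special CSS classes that Chromium substitutes at print time. A sketch — the template markup here is illustrative:

```javascript
await page.pdf({
  path: 'report.pdf',
  format: 'A4',
  printBackground: true,
  displayHeaderFooter: true,
  // Templates must carry their own inline styles; page styles don't apply
  headerTemplate: '<span style="font-size:10px; margin-left:1cm;">My Report</span>',
  footerTemplate:
    '<span style="font-size:10px; margin-left:1cm;">' +
    'Page <span class="pageNumber"></span> of <span class="totalPages"></span></span>',
  // Leave room for the header/footer, or they overlap the content
  margin: { top: '2cm', bottom: '2cm', left: '1cm', right: '1cm' },
});
```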

Mobile Device Emulation

Puppeteer includes built-in device presets for emulating mobile screenshots:

import puppeteer, { KnownDevices } from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

const iPhone = KnownDevices['iPhone 15 Pro'];
await page.emulate(iPhone);

await page.goto('https://example.com', { waitUntil: 'networkidle2' });
await page.screenshot({ path: 'mobile.png' });

await browser.close();

This sets the correct viewport size, device scale factor, and user agent string. The page renders its mobile layout just as it would on a real device.
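
If a device you need isn't in KnownDevices, you can build the same emulation by hand from a viewport and a user agent string. The numbers below are illustrative, not an official device profile:

```javascript
// Manual equivalent of page.emulate() for a device without a preset
await page.setViewport({
  width: 393,
  height: 852,
  deviceScaleFactor: 3,
  isMobile: true,   // use the mobile layout viewport
  hasTouch: true,   // report touch support to the page
});

await page.setUserAgent(
  'Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) ' +
  'AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.0 Mobile/15E148 Safari/604.1'
);
```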

Dark Mode Screenshots

Capture pages in dark mode by emulating the prefers-color-scheme media feature:

await page.emulateMediaFeatures([
  { name: 'prefers-color-scheme', value: 'dark' }
]);

await page.goto('https://example.com', { waitUntil: 'networkidle2' });
await page.screenshot({ path: 'dark-mode.png' });

This only works on sites that implement dark mode via CSS prefers-color-scheme media queries.
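
A related trick for more deterministic captures is emulating prefers-reduced-motion, so sites that respect it disable animations and you don't screenshot a mid-transition frame. Like dark mode, this only helps on sites that implement the media query:

```javascript
// Ask the page to suppress animations before capturing
await page.emulateMediaFeatures([
  { name: 'prefers-reduced-motion', value: 'reduce' }
]);

await page.goto('https://example.com', { waitUntil: 'networkidle2' });
await page.screenshot({ path: 'no-motion.png' });
```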

Handling Cookie Banners

One of the most common annoyances in automated screenshots is cookie consent banners. You can attempt to dismiss them like this:

await page.goto('https://example.com', { waitUntil: 'networkidle2' });

// Try to click common "Accept" buttons
const selectors = [
  '[id*="accept"]',
  '[class*="accept"]',
  '[id*="consent"] button',
  '.cookie-banner button',
  '#onetrust-accept-btn-handler'
];

for (const selector of selectors) {
  try {
    await page.click(selector);
    break;
  } catch {
    // Selector not found, try next
  }
}

// Wait for banner to disappear
await new Promise(resolve => setTimeout(resolve, 1000));
await page.screenshot({ path: 'clean.png' });

This approach is fragile because every website uses different cookie banner implementations. You'll constantly need to update selectors as sites change their banners.
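
An alternative to clicking is to hide likely banner containers with injected CSS before capturing. The selectors here are guesses and will need per-site tuning just like the click approach, but it avoids waiting for click handlers and banner-dismiss animations:

```javascript
// Hide anything that looks like a consent banner instead of interacting with it
await page.addStyleTag({
  content: `
    [id*="cookie"], [class*="cookie"],
    [id*="consent"], [class*="consent"] {
      display: none !important;
    }
  `
});

await page.screenshot({ path: 'clean.png' });
```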

Blocking Ads

Remove ads before capturing by intercepting network requests:

await page.setRequestInterception(true);

page.on('request', (request) => {
  const url = request.url();
  const blocklist = [
    'googlesyndication.com',
    'doubleclick.net',
    'adservice.google.com',
    'facebook.com/tr',
    'analytics.google.com'
  ];

  if (blocklist.some(domain => url.includes(domain))) {
    request.abort();
  } else {
    request.continue();
  }
});

await page.goto('https://example.com', { waitUntil: 'networkidle2' });
await page.screenshot({ path: 'no-ads.png' });

Maintaining a comprehensive ad domain blocklist is a significant ongoing effort. The list above barely scratches the surface.
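
Rather than hand-maintaining a list, the community-maintained puppeteer-extra ecosystem offers an adblocker plugin backed by EasyList-style filter lists. It's a separate project, not part of Puppeteer itself; a sketch:

```javascript
// Requires: npm install puppeteer-extra puppeteer-extra-plugin-adblocker
import puppeteer from 'puppeteer-extra';
import AdblockerPlugin from 'puppeteer-extra-plugin-adblocker';

// Filter lists are fetched and cached by the plugin
puppeteer.use(AdblockerPlugin({ blockTrackers: true }));

const browser = await puppeteer.launch();
```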

Waiting for Dynamic Content

For JavaScript-heavy pages (SPAs, React apps), you may need to wait for specific elements:

await page.goto('https://example.com', { waitUntil: 'networkidle2' });

// Wait for a specific element to appear
await page.waitForSelector('.main-content', { timeout: 10000 });

// Or wait for a specific amount of time after load
await new Promise(resolve => setTimeout(resolve, 2000));

await page.screenshot({ path: 'dynamic.png' });
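
waitForSelector covers most cases, but waitForFunction lets you wait on arbitrary in-page conditions. In the sketch below, window.appReady and .result-item are hypothetical names — substitute whatever your app actually exposes:

```javascript
// Wait until the app itself signals readiness (hypothetical flag)
await page.waitForFunction(() => window.appReady === true, { timeout: 10000 });

// Or wait until a list has rendered at least one item
await page.waitForFunction(
  () => document.querySelectorAll('.result-item').length > 0
);
```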

Clipping a Specific Area

Capture just a portion of the page:

await page.screenshot({
  path: 'cropped.png',
  clip: {
    x: 0,
    y: 0,
    width: 800,
    height: 600
  }
});
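
If you want one element rather than fixed coordinates, you can screenshot the element handle directly and let Puppeteer compute the bounding box. The #pricing-table selector is a placeholder:

```javascript
// Capture a single element; Puppeteer derives the clip region itself
const element = await page.waitForSelector('#pricing-table');
await element.screenshot({ path: 'element.png' });
```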

Handling Errors and Timeouts

Production Puppeteer code needs error handling:

const browser = await puppeteer.launch();
const page = await browser.newPage();

page.setDefaultNavigationTimeout(30000);

try {
  const response = await page.goto('https://example.com', {
    waitUntil: 'networkidle2',
    timeout: 30000
  });

  if (!response || response.status() >= 400) {
    throw new Error(`Page returned status ${response?.status()}`);
  }

  await page.screenshot({ path: 'output.png' });
} catch (error) {
  console.error('Screenshot failed:', error.message);
} finally {
  await browser.close();
}

The Hard Part: Running Puppeteer in Production

Taking screenshots locally with Puppeteer is simple enough. Running it in production is where things get complicated:

Memory management. Each Chromium tab uses 50-300MB of RAM. Under load, you need a browser pool with concurrency limits to prevent your server from running out of memory.

Crash recovery. Chromium crashes. In production, you need to detect disconnected browsers and restart them automatically.

Security. If users provide URLs, you need SSRF protection to block requests to localhost, private IP ranges, cloud metadata endpoints (169.254.169.254), and internal services.

Scaling. A single Chromium instance handles 10-20 concurrent pages before degrading. Beyond that, you need multiple browser instances or a queue system.

Infrastructure. Chromium needs shared memory (/dev/shm), specific system libraries, and careful Docker configuration. Getting all of this right takes significant engineering effort.
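
To give a flavor of what the SSRF protection involves, here is a minimal sketch of a hostname check to run before handing a user-supplied URL to Puppeteer. It is deliberately incomplete — a real guard must also resolve DNS and re-check the resolved IP (hostnames can point anywhere), handle IPv6, and block redirects into private ranges:

```javascript
// Hypothetical SSRF guard: reject hosts that are local or in a
// private/reserved IPv4 range (sketch only — see caveats above)
function isPrivateHost(hostname) {
  if (hostname === 'localhost' || hostname === '0.0.0.0') return true;

  const octets = hostname.split('.').map(Number);
  if (octets.length !== 4 || octets.some(Number.isNaN)) return false; // not an IPv4 literal

  const [a, b] = octets;
  return (
    a === 127 ||                         // loopback
    a === 10 ||                          // private
    (a === 172 && b >= 16 && b <= 31) || // private
    (a === 192 && b === 168) ||          // private
    (a === 169 && b === 254)             // link-local / cloud metadata
  );
}

// Example: block the cloud metadata endpoint mentioned above
console.log(isPrivateHost('169.254.169.254')); // true
console.log(isPrivateHost('example.com'));     // false
```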

The Easier Alternative: Use a Screenshot API

If you don't want to manage Puppeteer infrastructure, a screenshot API handles all of the above for you. The examples below show how to capture the same screenshots using SnapRender:

Using cURL:

curl "https://app.snap-render.com/v1/screenshot?url=https://example.com&format=png&width=1920&height=1080" \
  -H "X-API-Key: YOUR_API_KEY" \
  --output screenshot.png

Using the Node.js SDK:

npm install snaprender

import { SnapRender } from 'snaprender';

const client = new SnapRender('YOUR_API_KEY');

// Basic screenshot
const buffer = await client.screenshot({ url: 'https://example.com' });

// Full-page, dark mode, no cookie banners
const buffer2 = await client.screenshot({
  url: 'https://example.com',
  fullPage: true,
  darkMode: true,
  blockCookieBanners: true,
  blockAds: true,
  device: 'iphone_15_pro'
});

One API call replaces 50+ lines of Puppeteer code, browser pool management, SSRF protection, error handling, and infrastructure maintenance.

SnapRender offers 500 free screenshots per month with no credit card required. If you're evaluating options, it takes about 30 seconds to sign up and get an API key.

Summary

Approach              | Setup Time    | Ongoing Effort                            | Cost
Self-hosted Puppeteer | Hours to days | High (memory, crashes, security, scaling) | Server costs + engineering time
Screenshot API        | Minutes       | None                                      | $0-29/mo for most use cases

Puppeteer is an excellent tool for local development, testing, and automation. For production screenshot workflows, a managed API eliminates the operational complexity and lets you focus on building your product.

Try SnapRender Free

500 free screenshots/month, no credit card required.

Sign up free