Web Archiving
Screenshot & PDF Preservation via API
Capture timestamped screenshots and PDFs of any web page for legal compliance, competitive intelligence, and content preservation. Full JavaScript rendering, scheduled captures, and complete page archival in one API call.
Web pages are ephemeral
A competitor changes their pricing page. A client's terms of service update overnight. A regulatory body removes a policy document. Once it's gone, it's gone — unless you archived it.
SnapRender makes web archiving as simple as an API call. Capture any URL as a pixel-perfect screenshot or a searchable PDF. Schedule daily, weekly, or on-demand captures. Build a timestamped archive that proves exactly what a page looked like at any point in time.
Who archives the web?
Legal & compliance
Capture terms of service, privacy policies, and regulatory pages as PDF evidence. Timestamped records hold up in legal proceedings and audits.
Competitive intelligence
Track competitor pricing pages, feature lists, and marketing messaging over time. See exactly when and how they change positioning.
Content preservation
Archive blog posts, news articles, and research sources before they disappear. Journalists, researchers, and academics rely on permanent records.
Brand monitoring
Capture how your brand appears across partner sites, directories, and review platforms. Ensure consistent representation everywhere.
Archive pages with screenshots + PDFs
import os
import requests
from datetime import datetime

API_KEY = 'YOUR_KEY'
HEADERS = {'Authorization': f'Bearer {API_KEY}'}
BASE = 'https://api.snaprender.dev/v1'

urls_to_archive = [
    'https://competitor.com/pricing',
    'https://competitor.com/features',
    'https://example.com/terms-of-service',
]

timestamp = datetime.now().strftime('%Y-%m-%d_%H%M')
os.makedirs('archive', exist_ok=True)  # ensure the output directory exists

for url in urls_to_archive:
    slug = url.split('//')[1].replace('/', '_')

    # Capture screenshot
    resp = requests.post(f'{BASE}/screenshot', headers=HEADERS,
                         json={'url': url, 'width': 1280, 'full_page': True,
                               'format': 'png'})
    with open(f'archive/{timestamp}_{slug}.png', 'wb') as f:
        f.write(resp.content)

    # Capture PDF for legal records
    resp = requests.post(f'{BASE}/pdf', headers=HEADERS,
                         json={'url': url, 'format': 'A4',
                               'print_background': True})
    with open(f'archive/{timestamp}_{slug}.pdf', 'wb') as f:
        f.write(resp.content)

    print(f'Archived: {url}')

import fs from 'fs';
const API = 'https://api.snaprender.dev/v1';
const HEADERS = {
  'Authorization': 'Bearer YOUR_KEY',
  'Content-Type': 'application/json'
};

const urls = [
  'https://competitor.com/pricing',
  'https://example.com/terms-of-service',
];

// e.g. 2026-01-15T0930 — strip the colon so the name is filesystem-safe
const timestamp = new Date().toISOString().slice(0, 16).replace(':', '');

for (const url of urls) {
  // Include the hostname so slugs from different domains can't collide
  const { hostname, pathname } = new URL(url);
  const slug = (hostname + pathname).replace(/\//g, '_');

  // Screenshot archive
  const png = await fetch(`${API}/screenshot`, {
    method: 'POST', headers: HEADERS,
    body: JSON.stringify({
      url, width: 1280, full_page: true, format: 'png'
    })
  });
  fs.writeFileSync(
    `archive/${timestamp}_${slug}.png`,
    Buffer.from(await png.arrayBuffer())
  );

  // PDF archive for legal compliance
  const pdf = await fetch(`${API}/pdf`, {
    method: 'POST', headers: HEADERS,
    body: JSON.stringify({
      url, format: 'A4', print_background: true
    })
  });
  fs.writeFileSync(
    `archive/${timestamp}_${slug}.pdf`,
    Buffer.from(await pdf.arrayBuffer())
  );

  console.log(`Archived: ${url}`);
}

Three steps to web archiving
Define your archive targets
List the URLs you want to monitor — competitor pricing pages, legal documents, news articles, or any web content that matters to your business.
Capture screenshots + PDFs
Call the /screenshot endpoint for visual records and /pdf for searchable text documents. Include timestamps in filenames for easy organization.
Schedule and store
Run captures on a cron schedule (daily, weekly, or on events). Store results in S3, R2, or your own infrastructure with your preferred retention policy.
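The retention side of this step is entirely in your hands since SnapRender returns files rather than storing them. As one sketch of a retention policy, a small script run from the same cron job can prune archive files past a cutoff; the `archive` directory and the 90-day window used in the test are assumptions for illustration, not part of the API:

```python
import time
from pathlib import Path

def prune_archive(directory: str, max_age_days: int) -> list[str]:
    """Delete archive files older than max_age_days; return the names removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(directory).glob('*'):
        # Compare each file's modification time against the cutoff
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

For legal-compliance archives, check your applicable retention requirements before pruning anything; many regimes require records to be kept for years.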
Simple pricing for archiving
Each screenshot or PDF counts as one request. Archiving 10 URLs once daily = 300 requests/month; capture both a screenshot and a PDF of each URL, as in the example above, and budget 600.
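Since volume is a straight product of URLs, frequency, and formats, it is easy to budget ahead of time. A tiny helper (the function name is illustrative, not part of the API) makes the arithmetic explicit:

```python
def monthly_requests(url_count: int, captures_per_day: int,
                     formats_per_capture: int = 1, days: int = 30) -> int:
    """Each screenshot or PDF is one request, so volume is a simple product."""
    return url_count * captures_per_day * formats_per_capture * days

# 10 URLs, once a day, screenshot only
print(monthly_requests(10, 1))                          # 300
# Same 10 URLs, capturing both a PNG and a PDF of each
print(monthly_requests(10, 1, formats_per_capture=2))   # 600
```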
Frequently asked questions
What is web archiving?
Web archiving is the practice of capturing and preserving web content — pages, images, layouts — as they appear at a specific point in time. It's used for legal compliance, competitive intelligence, content preservation, and regulatory requirements.
How does SnapRender help with web archiving?
SnapRender captures pixel-perfect screenshots and full-page PDFs of any URL via API. Schedule recurring captures to build a timestamped archive of how pages looked over time — complete with visual evidence and PDF records.
Can I save any web page as a PDF?
Yes. Use the /v1/pdf endpoint with any URL to generate a complete PDF document. PDFs are ideal for legal records because they preserve the full page content, are widely accepted in court, and include text that can be searched.
How often can I capture pages?
As often as you need. Many users set up daily cron jobs for compliance monitoring, weekly captures for competitive analysis, or on-demand captures triggered by content changes. Each capture counts as one API request.
Does it work on JavaScript-heavy pages?
SnapRender uses a full headless browser, so it renders JavaScript, loads async content, waits for network idle, and captures the page exactly as a user would see it. Single-page apps, dynamic dashboards, and JS-heavy sites all work.
Where are the archives stored?
SnapRender returns the screenshot/PDF in the API response — you store it wherever you want (S3, R2, local filesystem, database). You control retention, naming, and organization.
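The on-demand captures "triggered by content changes" mentioned above can be driven by a simple content hash: fetch the page, hash the body, and only call the capture endpoints when the hash differs from the previous run. A minimal sketch, where the state file name and helper are assumptions rather than anything SnapRender provides:

```python
import hashlib
import json
from pathlib import Path

STATE_FILE = Path('archive_state.json')  # assumed location for last-seen hashes

def content_changed(url: str, body: bytes) -> bool:
    """Return True (and record the new hash) if the page body changed."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    digest = hashlib.sha256(body).hexdigest()
    if state.get(url) == digest:
        return False  # unchanged: skip the capture and save a request
    state[url] = digest
    STATE_FILE.write_text(json.dumps(state))
    return True
```

In a monitoring script you would gate the /screenshot and /pdf calls on this check, e.g. `if content_changed(url, requests.get(url).content): ...`, so quiet pages cost nothing between changes.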
Preserve the web before it changes.
Timestamped screenshots and PDFs via API. Start free with 100 captures/month.
Start Free — 100 requests/month