Tutorial

How to Scrape Google Maps Business Data in 2026

10 min read

Google Maps is the largest directory of local businesses on the planet. Scraping it gives you access to business names, ratings, reviews, phone numbers, addresses, and websites — the building blocks of lead generation, competitor analysis, and market research. Here is how to do it in 2026.

Why scrape Google Maps?

Google Maps holds structured data on over 200 million businesses worldwide. Here are the most common reasons developers and businesses scrape it:

1. Lead generation

Build targeted prospect lists with business name, phone, website, and address. Filter by rating and review count to find quality leads.

2. Competitor analysis

Monitor competitor locations, ratings, and review velocity. Identify underserved areas where demand exists but supply is weak.

3. Market research

Analyze business density, pricing signals, and customer sentiment across geographies to inform expansion or investment decisions.

What data can you extract?

Each Google Maps business listing contains a rich set of structured data:

Business name
Star rating (1-5)
Total review count
Full street address
Phone number
Website URL
Business category
Opening hours
Price level ($-$$$$)
Plus Code / coordinates
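These fields map naturally onto a small record type for downstream storage and filtering. Here is a minimal sketch in Python; the field names are our own choice, not an official Google Maps schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BusinessListing:
    """One Google Maps listing; every field except the name may be missing."""
    name: str
    rating: Optional[float] = None        # 1.0 to 5.0
    review_count: Optional[int] = None
    address: Optional[str] = None
    phone: Optional[str] = None
    website: Optional[str] = None
    category: Optional[str] = None
    opening_hours: Optional[str] = None
    price_level: Optional[str] = None     # "$" through "$$$$"
    plus_code: Optional[str] = None       # Plus Code or lat/lng string

# Partially populated records are the norm when scraping
listing = BusinessListing(
    name="Austin Premier Plumbing", rating=4.8, review_count=312
)
```

Making most fields optional matters in practice: many listings have no website or price level, and your pipeline should not fail on them.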

Why Google Maps is hard to scrape

Google Maps is not a traditional website. It is a JavaScript-heavy single-page application that loads data dynamically. A simple HTTP request returns an empty shell — you need a real browser to render the content.

Challenges

  • The entire UI is JavaScript-rendered — no data in the initial HTML
  • Search results load lazily as you scroll the sidebar panel
  • Google detects and blocks automated browsers with CAPTCHAs
  • DOM structure changes frequently — CSS selectors break without warning
  • Rate limiting kicks in fast on repeated queries from the same IP
  • No official API for bulk search results (Places API is per-lookup, $17/1K requests)

Method 1: DIY with Puppeteer

The classic approach: launch a headless browser, navigate to a Google Maps search, scroll the results panel, and extract data from the DOM. Here is a working example:

scraper.js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    headless: 'new',
    args: ['--no-sandbox'],
  });
  const page = await browser.newPage();

  // Search for plumbers in Austin
  await page.goto(
    'https://www.google.com/maps/search/plumber+in+Austin+TX',
    { waitUntil: 'networkidle2', timeout: 30000 }
  );

  // Wait for results to render
  await page.waitForSelector('[role="feed"]', { timeout: 15000 });

  // Scroll the results panel to load more
  for (let i = 0; i < 5; i++) {
    await page.evaluate(() => {
      const feed = document.querySelector('[role="feed"]');
      if (feed) feed.scrollTop = feed.scrollHeight;
    });
    await new Promise(r => setTimeout(r, 2000));
  }

  // Extract business names and ratings
  const businesses = await page.$$eval(
    '[role="feed"] > div',
    (els) => els.map(el => ({
      name: el.querySelector('.fontHeadlineSmall')?.innerText,
      rating: el.querySelector('.fontBodyMedium span[role="img"]')
        ?.getAttribute('aria-label'),
      address: el.querySelector('.fontBodyMedium > div:nth-child(4)')
        ?.innerText,
    })).filter(b => b.name)
  );

  console.log(businesses);
  await browser.close();
})();

This works for small-scale scraping, but it breaks constantly. Google Maps changes its DOM structure frequently, the scroll-to-load pattern is fragile, and you will get CAPTCHA-blocked within a few dozen searches per IP. At scale, you are managing a browser fleet instead of building your product.

Method 2: SnapRender API

Let SnapRender handle the browser rendering. The /render endpoint returns Google Maps pages as clean markdown, and /extract pulls structured fields with CSS selectors.

Step 1: Render search results as markdown

Get the full search results page as LLM-ready markdown. Perfect for AI-powered extraction or storing raw data.

search.py
import requests

# Extract business data from a Google Maps search
resp = requests.post(
    "https://api.snaprender.dev/v1/render",
    headers={"x-api-key": "sr_live_YOUR_KEY"},
    json={
        "url": "https://www.google.com/maps/search/plumber+in+Austin+TX",
        "format": "markdown",
        "wait_for": "[role='feed']",
        "timeout": 15000
    }
)
print(resp.json()["data"]["markdown"])
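From the rendered markdown you can collect the individual listing URLs to feed into the next step. A sketch, with one assumption to verify against your own output: that the /maps/place/ links survive as absolute URLs in the markdown:

```python
import re

def place_urls(markdown: str) -> list[str]:
    """Collect unique /maps/place/ links, preserving first-seen order."""
    found = re.findall(r"https://www\.google\.com/maps/place/[^\s)]+", markdown)
    return list(dict.fromkeys(found))  # de-duplicate, keep order

# Toy markdown standing in for a real /render response
sample = (
    "[Austin Premier Plumbing](https://www.google.com/maps/place/Austin+Premier+Plumbing) "
    "[Budget Rooter](https://www.google.com/maps/place/Budget+Rooter)"
)
print(place_urls(sample))
```

De-duplicating matters because the same listing can appear more than once as the results panel re-renders during scrolling.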

Step 2: Extract structured data from a business listing

Once you have the business URL, use CSS selectors to pull exactly the fields you need. Returns clean JSON.

extract.py
import requests

# Extract structured data from a single business listing
resp = requests.post(
    "https://api.snaprender.dev/v1/extract",
    headers={"x-api-key": "sr_live_YOUR_KEY"},
    json={
        "url": "https://www.google.com/maps/place/Example+Plumbing",
        "selectors": {
            "name": "h1.fontHeadlineLarge",
            "rating": "div.fontBodyMedium span[role='img']",
            "review_count": "button[jsaction*='review'] span",
            "address": "button[data-item-id='address'] div.fontBodyMedium",
            "phone": "button[data-item-id*='phone'] div.fontBodyMedium",
            "website": "a[data-item-id='authority'] div.fontBodyMedium"
        }
    }
)
print(resp.json())

Example response

response.json
{
  "status": "success",
  "data": {
    "name": "Austin Premier Plumbing",
    "rating": "4.8 stars",
    "review_count": "312 reviews",
    "address": "2401 S Congress Ave, Austin, TX 78704",
    "phone": "(512) 555-0147",
    "website": "austinpremierplumbing.com"
  },
  "url": "https://www.google.com/maps/place/...",
  "elapsed_ms": 3210
}
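Notice that rating and review_count come back as display strings ("4.8 stars", "312 reviews"), so you will usually want to normalize them before storing. A small helper; the parsing rules are assumptions based on the example response shown above:

```python
import re

def normalize(data: dict) -> dict:
    """Convert display strings from /extract into typed values."""
    out = dict(data)
    # "4.8 stars" -> 4.8
    m = re.search(r"\d+(\.\d+)?", data.get("rating") or "")
    out["rating"] = float(m.group()) if m else None
    # "312 reviews" or "1,204 reviews" -> 312 / 1204
    m = re.search(r"[\d,]+", data.get("review_count") or "")
    out["review_count"] = int(m.group().replace(",", "")) if m else None
    return out

raw = {
    "name": "Austin Premier Plumbing",
    "rating": "4.8 stars",
    "review_count": "312 reviews",
}
print(normalize(raw))
```

Normalizing at ingestion time keeps filters like "rating >= 4.5 and at least 100 reviews" trivial later.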

Scaling your Google Maps scraper

Whether you are building a one-off lead list or a continuous monitoring pipeline, these patterns will help you scale:

  1. Vary your search queries — "plumber Austin TX" and "plumber near downtown Austin" return different result sets. Combine multiple queries to build comprehensive coverage.
  2. Use the wait_for parameter to ensure results are fully rendered before extraction. Google Maps loads data asynchronously, so timing matters.
  3. Process business listings in parallel. SnapRender handles concurrency on the server side — fire multiple requests and collect the results.
  4. Store raw markdown alongside extracted fields. Selectors may change, but the markdown preserves all the data for re-extraction later.
  5. Rate-limit your queries to stay under the radar. Even with SnapRender handling the browser, spacing requests 2-5 seconds apart is good practice.
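Points 3 and 5 can be combined in one place: fan requests out over a thread pool while staggering submissions. A sketch under stated assumptions: the endpoint and payload follow the /extract example earlier, while the helper names (scrape_listing, run_batch) and the selector subset are our own, not part of the SnapRender API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

API_KEY = "sr_live_YOUR_KEY"  # your SnapRender key
SELECTORS = {  # subset of the selectors from the /extract example
    "name": "h1.fontHeadlineLarge",
    "phone": "button[data-item-id*='phone'] div.fontBodyMedium",
}

def scrape_listing(url: str) -> dict:
    """One /extract call for a single business listing."""
    resp = requests.post(
        "https://api.snaprender.dev/v1/extract",
        headers={"x-api-key": API_KEY},
        json={"url": url, "selectors": SELECTORS},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]

def run_batch(urls, fetch=scrape_listing, workers=5, spacing=2.0):
    """Fan out over a thread pool, staggering submissions by `spacing`
    seconds to stay inside the 2-5 second pacing suggested above."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = []
        for url in urls:
            futures.append(pool.submit(fetch, url))
            time.sleep(spacing)
        # Collect in submission order; .result() re-raises any request error
        return [f.result() for f in futures]
```

Passing fetch as a parameter also makes the batch logic easy to test with a stub before spending real API credits.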

Legal considerations

Scraping publicly available business listings is generally considered legal in the US, but you should always operate responsibly:

  1. Google's Terms of Service restrict automated access. Understand the risks before scraping at scale.
  2. Never scrape personal data — stick to public business information (name, address, phone, ratings).
  3. Rate-limit your requests. Excessive scraping that degrades service could constitute unauthorized access.
  4. The Google Maps Platform Terms of Service differ from general Google ToS — read both.
  5. Consider the Places API for use cases where ToS compliance is critical ($17 per 1,000 requests).

Start free — 100 requests/month

Get your API key in 30 seconds. Scrape Google Maps business data with a single API call. No browser fleet, no proxy bills, no CAPTCHA headaches.

Get Your API Key

Frequently asked questions

Is it legal to scrape Google Maps?

Scraping publicly visible business data is generally legal in the US after the hiQ v. LinkedIn ruling. However, Google's Terms of Service restrict automated access. Use scraped data responsibly, rate-limit your requests, and never scrape personal user data like reviews with identifying info. Consult a lawyer for your specific use case.

How many businesses can you scrape per search?

Google Maps shows a maximum of ~120 results per search query. To scrape more businesses, vary your search terms — for example, "plumber in Austin TX" then "plumber near downtown Austin." With SnapRender, each page render is one API request, so a $9/mo plan gives you 1,500 business pages.

Can you scrape phone numbers and websites?

Yes. When you open an individual business listing on Google Maps, the page displays the phone number, website URL, address, hours, and more. SnapRender's /extract endpoint can pull all of these fields using CSS selectors or return the full page as markdown for AI parsing.

How does SnapRender scrape Google Maps?

Google Maps is a heavy single-page application that requires full JavaScript execution to display business data. SnapRender renders pages in a real Chromium browser, waits for content to load, then returns the rendered HTML, markdown, or extracted data via CSS selectors. No local browser setup required.