The architecture
This rank tracker uses three components: SnapRender to render Google search results (which are JavaScript-heavy), a Python script to parse the results and find your domain, and SQLite to store position history over time.
SnapRender API
Renders Google SERPs with full JavaScript execution. Returns markdown for easy link parsing.
Python + regex
Parses the rendered markdown to extract search result links and find your domain's position.
SQLite + cron
Stores daily position data. Cron runs the check automatically every morning.
Step 1: Set up the database
Create a SQLite database to store ranking snapshots. Each row records the keyword, your domain, the position found, and the timestamp.
```python
import sqlite3
import requests
import re
from datetime import datetime

# Set up the SQLite database
db = sqlite3.connect('rankings.db')
db.execute('''
CREATE TABLE IF NOT EXISTS rankings (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    keyword TEXT NOT NULL,
    target_domain TEXT NOT NULL,
    position INTEGER,
    result_title TEXT,
    result_url TEXT,
    checked_at TEXT NOT NULL
)
''')
db.commit()
```
Step 2: Scrape and parse SERPs
For each keyword, build a Google search URL, render it with SnapRender (which handles JavaScript and Cloudflare), then parse the markdown output to find your domain's position.
```python
API_KEY = "sr_live_YOUR_KEY"
KEYWORDS = [
    "best web scraping api",
    "screenshot api",
    "html to pdf api",
    "headless browser api",
]
TARGET_DOMAIN = "snaprender.dev"

def check_ranking(keyword):
    # Build the Google search URL
    query = keyword.replace(" ", "+")
    url = "https://www.google.com/search?q=" + query + "&num=20"

    # Use SnapRender to render the search results page
    resp = requests.post(
        "https://api.snaprender.dev/v1/scrape",
        headers={"x-api-key": API_KEY},
        json={
            "url": url,
            "format": "markdown",
            "use_flaresolverr": True
        }
    )
    markdown = resp.json().get("content", "")

    # Parse results: find markdown links and match the domain
    links = re.findall(
        r'\[([^\]]+)\]\((https?://[^)]+)\)',
        markdown
    )

    position = None
    result_title = None
    result_url = None
    rank = 0
    for title, href in links:
        # Skip Google internal links
        if "google.com" in href:
            continue
        rank += 1
        if TARGET_DOMAIN in href:
            position = rank
            result_title = title
            result_url = href
            break
    return position, result_title, result_url
```
Why Markdown format?
Google's HTML is complex and changes frequently. Markdown output from SnapRender normalizes the structure into clean links ([title](url)), making regex parsing much simpler and more resilient to layout changes.
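To see why this is robust, here is the same link regex run against a toy markdown snippet. The `find_position` helper and the sample links are illustrative only; they mirror the parsing logic of the tracker above, not SnapRender's actual output:

```python
import re

def find_position(markdown, domain):
    """Return the 1-based rank of the first non-Google link matching domain."""
    links = re.findall(r'\[([^\]]+)\]\((https?://[^)]+)\)', markdown)
    rank = 0
    for _title, href in links:
        if "google.com" in href:  # skip Google's own navigation links
            continue
        rank += 1
        if domain in href:
            return rank
    return None

# Toy SERP rendered as markdown links
sample = (
    "[Images](https://www.google.com/imghp)\n"
    "[Scraper Co](https://scraper.example.com/)\n"
    "[SnapRender](https://snaprender.dev/docs)\n"
)
print(find_position(sample, "snaprender.dev"))  # -> 2 (Google link is skipped)
```

Because the regex only cares about the `[title](url)` shape, Google can reshuffle its HTML classes without breaking the parser.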
Step 3: Run and store results
Loop through all keywords, check each ranking, and store the results in SQLite.
```python
def main():
    db = sqlite3.connect('rankings.db')
    now = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    print("Rank check at " + now)
    for keyword in KEYWORDS:
        pos, title, url = check_ranking(keyword)
        db.execute(
            "INSERT INTO rankings VALUES (NULL,?,?,?,?,?,?)",
            (keyword, TARGET_DOMAIN, pos, title, url, now)
        )
        if pos:
            print("  [" + str(pos) + "] " + keyword)
        else:
            print("  [--] " + keyword + " (not in top 20)")
    db.commit()
    db.close()

if __name__ == "__main__":
    main()
```
Step 4: Schedule with cron
Set up a cron job to run the tracker daily. One check per day is enough to spot trends.
```bash
# Run daily at 6am
# Edit with: crontab -e
0 6 * * * cd /home/user/rank-tracker && python3 track.py >> /var/log/rank-tracker.log 2>&1
```
Step 5: View ranking trends
Query the database to see how your rankings change over time.
```python
import sqlite3

def show_trends(keyword, days=7):
    db = sqlite3.connect('rankings.db')
    cursor = db.cursor()
    cursor.execute('''
        SELECT position, checked_at
        FROM rankings
        WHERE keyword = ?
        ORDER BY checked_at DESC
        LIMIT ?
    ''', (keyword, days))
    results = cursor.fetchall()
    print("Ranking trend for: " + keyword)
    for pos, date in reversed(results):
        bar = "X" * (pos if pos else 0)
        label = str(pos) if pos else "--"
        print("  " + date[:10] + " | " + label.rjust(3) + " | " + bar)
    db.close()

show_trends("best web scraping api")
```
Start tracking your rankings
Get your API key in 30 seconds. 100 free requests/month — enough to track 3 keywords daily for a full month. No credit card required.
Get Your API Key
Frequently asked questions
How often should I check my rankings?
Daily is the sweet spot for most sites. Google rankings can fluctuate throughout the day, but daily snapshots are enough to spot trends. SnapRender's $9/mo plan gives you 1,500 requests — enough to track 50 keywords daily for a month.
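The quota math behind those numbers is simple: one request per keyword per check. A quick sanity check, using the plan figures quoted above:

```python
free_tier = 100    # requests/month on the free plan
paid_tier = 1500   # requests/month on the $9/mo plan

def keywords_supported(quota, days=30, checks_per_day=1):
    """How many keywords fit in a monthly quota at one request per check."""
    return quota // (days * checks_per_day)

print(keywords_supported(free_tier))  # -> 3
print(keywords_supported(paid_tier))  # -> 50
```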
Is it legal to scrape Google search results?
Scraping Google results is against Google's Terms of Service, but it exists in a legal gray area. Using a third-party API like SnapRender to render search pages is a common industry practice. For large-scale SERP tracking, consider dedicated SERP APIs for compliance.
Why not just use Google Search Console?
Search Console shows your own site's average position but has limitations: 2-3 day data delay, averages that hide volatility, limited to 1,000 queries, and no competitor tracking. A custom tracker gives you real-time data for any keyword and any competitor.
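Competitor tracking falls out of the schema for free, because each row already stores a `target_domain`: run `check_ranking` once per domain and query them side by side. A minimal sketch against an in-memory database with made-up rows (`competitor.example` is a placeholder domain):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
CREATE TABLE rankings (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    keyword TEXT NOT NULL,
    target_domain TEXT NOT NULL,
    position INTEGER,
    result_title TEXT,
    result_url TEXT,
    checked_at TEXT NOT NULL
)
""")
# Two domains checked for the same keyword on the same morning
db.executemany(
    "INSERT INTO rankings VALUES (NULL,?,?,?,?,?,?)",
    [
        ("screenshot api", "snaprender.dev", 4, None, None, "2024-06-01 06:00:00"),
        ("screenshot api", "competitor.example", 2, None, None, "2024-06-01 06:00:00"),
    ],
)
# Side-by-side positions for one keyword
rows = db.execute(
    "SELECT target_domain, position FROM rankings "
    "WHERE keyword = ? ORDER BY position",
    ("screenshot api",),
).fetchall()
for domain, pos in rows:
    print(domain, pos)
```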
Can I track rankings for specific locations?
Yes. Append location parameters to the Google search URL (e.g., &gl=us&uule=...) to simulate searches from specific countries or cities. This is essential for local SEO tracking.
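A sketch of building a country-targeted search URL with the standard library. The `gl` (country) and `hl` (language) query parameters are standard Google search parameters; the `uule` value mentioned above is deliberately omitted here because it must be a specially encoded location string:

```python
from urllib.parse import urlencode

def google_search_url(keyword, country="us", lang="en", num=20):
    """Build a Google SERP URL targeted at a country via the gl parameter."""
    params = {"q": keyword, "num": num, "gl": country, "hl": lang}
    return "https://www.google.com/search?" + urlencode(params)

print(google_search_url("best web scraping api", country="de", lang="de"))
```

Swap the hardcoded URL in `check_ranking` for a helper like this and the tracker can record one row per keyword per locale.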