I spend about $21,000 a day in Google Ads. Mistakes at that scale are expensive.
I manage it with five scripts that run automatically and one dashboard that I check every morning. The scripts handle the surveillance. I handle the decisions.
Here's exactly what's running and why.
Script 1: Anomaly detector (runs hourly)
The most important script. Pulls cost, clicks, impressions, conversions, and average CPC for every campaign over the past 24 hours. Compares to the 7-day rolling average for each metric. Flags any campaign where any metric has deviated more than 25%.
The alert format:
```
[ALERT] SETON|SRC|EXM|US|SAFETY|SIGNS
Metric: Cost
Today: $847 (14:00)
7-day avg (same time): $312
Deviation: +171%
```
When I see this alert, I check the search term report immediately. Usually it's a new close variant match that's burning budget. Occasionally it's a competitor who entered the auction and drove up CPCs. In rare cases it's a tracking issue inflating click counts.
Without the script, I might not catch this until the daily budget is exhausted. With it, I usually catch it within two hours.
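The comparison logic reduces to a few lines of JavaScript. The 25% threshold and the +171% example come from the description above; the AdsApp reporting pull and the email delivery are omitted, and the metric-object shapes are illustrative assumptions:

```javascript
// Threshold from the script description: flag any metric deviating
// more than 25% from its 7-day rolling average.
const DEVIATION_THRESHOLD = 0.25;

// `today` and `sevenDayAvg` are plain objects keyed by metric name,
// e.g. { cost: 847, clicks: 1200 } -- hypothetical shapes, not AdsApp objects.
function detectAnomalies(campaignName, today, sevenDayAvg) {
  const alerts = [];
  for (const metric of Object.keys(today)) {
    const avg = sevenDayAvg[metric];
    if (!avg) continue; // no baseline yet, skip
    const deviation = (today[metric] - avg) / avg;
    if (Math.abs(deviation) > DEVIATION_THRESHOLD) {
      alerts.push({
        campaign: campaignName,
        metric: metric,
        today: today[metric],
        avg: avg,
        deviationPct: Math.round(deviation * 100),
      });
    }
  }
  return alerts;
}
```

Feeding in the numbers from the alert above ($847 against a $312 average) produces one alert with a deviationPct of 171, matching the +171% in the example.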
Script 2: Naming convention validator (runs nightly)
Checks every campaign, ad group, and keyword against the naming taxonomy described in my naming conventions post. Sends a daily email listing any violations:
- Campaign name doesn't match the [BRAND]|[CHANNEL]|[MATCH]|[GEO]|[INTENT]|[PRODUCT] format
- Ad group name doesn't match the [INTENT]|[MODIFIER] format
- Paused campaigns that haven't been reviewed in 90+ days (candidates for archival)
This is essentially hygiene monitoring. Small teams drift. New campaigns get created in a rush without following the format. This script catches it before the pipeline breaks.
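The validation itself is a pattern match. The full taxonomy lives in the naming conventions post; the regexes below are illustrative stand-ins that just check the segment count and uppercase convention:

```javascript
// Assumed shape: six pipe-delimited uppercase segments for campaigns,
// two for ad groups. The real validator checks against the full
// taxonomy from the naming conventions post.
const CAMPAIGN_PATTERN = /^[A-Z0-9]+\|[A-Z0-9]+\|[A-Z0-9]+\|[A-Z0-9]+\|[A-Z0-9]+\|[A-Z0-9]+$/;
const AD_GROUP_PATTERN = /^[A-Z0-9]+\|[A-Z0-9-]+$/;

function findViolations(campaignNames, adGroupNames) {
  return {
    campaigns: campaignNames.filter((n) => !CAMPAIGN_PATTERN.test(n)),
    adGroups: adGroupNames.filter((n) => !AD_GROUP_PATTERN.test(n)),
  };
}
```

A name like SETON|SRC|EXM|US|SAFETY|SIGNS passes; a rushed "Safety Signs Campaign" lands in the violations email.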
Script 3: Cross-brand cannibalization monitor (runs weekly)
The script I described in my cannibalization post. Pulls the auction insights data for each brand in the MCC. Identifies any keyword where two or more of our brands are showing in the same auction.
Weekly email lists every cannibalizing keyword pair along with the bid levels for each brand. I make manual decisions about which brand should own the term or whether to implement negative siloing.
At $650K/month, this script has identified and helped eliminate $40-50K/month in self-competition costs.
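The detection step is a group-and-count over auction insights rows. The row shape here is a hypothetical flattening of the export (the per-brand auction insights pull across the MCC is omitted):

```javascript
// Hypothetical row shape distilled from auction insights exports:
// { keyword, brand, avgCpc }. The real script pulls these rows
// per brand from each account in the MCC.
function findCannibalizedKeywords(rows) {
  const byKeyword = new Map();
  for (const row of rows) {
    if (!byKeyword.has(row.keyword)) byKeyword.set(row.keyword, []);
    byKeyword.get(row.keyword).push(row);
  }
  const cannibalized = [];
  for (const [keyword, entries] of byKeyword) {
    const brands = new Set(entries.map((e) => e.brand));
    if (brands.size >= 2) {
      // Two or more of our brands showing in the same auction.
      cannibalized.push({ keyword: keyword, entries: entries });
    }
  }
  return cannibalized;
}
```

The weekly email is then just this list rendered with each brand's bid level, leaving the own-the-term or negative-siloing call to a human.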
Script 4: Quality Score debt identifier (runs monthly)
Lists all keywords where Quality Score is below 5 AND revenue contribution is above a threshold I set (currently $5,000/month in pipeline attribution).
These are the keywords I call "high-value, low-score." They're generating real revenue despite a QS penalty. Two options for each: accept the CPC penalty (because the conversion economics still work) or investigate whether a landing page improvement could raise QS without sacrificing conversion rate.
This isn't a "fix the Quality Score" script. It's a "know where you're paying a premium and decide if it's worth it" script.
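The filter is two conditions and a sort. Thresholds come from the description above; the keyword-object shape is an assumption standing in for the keyword report joined with pipeline attribution:

```javascript
// Thresholds from the script description: QS below 5 and
// pipeline attribution above $5,000/month.
const QS_FLOOR = 5;
const REVENUE_THRESHOLD = 5000;

// `keywords` is an array of hypothetical { text, qualityScore, monthlyPipeline }
// objects; the real script reads these from the keyword report plus
// the pipeline attribution join.
function highValueLowScore(keywords) {
  return keywords
    .filter((k) => k.qualityScore < QS_FLOOR && k.monthlyPipeline > REVENUE_THRESHOLD)
    .sort((a, b) => b.monthlyPipeline - a.monthlyPipeline); // biggest premium first
}
```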
Script 5: Budget pacing monitor (runs daily)
Covered in detail in the pacing post. Compares actual vs expected spend for each campaign, generates a pacing score, alerts on deviations, and connects spend pace to pipeline velocity.
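The core of the pacing score is simple; assuming linear pacing against a monthly budget (the pipeline-velocity piece from the pacing post is omitted here), it looks like this:

```javascript
// Minimal pacing score: actual month-to-date spend divided by the
// spend expected at this point in the month, assuming linear pacing.
// 1.0 = on pace, above 1 = overspending, below 1 = underspending.
function pacingScore(actualSpend, monthlyBudget, dayOfMonth, daysInMonth) {
  const expected = monthlyBudget * (dayOfMonth / daysInMonth);
  return actualSpend / expected;
}
```

Halfway through a 30-day month on a $630K budget, $315K spent scores exactly 1.0; $378K spent scores 1.2 and triggers a deviation alert.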
The dashboard
All script outputs feed into a Google Sheet. The Sheet has conditional formatting: green for on-track, yellow for attention needed, red for investigate immediately.
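The conditional formatting boils down to a status function per cell. The ±10% and ±25% bands below are illustrative assumptions, since the post doesn't specify the exact cutoffs:

```javascript
// Map a metric's deviation from target to the dashboard's three colors.
// The 10% / 25% bands are assumed for illustration -- the actual
// cutoffs live in the Sheet's conditional formatting rules.
function cellStatus(actual, target) {
  const deviation = Math.abs(actual - target) / target;
  if (deviation <= 0.10) return 'green';  // on track
  if (deviation <= 0.25) return 'yellow'; // attention needed
  return 'red';                           // investigate immediately
}
```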
Every morning at 7am I get an email. Usually all green. When something is yellow or red, I handle it before anything else in the day.
The alternative is spending three hours each morning manually pulling reports across two MCCs and five brands. The scripts reduce that to 15 minutes of reviewing exceptions.
At $21K/day in spend, 15 minutes of focused daily attention is the minimum viable operating model. The scripts make it possible.
Why JavaScript and not Python
Google Ads Scripts run natively in the Google Ads interface as JavaScript. No external hosting. No API authentication setup. No infrastructure to maintain.
For scripts that need to run inside Google Ads (pulling campaign data, making bid changes, reading auction insights), JavaScript in the native script editor is the simplest possible setup.
For scripts that need to run outside Google Ads (joining with BigQuery data, Salesforce queries), I use Python running in Google Cloud Functions.
Keep each script as simple as possible. One job per script. No dependencies.
The script stack is the infrastructure that makes managing at scale possible without a team of 10 people.
Alex Langton
Senior B2B paid media manager · ~$650K/mo industrial spend
12+ years running B2B Google Ads accounts in industrial, manufacturing, and B2B e-commerce. Builds Langton Tools because generic PPC SaaS was never designed for the multi-MCC, complex-pacing, B2B-vocabulary reality of the accounts that actually drive industrial revenue.