{"id":3751,"date":"2025-09-26T10:03:42","date_gmt":"2025-09-26T10:03:42","guid":{"rendered":"https:\/\/www.ampliz.com\/resources\/bot-bloated-funnels-clean-lead-gen-edge\/"},"modified":"2026-03-30T07:57:56","modified_gmt":"2026-03-30T07:57:56","slug":"bot-bloated-funnels-clean-lead-gen-edge","status":"publish","type":"post","link":"https:\/\/www.ampliz.com\/resources\/bot-bloated-funnels-clean-lead-gen-edge\/","title":{"rendered":"Bot\u2011Bloated Funnels: How I Clean Lead Gen at the Edge"},"content":{"rendered":"<p class=\"last-updated\">Last updated on March 30th, 2026<\/p>\n<p>I don\u2019t lose pipeline to bad creative; I lose it to noise. Click farms, headless browsers, and throwaway emails shove themselves into my metrics until the truth gets fuzzy and the sales team stops trusting anything with a UTM tag. That\u2019s fixable, but only if you treat data quality as an attack surface, not an afterthought.<\/p>\n\n\n\n<p>So I run acquisition like a defensive sport. I isolate where the garbage sneaks in, I make it harder for bots to behave like people, and I front\u2011load decisions so junk never crosses into the CRM. The payoff isn\u2019t just fewer form fills\u2014it\u2019s a smaller, truer MQL pool and faster starts for sales because fraud never takes a seat in the queue.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Baseline: Make the Mess Measurable<\/strong><\/h2>\n\n\n\n<p>Before I add controls, I want proof that they\u2019ll matter. I start by quantifying drift between \u201cad platform success\u201d and \u201csite reality\u201d\u2014which campaigns have high click\u2011through but suspiciously low dwell or interaction depth, which referrers spike at odd hours, and which pages show sky\u2011high form views but oddly uniform field patterns. I\u2019m not hunting one silver bullet; I\u2019m building a pattern library for what fake looks like in my funnel. Global estimates put the problem in the tens of billions\u2014see<a href=\"https:\/\/www.forbes.com\/councils\/forbesagencycouncil\/2024\/07\/09\/how-blockchain-is-revolutionizing-trust-in-digital-advertising\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> <strong>ad fraud costs hit $84B<\/strong><\/a> for context.<\/p>\n\n\n\n<p>I also move critical signals server\u2011side (event receipts, basic page pings) so browser tampering doesn\u2019t erase them. A few days of passive observation gives me the shape of the problem: IP ranges that over\u2011contribute, user agents that never move the mouse, and session streaks that hit the form in three seconds flat. With that in hand, fixes become math, not vibes.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What \u201cgood\u201d looks like<\/strong><\/h3>\n\n\n\n<p>I keep a small north\u2011star set: rising human interaction rates (scroll, focus, paste avoidance), fewer bounces from first\u2011time segments, and conversion curves that remember the laws of friction. If those are trending in the right direction while raw click counts drop, I\u2019m doing it right.<\/p>\n\n\n\n<p>I also pin lightweight event logs to a durable store so I can compare apples to apples week over week: a single table with timestamp, campaign tag, referrer family, IP ASN, session duration buckets, and whether a human interaction occurred before the form loaded. 
<h3 class="wp-block-heading"><strong>Instrument once, analyze forever</strong></h3>

<p>I don’t chase perfect telemetry; I chase consistent telemetry. A few high-signal fields—interaction depth, time to first input, copy/paste frequency—beat a sprawling dashboard where no one trusts the axes. When growth asks “why did form fills drop 12%?” I can show that human interactions rose 18% in the same window. That’s the story that sticks.</p>

<p>A legitimate research cohort lingers, compares, and comes back. Cheap placements sprint to the form, fat-finger the email, and vanish. I align channels to <a href="https://www.ampliz.com/resources/lead-generation-for-b2b-saas-companies/?utm_source=chatgpt.com"><strong>B2B SaaS lead generation strategies</strong></a> that historically correlate with real replies, not scripted fills.</p>

<p>Instead of flipping the off switch wholesale, I nudge budgets away from patterns that correlate with bots: domains with inorganic time-on-page, placements where every “visitor” shares an ASN, and publishers that deliver at 3 a.m. local time for weeks. I also set pre-bid rules and blocklists where available, then confirm with post-click checks on my side to avoid trusting the fox to guard the henhouse.</p>

<h3 class="wp-block-heading"><strong>Ad ops meets revops</strong></h3>

<p>Media tweaks live next to CRM truth. I mirror campaign tags into the CRM so sales can see which sources send people who actually reply, not just people who fill. That feedback loop makes pruning feel like growth, not loss.</p>

<p>For ad platforms that allow it, I prefer allowlists over endless blocklists. Start from the partners and placements that have historically produced real replies, then expand carefully. If the tech supports pre-bid brand-safety or viewability filters, I switch them on and verify with my own post-click data so the incentives stay aligned. On affiliates, I insist on transparent referrers and cut deals that pay on qualified goals, not raw clicks.</p>

<h3 class="wp-block-heading"><strong>When to cut versus coach</strong></h3>

<p>If a source is strategic, I share my post-click patterns with the rep and ask for inventory changes. If they can’t deliver cleaner cohorts in a week, budgets shift. I don’t wait months to protect my CRM.</p>

<p>On the form itself, micro-behaviors do the sorting: did the visitor pause before pasting a phone number, or did every field fill in one burst? I also bias the offer toward <a href="https://www.ampliz.com/resources/14-fast-effective-lead-magnets-you-could-create-this-week/">lead magnets that attract qualified leads</a> so automation has less incentive to hammer the form.</p>

<p>I also keep “polite friction” ready for risky cohorts: light bot challenges only when IP reputation stinks, a copy-paste guard on the email field for suspect segments, and a post-submit verification email that must be clicked before enrichment triggers. It’s all reversible, measured, and tuned against abandonment lines I set upfront.</p>

<h3 class="wp-block-heading"><strong>Don’t wreck UX</strong></h3>

<p>Everything ships behind feature flags and A/Bs. If abandonment jumps for a healthy segment, the guardrail relaxes there first. People first; scripts later.</p>

<p>I also run progressive profiling for genuine prospects: on first visit I ask for just an email and role; after verification I invite the rest. Scripts that shotgun fields hate the extra round-trip; humans rarely mind. A hidden timer records reading time before the form can submit from risky segments—it’s invisible to normal visitors but poisonous to one-second submitters, as the sketch below shows.</p>
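<p>A minimal server-side version of that timer, assuming a session identifier is available; the 4-second floor and the in-memory store are placeholder choices to tune against your own abandonment lines.</p>

<pre class="wp-block-code"><code>// Stamp the form render, then refuse submissions that arrive
// implausibly fast from risky segments only.
const MIN_READ_MS = 4000; // illustrative floor, not a recommendation
const formRenderedAt: { [sessionId: string]: number } = {};

function onFormRendered(sessionId: string): void {
  formRenderedAt[sessionId] = Date.now();
}

function allowSubmit(sessionId: string, isRiskySegment: boolean): boolean {
  if (!isRiskySegment) return true; // healthy cohorts never feel the gate
  const renderedAt = formRenderedAt[sessionId];
  if (renderedAt === undefined) return false; // submit with no render: scripted
  return Date.now() - renderedAt >= MIN_READ_MS; // one-second submitters fail
}</code></pre>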
<h3 class="wp-block-heading"><strong>Be explicit about privacy</strong></h3>

<p>Every gate I add includes a plain-language note on why it exists and how signals are used. Saying the quiet part out loud (“we challenge suspicious traffic to protect our users”) builds trust—especially in regulated markets.</p>

<h2 class="wp-block-heading"><strong>Edge Scoring: Decide Early, Decide Fast</strong></h2>

<p>I like decisions near the edge where latency is cheap and blast radius is small. A thin layer in front of the app computes a risk score per session using simple signals: IP reputation, ASN and geography, user-agent sanity, request cadence, and tiny interaction breadcrumbs captured early. High risk routes to a safer path; low risk flows untouched.</p>

<p>I mention the tools on purpose: rate limits, IP reputation checks, lightweight bot challenges, and a web application firewall as the last stop before the app. Industry guidance notes that WAFs can reduce risk temporarily while you patch—they’re not a silver bullet. For readers who want quick context, <a href="https://www.imperva.com/learn/application-security/what-is-web-application-firewall-waf/" target="_blank" rel="noopener">an approachable explainer on WAF behavior</a> shows where this control fits. Each control does one job and gets out of the way. The goal isn’t to be clever; it’s to be consistent and fast so humans never notice the machinery.</p>

<h3 class="wp-block-heading"><strong>A simple risk formula</strong></h3>

<p>I keep it legible to marketers: Risk = (Bad Network Score + Automation Hints + Form Anomalies) − (Return Visitor + Prior Engagement). I tune thresholds until the sales team reports cleaner first calls without blaming “marketing quality.”</p>

<p>Signals don’t need to be creepy to be useful. I avoid fingerprinting and stick to basics: is this IP new to us in the last 7 days, does the user agent match the rendering behavior I observe, and how much entropy sits in keystrokes and focus changes? I store the score for a few hours so subsequent requests inherit the decision without extra latency.</p>
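<p>In code, the formula stays just as legible. A sketch with made-up weights; every range and threshold here is an assumption to tune, not a recommendation:</p>

<pre class="wp-block-code"><code>// Direct transcription of the formula above. Signal ranges are illustrative.
interface SessionSignals {
  badNetworkScore: number;  // IP reputation plus ASN oddity, 0-3
  automationHints: number;  // UA/rendering mismatch, no focus events, 0-3
  formAnomalies: number;    // uniform field timing, instant fills, 0-3
  returnVisitor: number;    // seen this visitor in the last 7 days? 0-2
  priorEngagement: number;  // verified email, past replies, 0-2
}

function riskScore(s: SessionSignals): number {
  return (
    s.badNetworkScore + s.automationHints + s.formAnomalies
    - (s.returnVisitor + s.priorEngagement)
  );
}

// High risk routes to a safer path; everyone else flows untouched.
function routeForRisk(score: number): string {
  if (score >= 5) return "challenge"; // two independent signals usually agree here
  if (score >= 3) return "degrade";   // delay, lighter page, one extra human signal
  return "pass";
}</code></pre>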
<h3 class="wp-block-heading"><strong>Fail small, learn fast</strong></h3>

<p>If a cohort gets misclassified, I only degrade the path—never a hard block—until I have evidence. That keeps experiments reversible and politics light.</p>

<h2 class="wp-block-heading"><strong>Rate Limits Without Rage Quits</strong></h2>

<p>The easiest way to break trust is to throttle real people. I cap spikes per IP and ASN, but I also carve out exceptions for known good edges—cloud proxies, accessibility tools, even corporate egress points from big accounts. Rate limits should look like courteous bouncers, not padlocks.</p>

<p>When a visitor trips a threshold, I degrade gracefully: delay rather than deny, serve a lighter page, or ask for one extra human signal (like a micro-interaction) rather than dropping the session. On the backend, I log the decision and the alternative path so analytics can see the whole picture, not just the rosy one.</p>

<h3 class="wp-block-heading"><strong>Bot challenges, only where needed</strong></h3>

<p>Challenges are a tool of last resort. I introduce them after two independent risk signals agree, and I measure abandonments to keep myself honest. If challenges don’t bend the fraud curve without denting conversions, they go back in the drawer.</p>

<p>I tune throttles per route: product pages tolerate more rapid refresh than pricing or form endpoints. I also randomize cool-off windows slightly so scripts can’t learn the threshold and surf just below it. Error messaging matters—human visitors get a short, plain explanation and a retry countdown rather than a cryptic 429.</p>
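<p>Here is what per-route budgets with jittered windows might look like; the limits, the 60-second base window, and the in-memory store are all placeholder choices.</p>

<pre class="wp-block-code"><code>// Per-route limiter with slightly randomized cool-offs so scripts
// can't learn the threshold and surf just below it.
const ROUTE_LIMITS: { [route: string]: number } = {
  "/product": 60,   // requests per minute per IP: tolerant
  "/pricing": 20,
  "/api/form": 5,   // form endpoints get the tightest budget
};

const hits: { [key: string]: { count: number; windowEnds: number } } = {};

function checkLimit(route: string, ip: string): { allow: boolean; retryAfterSec?: number } {
  const limit = ROUTE_LIMITS[route] ?? 30;
  const key = route + ":" + ip;
  const now = Date.now();
  let entry = hits[key];
  if (entry === undefined || now > entry.windowEnds) {
    // Jitter the window so the threshold never sits at a learnable edge.
    const jitterMs = Math.floor(Math.random() * 15000);
    entry = { count: 0, windowEnds: now + 60000 + jitterMs };
    hits[key] = entry;
  }
  entry.count += 1;
  if (entry.count > limit) {
    // Degrade gracefully: tell the human when to retry, not a bare 429.
    return { allow: false, retryAfterSec: Math.ceil((entry.windowEnds - now) / 1000) };
  }
  return { allow: true };
}</code></pre>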
<h3 class="wp-block-heading"><strong>Backpressure you can explain</strong></h3>

<p>When traffic surges legitimately (launch day, press hit), I switch rate limits to “softer” modes that slow new sessions evenly instead of punishing the unlucky few. Everyone waits a touch; no one gets stonewalled.</p>

<h2 class="wp-block-heading"><strong>CRM Hygiene: Don’t Let Junk Land</strong></h2>

<p>Every net-new record runs through validation checks before it can touch the pipeline. If a record fails two checks in 24 hours, it never enters the active pipeline. When enrichment does run, I follow <a href="https://www.ampliz.com/resources/data-enrichment-best-practices/"><strong>marketing data enrichment best practices</strong></a> to prevent garbage attributes from slipping into otherwise good records.</p>

<p><strong>I treat deliverability as part of fraud prevention.</strong> A smaller, cleaner send list warms IPs faster, earns inbox placement, and starts with habits that <a href="https://www.ampliz.com/resources/how-to-make-sure-that-your-email-list-is-clean-and-verified/"><strong>keep your email list clean and verified</strong></a> so bounces and spam traps never snowball. Meanwhile, I store evidence for “why we blocked this lead” so no one has to guess later.</p>

<h3 class="wp-block-heading"><strong>The “break glass” kit</strong></h3>

<p>If sales needs to pursue a risky lead (big logo, direct request), I provide a one-click override that tags the record as “exception,” routes it to a sandboxed sequence, and requires manual confirmation before the contact joins any bulk campaigns.</p>

<p>Edge decisions don’t replace human judgment; they prioritize it. Records that enter quarantine route to a shared review queue with quick actions: verify domain, request clarification, or dismiss. I log which reasons we use most so I can tighten the upstream rule that would have caught it earlier.</p>

<h3 class="wp-block-heading"><strong>Deliverability is the canary</strong></h3>

<p>I watch soft bounces, complaint rates, and spam-trap hits as signals of upstream health. If any of those wobble, I pause certain sequences and audit the last week of net-new sources before the reputation damage compounds.</p>
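<p>The gating logic is small enough to sketch end to end; the two-strikes rule and the 24-hour window come straight from above, while the record shape and check names are hypothetical simplifications.</p>

<pre class="wp-block-code"><code>// Two failed checks inside 24 hours keeps a record out of the active
// pipeline, and enrichment waits for the post-submit verification click.
interface LeadRecord {
  email: string;
  failedChecks: { check: string; at: number }[]; // e.g. "mx-lookup", "disposable-domain"
  emailVerified: boolean; // set by the verification click
}

const DAY_MS = 24 * 60 * 60 * 1000;

function pipelineDecision(lead: LeadRecord): string {
  const now = Date.now();
  const recentFails = lead.failedChecks.filter(f => f.at > now - DAY_MS);
  if (recentFails.length >= 2) return "quarantine"; // route to the review queue
  if (!lead.emailVerified) return "hold";           // no enrichment spend yet
  return "active";
}</code></pre>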
<h2 class="wp-block-heading"><strong>Dashboards That Sales Actually Trust</strong></h2>

<p>A handful of numbers, listed below, make it easy to defend guardrails when budgets shift or new channels arrive. For regulated verticals, I annotate metrics with the basics of <a href="https://www.ampliz.com/resources/compliance-in-healthcare-marketing/"><strong>compliance in healthcare marketing</strong></a> so no one confuses clean growth with risky shortcuts.</p>

<h3 class="wp-block-heading"><strong>One list I actually keep</strong></h3>

<ul class="wp-block-list">
<li>Reply rate by intent tag (retargeting, competitor, research)</li>
<li>Verified email rate and enrichment spend avoided</li>
<li>Meeting rate by traffic source and by hour of day</li>
<li>Bot-challenge trigger rate and induced abandonment</li>
</ul>

<p>Dashboards serve conversations. I schedule a weekly 15-minute review with SDR and AM leads to annotate anomalies: did a webinar invitation skew intent tags, did a new partner flood us with top-of-funnel curiosity? Those notes travel with the charts so future me remembers the context behind the curve.</p>

<p>I also publish a tiny glossary on the dashboard itself—what counts as a verified email, what “meeting” means, which sources sit in the “risky but strategic” bucket—so no one argues definitions while the data ages.</p>

<h2 class="wp-block-heading"><strong>A Two-Week Rollout That Proves Lift</strong></h2>

<p>I time-box the first iteration to 14 days so momentum beats debate. Week one, I run passive scoring and measurement while preparing rules and flags. Week two, I enable the gentlest controls on the riskiest sources first. I preserve a pure control cohort (no controls) to calculate lift honestly.</p>

<p>Success looks like fewer form fills but more replies, shorter time-to-meeting, and lower enrichment spend for the same pipeline value. When the numbers land, I don’t high-five; I widen the net a little, update exceptions for false positives, and keep moving until the funnel’s shape looks like humans again.</p>

<p><strong>Day 1–3:</strong> instrument and observe. No gating yet—just tags, logs, and a clean control group. <strong>Day 4–6:</strong> switch on the lightest controls for the dirtiest sources (delayed form injection, IP reputation nudges), confirm abandonment doesn’t spike for proven cohorts. <strong>Day 7:</strong> freeze changes; document results.</p>

<p><strong>Day 8–10:</strong> expand to medium-risk sources and add graceful rate limits on form endpoints. <strong>Day 11–12:</strong> solicit qualitative feedback from SDRs: are first replies clearer, are “wrong person” responses down? <strong>Day 13–14:</strong> compile lift: reply rate, meetings per 100 MQLs, enrichment dollars saved. Present the plan to keep rolling with the same calm cadence.</p>

<h3 class="wp-block-heading"><strong>Governance keeps it boring</strong></h3>

<p>I log each control in a simple register—what it does, who owns it, and the rollback plan. Legal and IT don’t need a novel; they need to know nothing silently escalates.</p>
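<p>For the day 13–14 readout, the lift math is deliberately plain; this sketch just formalizes the three deltas named above, with a hypothetical cohort shape.</p>

<pre class="wp-block-code"><code>// Compare treatment against the pure control cohort on outcomes, not raw fills.
interface CohortStats {
  mqls: number;
  replies: number;
  meetings: number;
  enrichmentSpend: number; // dollars
}

function lift(treatment: CohortStats, control: CohortStats) {
  const meetingsPer100 = (s: CohortStats) => (s.meetings / s.mqls) * 100;
  return {
    replyRateDelta:
      treatment.replies / treatment.mqls - control.replies / control.mqls,
    meetingsPer100MqlDelta: meetingsPer100(treatment) - meetingsPer100(control),
    enrichmentSaved: control.enrichmentSpend - treatment.enrichmentSpend,
  };
}</code></pre>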
<h2 class="wp-block-heading"><strong>Closing the Loop</strong></h2>

<p>Fraud prevention in acquisition is a practice, not a purchase. Every channel you add will attract a different flavor of noise. If you keep the decisions close to the edge, keep the rules legible to the business, and log the “why” behind each block, the system stays understandable and fast.</p>

<p>The long-term win is cultural, not technical: marketing and sales judge success by human outcomes—meetings and revenue—rather than raw lead counts. When the junk never enters the room, the good work starts earlier and feels lighter.</p>