This interview is with Brandon Kidd, VP Operations, DeltaV Digital.
For readers meeting you on Featured, how do you describe your role as VP Operations at DeltaV Digital and the types of client problems you solve across SEO, paid media, CRO, and content?
My title is VP of Operations, but I describe what I do more simply: I help businesses figure out why their marketing isn’t working and then fix it. That usually starts with a diagnostic conversation rather than a sales pitch. What are you spending? What is it returning? Where are leads dying in the funnel?
At DeltaV Digital, we work across SEO, paid media, CRO, and content. The reason we’re built that way is intentional: those four things are not separate channels—they are one system. A client can have a well-optimized paid search campaign and still lose the lead because their landing page converts at 1%. Another client might have strong organic traffic but no content strategy that moves someone from awareness to a conversation. The problem is rarely just one thing.
Most of the clients who come to us have outgrown their current agency, taken a hit from an algorithm update or a site migration that went sideways, or been running marketing activity for years without any clear picture of what it’s actually producing. My job is to bring clarity to that, build a plan that connects the channels, and then hold the team accountable to outcomes that actually show up in revenue, not just on a dashboard.
I’ve spent 12 years doing this kind of work, including founding my own agency before it was acquired by DeltaV. That background shapes how I think. I approach every client like an operator, not a vendor.
How did your journey from web development and SEM to VP Operations shape the way you build and scale marketing programs today?
I didn’t start as a strategist. I started as someone who had to figure things out because the budget wasn’t there to hire specialists for everything. Early on, I was building sites, running paid campaigns, troubleshooting why traffic wasn’t converting, and writing copy all at the same time. That kind of forced generalism turns out to be one of the best educations you can get in this industry, because you stop thinking in channels and start thinking in systems.
When you’ve personally set up a paid search campaign and also built the landing page it sends traffic to, you understand in a very concrete way why those two things have to be designed together. When you’ve done technical SEO work and also managed content calendars, you stop treating them as separate line items. That connective thinking is what I try to bring into every program we build at DeltaV.
The transition into operations and eventually VP was less about moving away from the hands-on work and more about scaling the thinking. How do you take what works for one client and build the processes, the team structure, and the accountability systems so it works reliably across many clients? That’s the ops problem. It’s not glamorous, but it’s where most agencies fall apart. Good strategy without good execution infrastructure just produces inconsistent results.
Founding Folsom Creative and then going through an acquisition also changed how I think about growth. When it’s your business, every dollar of inefficiency is personal. That experience made me allergic to activity that can’t be connected back to revenue. I still carry that into how I run programs today. We’re not trying to win awards or hit vanity metrics. We’re trying to move the numbers that matter to the business.
That’s the through line from where I started to where I am now. The tools have changed. The core problem has not.
When you onboard a new SEO client with years of legacy content, what does your first 72-hour audit look like to identify high-intent opportunities and quick wins?
The first 72 hours are about orientation, not recommendations. I want to understand what we’re actually working with before anyone on my team proposes a single change.
The first thing we do is get eyes on the technical foundation. Crawl the site, look at indexation, check for crawl errors, redirect chains, canonicalization issues, and site speed. Legacy content sites almost always have accumulated technical debt that’s quietly suppressing performance across the board. You can find a hundred content opportunities, but if the site has 400 redirect chains and half the pages are canonicalized incorrectly, the content work won’t move the needle the way it should.
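For illustration, a minimal sketch of the kind of redirect-chain and canonical check that first crawl surfaces. This is not production tooling; the `urls.txt` input and the flagging rules are placeholder assumptions:

```python
# Illustrative sketch of the redirect-chain and canonical checks described above.
# Assumes a crawler-exported URL list in urls.txt; names and thresholds are placeholders.
import requests
from bs4 import BeautifulSoup

def audit_url(url: str) -> dict:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    canonical_tag = soup.find("link", rel="canonical")
    canonical = canonical_tag["href"] if canonical_tag else None
    return {
        "url": url,
        "final_url": resp.url,
        "redirect_hops": len(resp.history),  # more than 1 means a redirect chain
        "canonical": canonical,
        "canonical_mismatch": canonical is not None and canonical != resp.url,
    }

with open("urls.txt") as f:
    for line in f:
        report = audit_url(line.strip())
        if report["redirect_hops"] > 1 or report["canonical_mismatch"]:
            print(report)
```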
In parallel, we pull the organic performance data from Google Search Console (GSC) and whatever analytics platform they’re on. We want to see what’s ranking, what’s ranking on page two or three with real search volume behind it, and what used to rank but has declined. That middle bucket is usually where the fastest wins live. A page sitting at position 11 or 14 that targets a high-intent keyword often needs a focused update, not a rebuild. We can move those relatively quickly.
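A simplified sketch of what that striking-distance pull can look like, assuming an authenticated Search Console API client (`service` here is pre-built with google-api-python-client; the site URL, dates, and thresholds are placeholders):

```python
# Simplified "striking distance" pull from the Search Console API.
# Assumes `service` was built with google-api-python-client and authorized.
request = {
    "startDate": "2024-01-01",
    "endDate": "2024-03-31",
    "dimensions": ["page", "query"],
    "rowLimit": 25000,
}
response = service.searchanalytics().query(
    siteUrl="https://www.example.com/", body=request
).execute()

# Pages ranking on page two with real impression volume behind them
striking_distance = [
    row for row in response.get("rows", [])
    if 11 <= row["position"] <= 20 and row["impressions"] >= 500
]
striking_distance.sort(key=lambda r: r["impressions"], reverse=True)
```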
Then we look at the content itself through the lens of intent. Legacy sites tend to have a lot of informational content that was created for traffic but never built with conversion in mind. We’re identifying where the funnel breaks, where someone doing research could find exactly what they need to take the next step, but the page doesn’t give them a path forward.
By hour 72, I want a clear picture of three things:
- what’s broken technically and needs to be fixed before anything else;
- what’s close to ranking and just needs attention;
- where the content gaps are relative to what the business actually sells.
That’s the foundation everything else gets built on.
Shifting to paid media, how do you decide the initial budget split across brand search, non-brand search, Performance Max, and remarketing for a new B2B client?
Honestly, I start with what I know before I touch a budget allocation.
If there’s existing account history, that tells me most of what I need. I want to see where conversions have actually come from, what the brand search volume looks like relative to the competitive landscape, and whether Performance Max has been running before and what it was doing with the spend. A lot of clients come to us with Performance Max (PMax) campaigns that were eating budget and producing assisted conversions that looked fine in Google’s reporting but couldn’t be connected to real pipeline.
For a brand-new B2B client with no history, my default starting posture is to protect brand first, go conservative on non-brand, skip Performance Max initially, and use remarketing as a supporting layer rather than a primary channel. Brand search is almost always the highest-intent traffic you can buy, and the CPCs are usually low relative to what you get. There’s no reason to let a competitor steal that traffic while you’re figuring out the rest.
Non-brand search is where I want to learn before I scale. I’d rather run a tighter campaign with strong negative-keyword discipline for the first 60 days than spend big and let broad-match traffic muddy the data. B2B buying cycles are long, and the cost of a bad lead isn’t just the click—it’s the sales time spent chasing it.
Performance Max earns its way into the mix once we have conversion data. Without it, the algorithm is guessing; in B2B, where the conversion events are form fills and phone calls rather than purchases, it needs guardrails and clean signals to do anything useful.
I run remarketing from day one but at modest spend. Staying visible to site visitors during a long consideration cycle is worth it. The budget just doesn’t need to be heavy until the top of funnel is producing real volume.
The honest answer is that there is no universal split. The split follows the data and the sales cycle, not a template.
Before any redesign, what are the first three low-lift CRO experiments you run to lift conversion rate on a lead‑gen site?
The same three things come up more often than anything else, and they’re all testable without touching the design.
The first is the headline on the primary landing page. Most lead-gen sites have a headline that describes what the company does rather than what the visitor gets. There’s a difference between “Integrated Digital Marketing for Growing Businesses” and “More Qualified Leads Without Increasing Your Ad Spend.” One is about us; one is about them. Swapping the value framing in a headline is a single line of copy, and it consistently moves conversion rate more than almost any design change I’ve seen. We test this first because the lift potential is high and the implementation cost is basically zero.
The second is form friction. The default assumption is that shorter forms convert better, and that’s true often enough that it’s worth testing, but the more important question is whether the form fields match what the visitor is ready to give at that stage of their decision. Asking for company size, annual revenue, and project timeline on a first-touch form for someone who just wants to talk to a human is asking too much, too soon. We’ll strip the form down to the minimum needed to have a useful first conversation and see what happens to both volume and lead quality.
The third is the call to action itself, specifically what it promises. “Submit” and “Get Started” do almost no work. They’re asking someone to take an action without telling them what happens next. “Talk to a Strategist” or “Get Your Free Audit” tells the visitor exactly what they’re getting. That specificity reduces the perceived risk of clicking and almost always improves conversion rate on its own.
None of those require a redesign. They require a clear hypothesis, a clean test, and enough traffic to reach statistical significance before you call it.
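For illustration, a minimal version of that significance check for a headline test, using a two-proportion z-test; the counts below are made-up numbers, not client data:

```python
# Minimal significance check for an A/B headline test.
# Conversion counts and visitor totals are illustrative placeholders.
from statsmodels.stats.proportion import proportions_ztest

conversions = [48, 67]      # control, variant
visitors = [2400, 2380]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# Only call the test if p < 0.05 AND it has run long enough to cover
# at least one full business cycle (weekday/weekend effects).
```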
You’ve advocated building role‑by‑stage content; how do you operationalize that matrix across SEO, email, and social so it ships on time without ballooning the content budget?
The matrix is useful only if it’s also a production system, not just a planning document.
The first piece is building the matrix once and treating it as a brief factory. Every cell in that matrix—role, stage, channel—becomes a brief template. The brief defines the intent, the audience, the conversion goal, and the key message before a writer or designer touches it. That upfront investment in structure is what keeps the content budget from ballooning because you stop producing content that doesn’t have a job.
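As a hypothetical sketch, the brief factory can be as simple as generating one structured brief per matrix cell; the field names and personas below are illustrative assumptions, not an actual template:

```python
# Hypothetical "brief factory": one structured brief per matrix cell,
# filled in by a strategist before any production work starts.
from dataclasses import dataclass
from itertools import product

@dataclass
class ContentBrief:
    role: str               # buyer persona, e.g. "CFO"
    stage: str              # funnel stage, e.g. "decision"
    channel: str            # "seo" | "email" | "social"
    intent: str = ""        # what the reader is trying to do
    conversion_goal: str = ""
    key_message: str = ""

roles = ["CFO", "Marketing Director"]
stages = ["awareness", "consideration", "decision"]
channels = ["seo", "email", "social"]

briefs = [ContentBrief(r, s, c) for r, s, c in product(roles, stages, channels)]
print(f"{len(briefs)} brief templates generated")  # 2 x 3 x 3 = 18
```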
The second piece is channel sequencing. We don’t try to produce everything at once. We pick the highest-value role and stage combination first, usually the decision stage for the primary buyer persona, and we build the core asset. That might be a long-form SEO piece or a case study. Then we slice it. The email version is a shorter narrative pull from the same core content. The social version is a single insight or data point from that piece with a link back. One production effort, three channel executions. The budget math changes completely when you’re not commissioning original content for every channel separately.
The third piece is a shared editorial calendar that maps to the matrix explicitly — not just dates and titles, but role, stage, and channel clearly labeled on every row. That visibility is what keeps SEO, email, and social from drifting into their own silos, where everyone is producing content that doesn’t connect to the same buyer journey.
Where most teams fall apart is governance. The matrix gets built, the calendar gets populated, and then the first tight deadline hits and someone publishes something that wasn’t in the plan. We keep it tight by making the matrix the filter for every content request. If it doesn’t map to a cell, it doesn’t ship until we understand why we’re doing it.
Discipline in the system is what makes the budget work.
From an operations lens, what does your pre‑launch QA and measurement framework include to keep multi‑channel data clean and attributable?
Clean data is a pre-launch requirement, not a post-launch cleanup project. By the time a campaign is live, the window to fix attribution problems without losing data has already closed.
The first thing we lock down is tagging infrastructure. Every URL that touches a paid channel gets UTM parameters, built using a consistent naming convention, before anything goes live. This sounds basic, but it frequently breaks in multi-channel launches because different people on different teams are building assets in parallel and no one owns the naming standard. We solve that by making one person responsible for the UTM taxonomy and requiring that every asset goes through a tag audit before it gets scheduled or trafficked.
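A minimal sketch of what enforcing that taxonomy in code can look like; the allowed values and casing rules here are placeholder assumptions, not an actual naming standard:

```python
# Illustrative UTM builder that enforces a single naming convention.
# Allowed values and formatting rules are hypothetical placeholders.
from urllib.parse import urlencode

ALLOWED_SOURCES = {"google", "linkedin", "newsletter"}
ALLOWED_MEDIUMS = {"cpc", "email", "social"}

def tag_url(base_url: str, source: str, medium: str, campaign: str) -> str:
    if source not in ALLOWED_SOURCES:
        raise ValueError(f"unknown utm_source: {source}")
    if medium not in ALLOWED_MEDIUMS:
        raise ValueError(f"unknown utm_medium: {medium}")
    params = {
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign.lower().replace(" ", "-"),  # one casing rule
    }
    return f"{base_url}?{urlencode(params)}"

print(tag_url("https://example.com/demo", "google", "cpc", "Q3 Brand Launch"))
# https://example.com/demo?utm_source=google&utm_medium=cpc&utm_campaign=q3-brand-launch
```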
In parallel, we audit the analytics implementation itself.
- Is GA4 firing correctly on every conversion event?
- Are the conversion actions in Google Ads pulling from the right source?
- Is there duplicate tracking anywhere?
We use a combination of Google Tag Assistant and manual QA in an incognito browser, walking through every conversion path. Form submissions, phone calls, chat initiations—whatever the defined conversion events are for that client. If it doesn’t fire correctly in QA, it doesn’t launch.
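Parts of that walkthrough can be scripted. A rough sketch using Selenium, assuming a GTM dataLayer and a hypothetical `form_submission` event name; selectors and event names vary by client:

```python
# Sketch of an automated pass over the manual QA walkthrough: load the page,
# submit the form, and confirm the expected event landed in the GTM dataLayer.
# Selectors, URL, and the event name are placeholders for a real client setup.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/contact")

driver.find_element(By.NAME, "email").send_keys("qa-test@example.com")
driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
time.sleep(3)  # crude wait for tags to fire; a real script would poll

data_layer = driver.execute_script("return window.dataLayer || [];")
events = [e.get("event") for e in data_layer if isinstance(e, dict)]
assert "form_submission" in events, f"expected event not found: {events}"
driver.quit()
```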
The third layer is a baseline measurement document that gets created before launch, not after. What are the current conversion rates, cost per lead, and channel contribution percentages? That document is what makes the first 30 days of data meaningful. Without a documented baseline, you’re just looking at numbers with no reference point.
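The baseline snapshot doesn’t need to be fancy. A minimal sketch, with placeholder numbers standing in for metrics pulled manually before launch:

```python
# Minimal pre-launch baseline snapshot; every value here is a placeholder.
import json
from datetime import date

baseline = {
    "captured": date.today().isoformat(),
    "conversion_rate": 0.021,        # site-wide, trailing 90 days
    "cost_per_lead": 184.00,         # blended paid CPL, USD
    "channel_contribution": {        # share of conversions, last-click
        "organic": 0.41, "paid_search": 0.33, "email": 0.14, "other": 0.12,
    },
}
with open("baseline_snapshot.json", "w") as f:
    json.dump(baseline, f, indent=2)
```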
For multi-channel attribution specifically, we set clear expectations up front about what the reporting will and won’t show. Last-click attribution will overcredit search. View-through attribution will overcredit display. We align with the client on a model before launch so nobody is surprised or arguing about the numbers three weeks in.
The framework only works if everyone touches it before launch day, not after.
Can you walk us through one client engagement where your AI‑powered research‑to‑strategy pipeline materially changed the plan or timeline?
The one that sticks out happened with a mid-market client in a crowded professional services category. They came to us frustrated because they had been running paid search and doing SEO work with a previous agency for two years and couldn’t gain meaningful ground on their top three competitors. The assumption going in was that it was a budget problem: they believed the competitors were simply outspending them.
We ran the competitive research differently than we would have two years ago. Instead of just pulling keyword overlap and share-of-voice data, we used an AI-assisted workflow to analyze competitor content at scale, their reviews across multiple platforms, their messaging across paid and organic channels, and the gaps between what they were promising and what their customers were actually complaining about. That kind of qualitative synthesis across a large dataset used to take two to three weeks to do properly. We delivered it in under four days.
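The actual workflow leaned on LLM-assisted synthesis, but a classic clustering pass gives the flavor of the review-analysis step; the toy reviews and cluster count below are illustrative only:

```python
# Simplified stand-in for the review-analysis step: cluster competitor review
# text into recurring complaint themes with TF-IDF + k-means. The real workflow
# was LLM-assisted; the reviews below are toy examples, not actual data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reviews = [
    "Great results but we never knew what they were working on",
    "Constantly had to chase them for status updates",
    "Strong SEO wins, but communication was the weak point",
    "Felt left in the dark about the project timeline",
    "Delivered on time and kept us informed every week",
    "Pricing was fair and the reporting was transparent",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

km = KMeans(n_clusters=2, random_state=42, n_init=10)
labels = km.fit_predict(X)

# Top terms per cluster hint at the recurring theme
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(km.cluster_centers_):
    top = [terms[j] for j in center.argsort()[-4:][::-1]]
    print(f"cluster {i}: {top}")
```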
What it surfaced changed the entire strategy. The competitors weren’t winning on budget. They were winning on one specific trust signal our client wasn’t addressing at all: transparency around process and timeline. Every negative review pattern across the competitive set pointed to the same frustration: clients felt left in the dark. Our client actually had a better process but had never made it a marketing asset.
We scrapped the original plan, which was essentially a keyword expansion and content-volume play, and rebuilt the strategy around that insight. Conversion-focused content, paid messaging, and landing-page copy all centered on that single differentiator.
The timeline shifted by about three weeks because we didn’t just execute the original plan. That was the right call. The research changed what we built, and what we built worked.
Looking ahead, what operating cadence and team structure have helped you scale digital programs reliably across startups and mid‑market clients without sacrificing execution quality?
The honest answer is that structure without rhythm doesn’t hold, and rhythm without structure doesn’t scale. You need both working together.
On the team side, the model that works for us is a pod structure in which a strategist, one or two channel specialists, and an account lead are attached to a client group rather than a channel silo. The alternative—where you have an SEO team, a paid team, and a content team all operating independently—creates handoff problems and accountability gaps. When something underperforms, nobody owns it because everyone can point to another channel. The pod model means one group owns the outcome, not just the activity.
The cadence piece is where most agencies and internal marketing teams quietly fall apart. Weekly check-ins that are just status updates are a waste of everyone’s time. What we run instead is a tiered rhythm: a short weekly sync focused only on what’s blocking execution or needs a decision; a monthly performance review that actually connects channel data to business outcomes and asks whether the current plan still makes sense; and a quarterly planning session that steps back from execution entirely and resets priorities based on what we’ve learned.
That quarterly reset is the one most teams skip because they’re too deep in the work. It’s also the one that matters most for scaling reliably. Without it, you end up optimizing a plan that should have changed three months ago.
For startups specifically, the challenge is that the strategy has to flex faster because the business is changing faster. The cadence stays the same, but the monthly review has to carry more weight because the assumptions you built the plan on can shift significantly in 30 days.
For mid-market clients, the challenge is usually the opposite: too much institutional inertia. The cadence becomes a forcing function to have honest conversations about whether what they’ve always done still makes sense.
The structure and the rhythm together are what let you scale without execution quality eroding. Either one alone doesn’t get you there.