This interview is with Liviu, Fractional CMO at Multiply CMO.
For readers at Featured, how do you describe the impact you create as a Fractional CMO for B2B companies?
The impact I create is closing the gap between strategy and execution that undermines most B2B growth efforts.
Most B2B companies at the growth stage have one of two problems. Either they have smart founders who understand their market but have never built a marketing function before, so strategy exists only in someone’s head and nothing is actually running. Or they have activity without strategy: campaigns, content, and paid spend operating without a coherent framework connecting them to revenue.
I think at the CMO level — market positioning, ICP definition, messaging architecture, channel strategy — and I execute at the operator level. I don’t hand a client a deck and leave. I build the briefs, run the campaigns, hire and direct the contractors, and sit in the revenue conversations where marketing decisions get made. Clients get the plan and the delivery without hiring two people or paying for an agency layer they don’t yet need.
After 33 years across DTC ecommerce, digital agencies, international events, and real estate development, the pattern I keep seeing is the same: the companies that grow aren’t necessarily the ones with the best product. They’re the ones where someone with genuine senior marketing experience is making the decisions and staying accountable for the outcomes. That’s what I do.
What were the key inflection points that shaped your path from early PPC pioneer to running a DTC brand to advising founders as a Fractional CMO?
There were three moments that genuinely changed the trajectory.
The first was 1996, when I opened one of the first internet cafés in Iași, Romania. Most people around me thought the internet was a curiosity. I thought it was infrastructure. That instinct led me to start running affiliate marketing campaigns on goto.com in 1998, spending millions on pay-per-click advertising two years before Google Ads existed. I became Romania’s first Google Ads–certified expert when AdWords launched. That early immersion in performance marketing gave me a quantitative foundation that most brand marketers never develop: I’ve always thought in CAC, LTV, and margin before I think in campaigns.
The second inflection point was building Olely, a DTC cosmetics brand in US markets, between 2012 and 2016. Running a brand end-to-end — product, supply chain, paid acquisition, retention, and the Amazon marketplace — forced me to think like an operator, not just a marketer. You can’t hide behind strategy when you’re responsible for the P&L. Every marketing decision had an immediate financial consequence. That accountability shaped how I advise founders today: I’m comfortable sitting in the numbers, not just the narrative.
The third was co-founding and running AFTERHILLS, one of Romania’s largest international music festivals, across three editions between 2017 and 2019. Managing 40 departments, hundreds of contractors, national marketing campaigns, and budgets in the millions — with a hard deadline and no second chances — taught me something no marketing textbook covers: how to make decisions under genuine operational pressure when the cost of being wrong is measured in hundreds of thousands of customers. COVID ended the business in 2020, but the operational discipline it built is something I bring into every client engagement.
The thread connecting all three: I’ve always been earliest in the room. First internet café; first PPC campaigns; first AdWords certification in Romania; DTC before it was called DTC. That pattern of early adoption is why I’m now building a fractional CMO practice around AI-integrated execution — I’ve seen enough technology waves to know which ones are infrastructure and which ones are noise.
What is your 90-day diagnostic process for identifying and fixing ARR growth constraints in a new B2B SaaS engagement?
The first thing I do is resist the temptation to touch anything for 30 days.
Most fractional CMOs come in and start optimizing immediately—tweaking campaigns, refreshing messaging, restructuring the funnel. That’s usually the wrong move. You end up optimizing a system you don’t yet understand, which produces local improvements and misses the actual constraint. The first 30 days are diagnostic only.
Days 1–30: I map the full revenue picture. Where is ARR actually coming from—which channels, which ICPs, which use cases? Where is it leaking—churn rate, expansion rate, contraction MRR? I interview the sales team about why deals are won and why they’re lost. I interview churned customers directly if I can get access. I audit every marketing asset—not to judge quality, but to understand what signal the market has been sending back through engagement, conversion, and retention data. I’m looking for the gap between who the company thinks it’s selling to and who is actually buying and staying.
Days 31–60: I build the constraint map. Almost every ARR growth problem traces back to one of four places: the wrong ICP being targeted, a positioning that doesn’t survive contact with the buying committee, a funnel with a specific broken stage, or a retention problem being masked by new-logo growth. By day 60 I know which of those is the primary constraint and what’s downstream of it. This is where the strategy gets written, not before.
Days 61–90: I run three to five focused experiments against the primary constraint with clear success metrics and a decision rule agreed in advance—not a full campaign overhaul, but targeted tests designed to validate the diagnosis before scaling the fix. By day 90 the client has a confirmed constraint, a validated approach to addressing it, and a 6–12 month execution roadmap built on evidence rather than assumptions.
The most common finding: the ARR constraint isn’t a marketing problem. It’s a positioning problem that marketing has been trying to compensate for with volume. More leads into a broken value proposition produce more churn, not more revenue. Fixing the positioning first makes everything downstream more efficient.
Can you share a recent example where a mid-quarter budget shift forced tradeoffs, and how your “protect converters, fund learners, cut the rest” rule played out in practice?
The clearest recent application of this framework was building the Multiply CMO launch budget earlier this year.
Launching a new professional services brand from zero means every channel is unproven at the start — there are no converters yet, only learners. The budget decision at launch was about sequencing: which learner should be funded first, and how do I create a converter as quickly as possible so I have something to protect?
The initial allocation went entirely into two channels: content SEO and expert PR outreach through platforms like Featured.com. Both were learners on day one. I applied a decision rule to each: I would know within 60 days whether SEO was generating indexed pages with early ranking signals, and within two weeks whether expert PR was generating placements. The faster feedback loop on PR meant it would either become a converter or get cut first.
Within the first week of active PR outreach, two placements went live: American Marketing Association and PR Thrive. That was the signal. PR outreach became the first converter in the Multiply CMO budget, producing backlinks, domain authority signals, and direct brand visibility with the exact audience that hires fractional CMOs. It was protected and scaled.
The channel I cut was paid social. The temptation at launch is always to run awareness ads — it feels like progress. But paid social for a professional services brand with no case studies, no testimonials, and no content library yet is spending to amplify nothing. It was neither converting nor generating learnable signal — a classic “cut the rest” candidate. That budget went back into content production instead.
The framework works because it forces an honest conversation about what each channel is actually doing at any given moment — not what it did historically, not what it might do eventually, but what it’s doing right now. Most budget decisions are made on hope. This one is made on signal.
How are you applying AI as an operations layer in B2B SaaS go-to-market to tighten the loop between buyer behavior and sales/marketing actions?
The way I apply AI in B2B SaaS go-to-market isn’t as a content generator or a chatbot layer. It’s a signal processor — something that compresses the time between a buyer behaviour signal and a sales or marketing action responding to it.
In most B2B SaaS companies, buyer behaviour data exists in abundance, but insight lag is enormous. A prospect visits the pricing page three times in a week, reads two case studies, and attends a webinar. That signal sits in HubSpot or Salesforce, but by the time a human reviews it, prioritises it, and routes it to a rep, the window for timely outreach has usually closed. The buyer has either moved forward or moved on.
AI tightens that loop in two specific places:
- Intent signal aggregation. AI can monitor behavioural patterns across website, email, and product usage data simultaneously and surface the accounts showing buying signals before any human would have noticed them. The action that follows — a personalised sequence trigger, a rep alert, or a retargeting audience update — happens in near real time rather than during the next weekly pipeline review.
- Message-to-segment matching. B2B SaaS buyers in different roles, stages, and use cases need different conversations. AI can analyse which messages are resonating with which segments based on engagement and conversion data, and adjust the content and sequencing accordingly, without waiting for a quarterly campaign review to surface the pattern.
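The intent-aggregation loop above can be sketched as a toy scoring function. The signal names, weights, and alert threshold here are illustrative assumptions, not values from any real client system:

```python
from dataclasses import dataclass, field

# Illustrative signal weights -- real values would be tuned per business.
SIGNAL_WEIGHTS = {
    "pricing_page_view": 3,
    "case_study_read": 2,
    "webinar_attended": 4,
    "product_usage_spike": 5,
}

ALERT_THRESHOLD = 8  # hypothetical cutoff for routing an account to a rep

@dataclass
class Account:
    name: str
    signals: list = field(default_factory=list)

    def intent_score(self) -> int:
        # Sum the weights of every observed behavioural signal.
        return sum(SIGNAL_WEIGHTS.get(s, 0) for s in self.signals)

def surface_hot_accounts(accounts):
    """Return accounts whose aggregated behaviour crosses the alert threshold."""
    return [a.name for a in accounts if a.intent_score() >= ALERT_THRESHOLD]

accounts = [
    Account("Acme", ["pricing_page_view", "pricing_page_view", "case_study_read"]),
    Account("Globex", ["case_study_read"]),
]
print(surface_hot_accounts(accounts))  # ['Acme']
```

The point of the sketch is the shape of the loop, not the numbers: signals accumulate continuously, and the action fires the moment the threshold is crossed rather than at the next weekly review.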
The operational principle I apply: AI should reduce the time between signal and response, and humans should make the judgment calls about what to do with the response — not the other way around. The moment AI makes strategic decisions without human review, you’ve removed the judgment layer that makes the strategy right for this specific business and this specific market moment.
When this works correctly, the sales team feels like marketing understands what's happening in deals, and marketing feels like sales is following through on the leads being generated. That alignment — the chronic failure mode in most B2B SaaS GTM — is what AI as an operations layer actually produces.
When evaluating market expansion into a new segment or geography, what pre-commitment tests do you run to validate demand and unit economics?
The principle I start with is this: never commit resources to a new segment or geography until you've manufactured a buying signal from it at minimal cost. An opinion about whether a market wants your product is worthless. A prospect from that market who has taken a qualifying action is evidence.
I run three pre-commitment tests in sequence, each designed to fail fast and cheaply before the next one requires more investment.
The message test. Before building anything for the new segment (no localised landing page, no dedicated campaign, no sales collateral), I run a small paid search or LinkedIn test pointing to an existing page with modified messaging that speaks to the new segment's specific problem. The question I'm answering is: does this segment respond to being addressed at all? Click-through rate and time on page give a directional signal within two weeks and a few hundred dollars. If the segment doesn't engage with messaging that speaks directly to their situation, the expansion conversation ends here.
The conversation test. If the message test shows signal, I find five to ten people in the target segment and have direct conversations (founder to founder, operator to operator), not a sales call. I'm not pitching. I'm mapping: what does the problem actually look like from inside their context? How do they currently solve it? What would make them switch? What I'm listening for is whether the language they use to describe their problem matches the language in my positioning, or whether I've been projecting a problem onto a segment that frames it completely differently.
The conversion test. Only after the message and conversation tests show genuine alignment do I run a full acquisition test: a dedicated landing page, a complete campaign, a real offer, a real price. This is the unit economics validation: what does it actually cost to acquire a customer in this segment, and does that CAC produce an acceptable LTV ratio at the price point the market will bear?
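The unit-economics check in the conversion test can be sketched with a few lines of arithmetic. All figures below (spend, price, margin, lifetime) are purely hypothetical:

```python
def unit_economics(spend: float, customers_won: int,
                   monthly_revenue: float, gross_margin: float,
                   expected_lifetime_months: float) -> dict:
    """Summarise a conversion-test cohort: CAC, LTV, and the ratio.
    All inputs are illustrative assumptions, not benchmarks."""
    cac = spend / customers_won
    ltv = monthly_revenue * gross_margin * expected_lifetime_months
    return {"cac": cac, "ltv": ltv, "ltv_cac": ltv / cac}

# Hypothetical test: $12,000 spent, 4 customers won,
# $500/month revenue at 80% gross margin, 24-month expected lifetime.
result = unit_economics(12_000, 4, 500, 0.80, 24)
print(result)  # {'cac': 3000.0, 'ltv': 9600.0, 'ltv_cac': 3.2}
```

The decision question is whether the ratio from the test cohort clears whatever bar the business has set before the segment gets a real budget.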
I've used this sequence when entering US markets with a DTC brand from Romania, evaluating European vs UK market focus for Multiply CMO, and deciding which verticals to prioritise within B2B SaaS. The test that fails most often is the conversation test: the market has the problem, but frames it so differently that the positioning requires a complete rebuild before anything else makes sense. Better to find that out in week three than in month six.
How do you align a founder team on LTV:CAC targets and payback periods before scaling acquisition?
The alignment problem on LTV:CAC almost never starts with the numbers. It starts with the fact that the founding team has three different mental models of who the customer actually is, and until those models converge, any LTV:CAC target is just arithmetic applied to a fiction.
The first conversation I have with a founder team before touching any acquisition metric is: who is your best customer, specifically? Not the ICP document. Not the persona slide. The actual company that renewed, expanded, referred others, and never required a discount. Can everyone in the room name the same two or three accounts and agree on what made them best? If the answer is no (and it usually is), the LTV calculation is based on an average that includes customers you shouldn't have acquired in the first place, which makes the number meaningless as a scaling target.
Once the team agrees on the right customer profile, I work through LTV:CAC in three passes.
The first pass is historical reality. What have we actually spent to acquire customers, and what have those customers actually been worth over their observed lifetime? No projections yet, just what the data shows. This pass usually surfaces a wide distribution: a small cohort of ideal customers with strong LTV and a larger cohort of misfit customers dragging the average down.
The second pass is segmented reality. Strip out the misfit cohort and recalculate LTV:CAC for the ideal customer profile only. This number is almost always significantly better than the blended average, and it's the number that should be driving acquisition decisions, because it represents what the business looks like when it's targeting correctly.
The third pass is forward commitment. Given the segmented LTV:CAC, what payback period is the business financially able to tolerate? This is where the CFO or financial reality enters the conversation. A 12-month payback is fine if you have 18 months of runway. It’s catastrophic if you have 8.
The alignment happens when every founder can see their own mental model of the customer reflected in the segmented data and agrees that the forward commitment is financially honest. Until then, scaling acquisition just means making the misfit cohort problem larger and more expensive.
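The three passes can be illustrated with toy numbers (every figure below is hypothetical) to show why the segmented ratio, not the blended one, should drive the forward commitment:

```python
def ltv_cac(cohort):
    # Ratio of total observed lifetime value to total acquisition cost.
    total_ltv = sum(c["ltv"] for c in cohort)
    total_cac = sum(c["cac"] for c in cohort)
    return total_ltv / total_cac

# Hypothetical observed customers: a small ideal cohort, a larger misfit cohort.
customers = [
    {"ltv": 60_000, "cac": 10_000, "ideal": True},
    {"ltv": 54_000, "cac": 10_000, "ideal": True},
    {"ltv": 4_000,  "cac": 10_000, "ideal": False},
    {"ltv": 2_000,  "cac": 10_000, "ideal": False},
]

blended = ltv_cac(customers)                               # pass 1: historical reality
segmented = ltv_cac([c for c in customers if c["ideal"]])  # pass 2: ideal profile only

# Pass 3: forward commitment -- can the runway tolerate the payback period?
payback_months = 12  # assumed payback for the ideal profile
runway_months = 18
print(blended, segmented, payback_months < runway_months)  # 3.0 5.7 True
```

The blended 3.0 hides a 5.7 business inside a portfolio of customers that should never have been acquired; scaling against the blended number scales the misfit cohort too.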
For a mid-market SaaS with a $20–50k ACV, what paid channel mix do you recommend, including the signals that tell you to reallocate?
At a $20–50k ACV, you’re selling to a buying committee, not an individual — which changes the channel logic significantly compared to SMB SaaS or consumer. The deal involves multiple stakeholders, a longer evaluation cycle, and a decision that requires internal consensus. The paid channel mix has to serve that reality, not fight it.
My starting recommendation for this ACV range:
- 50% LinkedIn
- 35% Google Search
- 15% retargeting across both
LinkedIn earns the majority allocation because it’s the only paid channel where you can reach the specific job titles, company sizes, and industries in your ICP before they’re actively searching. At $20–50k ACV your buyer is a VP or C-suite leader at a company with 50–500 employees. LinkedIn lets you address that person directly with content that builds category awareness and positions you in their consideration set before they open a search tab. This is demand creation, not demand capture, and at this ACV you need both.
Google Search earns the 35% allocation for demand capture — the buyers who are actively researching a solution right now. The keyword strategy here is narrow and intentional: competitor terms, category terms with buyer-intent modifiers (“best,” “alternative,” “pricing,” “for [industry]”), and problem-specific long-tail terms. Broad keyword coverage at this ACV is expensive and attracts the wrong funnel stage. You want the hand-raisers, not the browsers.
Retargeting at 15% is the connective tissue: keeping your brand visible to the people who’ve already shown interest across both channels without yet converting. At a $20–50k ACV with a 30–90 day sales cycle, retargeting is the difference between a lead that goes cold and one that re-engages when the internal timing is right.
The signals that tell me to reallocate:
- If LinkedIn CPL is above 3x Google CPL and pipeline quality from both is equivalent, shift 10–15% from LinkedIn to Search — you’re in a market where intent-based capture is more efficient than awareness-based targeting.
- If Search impression share on core terms drops below 60%, the budget is too thin to compete effectively. Pull from retargeting temporarily to defend the position.
- If retargeting frequency exceeds 8–10 impressions per user per month with flat CTR, the audience is saturated — pause retargeting, refresh creative, or expand the audience definition before resuming spend.
- If MQL-to-SQL conversion rate drops below 20% while volume holds steady, the channel mix isn’t the problem — the ICP or the offer is wrong.
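The reallocation rules above are mechanical enough to express as a checklist function. The metric field names are illustrative, and the thresholds simply mirror the text:

```python
def reallocation_signals(metrics: dict) -> list:
    """Translate the four reallocation rules into explicit checks."""
    actions = []
    # LinkedIn CPL above 3x Google CPL with equivalent pipeline quality.
    if (metrics["linkedin_cpl"] > 3 * metrics["google_cpl"]
            and metrics["pipeline_quality_equivalent"]):
        actions.append("shift 10-15% from LinkedIn to Search")
    # Search impression share too thin to compete on core terms.
    if metrics["search_impression_share"] < 0.60:
        actions.append("pull from retargeting to defend Search position")
    # Retargeting audience saturated: high frequency, flat CTR.
    if metrics["retargeting_frequency"] > 8 and metrics["retargeting_ctr_flat"]:
        actions.append("pause retargeting and refresh creative or audience")
    # Conversion quality problem, not a mix problem.
    if metrics["mql_to_sql_rate"] < 0.20:
        actions.append("revisit ICP or offer -- mix is not the problem")
    return actions

example = {
    "linkedin_cpl": 450, "google_cpl": 120, "pipeline_quality_equivalent": True,
    "search_impression_share": 0.72, "retargeting_frequency": 5,
    "retargeting_ctr_flat": False, "mql_to_sql_rate": 0.18,
}
print(reallocation_signals(example))
```

Encoding the rules this way makes the monthly review a yes/no pass over the dashboard rather than a debate, which is the point of having decision rules agreed in advance.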
What is one repeatable tactic you’ve used to lift demo-to-opportunity or trial-to-paid conversion within 30 days?
The tactic that consistently moves demo-to-opportunity conversion within 30 days is replacing the generic follow-up sequence with a single, specific next-step commitment made during the demo itself.
Most demos end with "I'll send you the deck and some resources," which is a polite way of ending the conversation without advancing it. The prospect leaves with materials they won't read, and the sales rep leaves with a follow-up task that produces no signal about whether the deal is real. The follow-up sequence that comes next is essentially a series of increasingly hopeful nudges into silence.
The change I make: before the demo ends, I get agreement on one specific thing the prospect will do before the next conversation — not a commitment to buy, not a commitment to a next meeting, but a micro-action that requires genuine engagement with the problem.
- Show the product to the person in the organization who owns the budget.
- Run a quick audit of the current process we discussed.
- Pull the data point that would confirm whether the problem is as large as they described.
The micro-action does three things simultaneously:
- It tests whether the prospect is genuinely engaged or just being polite.
- It advances their internal evaluation process in a way that makes the next conversation more substantive.
- It creates a natural, non-pushy reason to follow up — not “just checking in” but “did you get a chance to run that audit?”
The conversion lift comes from the qualification function. Prospects who complete the micro-action are real opportunities. Prospects who don’t (despite agreeing to it on the call) have self-selected out of the pipeline without requiring three weeks of chase emails to confirm they were never serious. Smaller pipeline, higher conversion rate, less wasted sales capacity.
I’ve applied this in my own Multiply CMO business development process and advised clients to implement it at the demo stage specifically because that’s where most B2B SaaS pipelines leak — not at lead generation, but at the moment between expressed interest and genuine evaluation intent.