Interview with Anton Strasburg, Media Manager, FreeConference.com


This interview is with Anton Strasburg, Media Manager, FreeConference.com.

As a Media Creator focused on live voice and video, how do you describe the work you do today for your audiences?

I’d say my work today is really about shaping how audiences understand and engage with live voice and video.

As a content creator in this space, I’m not just hosting sessions. I’m thinking about positioning, messaging, and how to explain the value of live communication in a way that feels real. A lot of people have meeting fatigue, so the story can’t be “more video.” It has to be “better conversations.”

On the marketing side, that means translating everyday behavior into useful insights. For example:

  • Why do people default to audio when bandwidth drops?
  • Why do simple join links outperform complicated onboarding?

Those small patterns say a lot about what audiences actually want.

So my role sits between product and audience. I pay attention to how people communicate, then turn that into clear messaging that feels practical, not hype-driven. If it sounds like something someone would actually say about their own meeting experience, I know we’re close.

What path led you into real-time production and SDK-driven platforms?

I didn’t start out thinking, “I want to work in real-time production.”

It really came from watching how people communicate under pressure. Live environments are honest. If the audio drops or the stream lags, you feel it instantly. There’s no hiding behind edits. That immediacy pulled me in.

Over time, I got more curious about the infrastructure behind those moments. Not just hosting a live session, but understanding what makes it stable, scalable, and flexible. That’s where SDK-driven platforms came in. They let teams shape their own experience instead of being boxed into a rigid interface.

I’ve always been interested in the space between experience and technology. Real-time tools sit right there. One small delay, one complicated join flow, and the audience checks out. But when it works, it feels effortless.

That tension, between complexity under the hood and simplicity for the user, is what kept me in it.

When you scope a new livestream with real-time interaction, what are the first three decisions you lock in?

The interaction model is the first thing I lock in.

Are we taking live questions in chat, bringing people on screen, running polls, or keeping it moderated? If you don’t define that upfront, the session can become messy quickly. I’ve seen streams stall because no one knew who was fielding audience questions.

The second is the audience environment.

Are they mostly on desktop at work, on mobile, using low bandwidth, or multitasking? That shapes everything. If half the audience is listening while commuting, audio has to carry the experience. Video becomes secondary.

The third is the fail plan.

What happens if the guest drops? If the stream lags? If comments stop loading? Real-time production is about redundancy. A backup dial-in, a second host, and prepped talking points are essential. You hope you never need them, but at some point you will.

Lock those three in early, and the rest becomes execution detail.

How do you evaluate and choose a real-time SDK or CPaaS for a project?

First, I look at reliability under real conditions, not just in documentation.

Can it handle unstable networks? What happens when someone switches from Wi-Fi to mobile mid-session? I’ve seen platforms look great in demos, then struggle when 200 real users join from mixed environments. So, I test with messy, real-world scenarios.

Second is flexibility.

Does the SDK let us shape the experience around the audience, or are we forced into a preset layout? For example, can we prioritize audio-only fallback? Can we control how participants enter, muted or live? Small controls matter a lot in live settings.

Third is developer friction.

How fast can a team ship something stable? Clear documentation, predictable APIs, and solid support are essential. If engineers spend weeks fighting edge cases, that’s a red flag.
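Those three criteria can be sketched as a simple weighted scorecard. The weights, ratings, and SDK names below are illustrative placeholders, not a formal method from the interview:

```python
# Illustrative scorecard for comparing real-time SDKs on three criteria:
# reliability under real conditions, flexibility, and developer friction.
# Weights and ratings are hypothetical examples.
WEIGHTS = {"reliability": 0.5, "flexibility": 0.3, "dev_friction": 0.2}

def score_sdk(ratings):
    """ratings maps each criterion to a 0-10 rating (higher is better,
    so rate *low* developer friction as a *high* number)."""
    return sum(weight * ratings[criterion] for criterion, weight in WEIGHTS.items())

candidates = {
    "sdk_a": {"reliability": 9, "flexibility": 6, "dev_friction": 8},
    "sdk_b": {"reliability": 7, "flexibility": 9, "dev_friction": 5},
}
best = max(candidates, key=lambda name: score_sdk(candidates[name]))
```

Weighting reliability heaviest reflects the point above: a platform that looks great in demos but struggles with 200 real users from mixed environments fails the criterion that matters most.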

At the end of the day, I’m asking one simple question: when this goes live, will the audience notice the tech, or will they just stay focused on the conversation? If the tool fades into the background, it’s usually the right fit.

How do you set a latency budget and design around unstable networks to keep Q&A truly live?

I start by deciding what “live” actually has to mean for this Q&A.

For most streams, I aim for something like “question asked to answer heard” in the 2–5 second range. Under ~2 seconds is amazing but expensive and fragile. Over ~7–10 seconds, people start talking over each other and the chat feels disconnected from the host.

Then I work backwards into a latency budget, piece by piece.

  • Capture + encode: keep it light and avoid over-processing.
  • Transport: pick the lowest-latency path that still holds up on bad networks.
  • Playback buffer: this is the big lever, but it’s where “live” goes to die if you let it creep up.
  • Interaction path: chat or Q&A events need to be fast even if video isn’t.
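Working backwards through those pieces amounts to a simple budget check: allocate milliseconds to each stage and verify the total stays inside the "feels live" target. The per-stage numbers here are illustrative placeholders, not measurements:

```python
# Rough end-to-end latency budget check against a "question asked to
# answer heard" target of 2-5 seconds. Stage allocations are
# illustrative, not measured values.
budget_ms = {
    "capture_encode": 150,     # keep it light, avoid over-processing
    "transport": 400,          # lowest-latency path that survives bad networks
    "playback_buffer": 2000,   # the big lever; where "live" goes to die
    "interaction_path": 300,   # chat / Q&A events, fast even if video isn't
}

total_ms = sum(budget_ms.values())
TARGET_MAX_MS = 5000  # top of the 2-5 second range

assert total_ms <= TARGET_MAX_MS, f"over budget: {total_ms} ms"
```

The point of writing it down is that when something creeps (usually the playback buffer), you can see exactly which stage ate the headroom.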

On unstable networks, I design for "graceful degradation" instead of pretending everyone has perfect Wi-Fi.

So, audio gets priority, and video adapts down aggressively. If someone’s connection tanks, I’d rather keep them in the conversation on clean audio than freeze them out trying to hold 1080p. For audience Q&A, I’ll often separate the interaction channel from the media path so questions still land quickly even if the stream is buffering.
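An audio-first degradation policy like that can be sketched as a ladder of media profiles keyed on available bandwidth. The thresholds and profile names are illustrative, not taken from any specific SDK:

```python
# Sketch of graceful degradation: video quality steps down aggressively
# as bandwidth drops, while audio is protected until the very end.
# Bandwidth thresholds (kbps) are illustrative assumptions.
def media_profile(bandwidth_kbps):
    if bandwidth_kbps >= 2500:
        return {"video": "1080p", "audio": "high"}
    if bandwidth_kbps >= 1200:
        return {"video": "720p", "audio": "high"}
    if bandwidth_kbps >= 500:
        return {"video": "360p", "audio": "high"}
    if bandwidth_kbps >= 100:
        return {"video": "off", "audio": "standard"}  # audio-only fallback
    return {"video": "off", "audio": "low"}  # keep them in the conversation
```

Note the asymmetry: video drops three tiers before audio loses anything, which is the "clean audio beats frozen 1080p" trade-off in code form.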

One practical trick: I keep the host on the cleanest connection possible and assume everyone else does not. That mindset saves you.

What is your go-to voice signal chain and monitoring checklist that consistently prevents failures?

I keep it pretty simple, honestly. Fancy chains are great until something breaks five minutes before you go live.

My go-to voice chain is:

  • Good dynamic mic, usually something forgiving in untreated rooms.
  • Into a reliable audio interface with clean gain.
  • Light compression, just enough to smooth peaks.
  • High-pass filter to cut low rumble.

That’s it. No heavy EQ unless the room demands it. The more processing you stack, the more points of failure you introduce.

Monitoring is where the real protection happens.

Before every session, I check:

  1. Input level. Am I peaking if I laugh or get animated?
  2. Backup mic nearby and plugged in.
  3. Headphones, not speakers, to avoid echo loops.
  4. Network stability. Quick speed test and a glance at packet loss, if possible.
  5. Local recording rolling, even if the platform records. I never trust a single recording path.
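That checklist can even live as code: each item becomes a check, and the session is "go" only when all of them pass. The check names, thresholds, and the state dictionary below are illustrative stand-ins for real device and network probes:

```python
# Pre-session go/no-go checklist as code. Every check inspects a state
# snapshot; threshold values are illustrative assumptions.
PRE_FLIGHT = {
    "input_level": lambda s: s["peak_dbfs"] <= -6.0,        # headroom when animated
    "backup_mic": lambda s: s["backup_mic_connected"],
    "headphones": lambda s: s["monitoring"] == "headphones",  # no echo loops
    "network": lambda s: s["packet_loss_pct"] < 1.0,
    "local_recording": lambda s: s["local_record_rolling"],   # never one path
}

def go_no_go(state):
    """Return (ok, list of failed check names)."""
    failures = [name for name, check in PRE_FLIGHT.items() if not check(state)]
    return (len(failures) == 0, failures)
```

Feeding it a snapshot where only the local recording isn't rolling would return a no-go naming exactly that check, which is the whole point: small, preventable things get caught before the audience does.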

And one small thing people skip: I join as a participant from a second device. Mute it, but listen. That’s how you catch clipping, gating issues, or weird compression artifacts before your audience does.

Most failures aren’t dramatic. They’re small, preventable things. So I build a chain that’s boring and predictable. Boring is good in live audio.

You’ve emphasized join-time and stability in your work—which live KPIs and alerts do you watch during a broadcast?

Yeah, join-time and stability are the early warning signs.

During a live broadcast, I’m not staring at vanity metrics. I’m watching friction.

The first KPI is join success rate in the first few minutes. If people are clicking the link but not fully connecting, something’s wrong. You’ll see it in drop-offs before the intro even finishes.

The second is latency drift. Not just raw latency, but whether it’s creeping up. If your delay slowly climbs from 3 seconds to 9 seconds, your Q&A starts feeling awkward. Hosts begin talking over questions because timing feels off.

The third is packet loss and jitter, especially for speakers. Even small spikes can make someone sound robotic or clipped. I’d rather downgrade video instantly than let the voice break up. The audience will forgive soft video. They won’t forgive broken audio.

The fourth is mute state and audio presence. You’d be surprised how often a guest is “connected” but not actually sending usable audio. I keep an eye on audio level indicators constantly.

Then there’s a simple human KPI: chat velocity. If chat suddenly drops to zero during an active segment, that’s a signal. Either engagement has died, or something technical is slowing people down.

In live environments, you’re looking for patterns, not perfection. Small shifts tell you more than big crashes. If you catch the drift early, you can correct before the audience feels it.
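The latency-drift check in particular is easy to automate: compare a rolling window of recent samples against the session's baseline and flag a slow creep before the host and chat fall out of sync. Window size and drift threshold here are illustrative assumptions:

```python
# Sketch of a latency-drift alert: flags when the rolling average of
# recent latency samples creeps well above the session baseline.
# Window size and threshold are illustrative.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_s, window=10, max_drift_s=3.0):
        self.baseline = baseline_s
        self.samples = deque(maxlen=window)
        self.max_drift = max_drift_s

    def add(self, latency_s):
        self.samples.append(latency_s)

    def drifting(self):
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data to judge yet
        avg = sum(self.samples) / len(self.samples)
        return (avg - self.baseline) > self.max_drift
```

With a 3-second baseline, samples climbing toward 9 seconds trip the alert while the shift is still correctable, rather than after the Q&A already feels awkward.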

During top-of-the-hour start spikes, what load-testing or traffic-distribution tactic has saved a stream for you in the real world?

The tactic that has saved me the most is boring but effective: I don’t let “everyone joins at :00” hit one choke point.

So I’ll do two things.

First, I create a staged lobby and pre-join window. I open the room 5–10 minutes early, even if the show starts at the hour. People trickle in, their devices settle, and you avoid that single-minute thundering herd where authentication, signaling, and media all spike at once. You’d be surprised how many issues disappear when joins aren’t synchronized.

Second, I split traffic paths on purpose. Viewers and interactive participants don’t need to hit the same stack. If you’ve got a “watch-only” path and a “Q&A / on-stage” path, you can protect the interactive layer from getting crushed by passive viewers. Worst case, the watch stays smooth while interaction is rate-limited instead of everything falling over.

And I always have an operational “oh no” lever ready, like temporarily forcing audio-first or lowering maximum video resolution when I see joins accelerating. People would rather have a stable stream than a perfect one.
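Both tactics can be sketched in a few lines: jitter each client's connect time across the pre-join window so signaling load trickles in, and route passive viewers away from the interactive stack. The window length, role names, and path labels are illustrative assumptions:

```python
# Sketch of join-spike mitigation: (1) spread client connects across a
# pre-join window instead of all at :00, (2) keep watch-only viewers off
# the interactive stack. Values and role names are illustrative.
import random

PRE_JOIN_WINDOW_S = 600  # room opens 10 minutes early

def connect_offset(client_seed=None):
    """Seconds after room-open at which this client begins signaling,
    jittered so joins are not synchronized to the top of the hour."""
    return random.Random(client_seed).uniform(0, PRE_JOIN_WINDOW_S)

def route(participant):
    """Only hosts, speakers, and on-stage Q&A participants hit the
    interactive stack; everyone else gets the cheaper broadcast path."""
    if participant.get("role") in {"host", "speaker", "qa"}:
        return "interactive"
    return "watch_only"
```

The split means a crush of passive viewers can, at worst, get rate-limited on interaction while the watch path stays smooth, instead of everything falling over together.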

What pre-event communications or in-session UX tweak most reliably boosts attendance or on-mic participation for you?

The single biggest boost to attendance? A plain reminder that tells people exactly how to join.

Not a long promo email. Just a short note the day of that says, “Starts at 2 PM. Click this link. Or dial this number if your WiFi’s shaky.” When people know there’s a backup option, they’re more likely to show up. I’ve seen attendance jump just by adding a simple dial-in line for people who are between meetings or commuting.

For on-mic participation, it’s an in-session tweak.

I normalize speaking early. Within the first five minutes, I’ll invite one low-stakes response. Something quick, like, “Drop where you’re joining from,” then, “Anyone want to expand on that live?” Once one person unmutes and has a good experience, others follow. Silence at the start kills participation.

Also, clear rules reduce hesitation. If people don’t know how to raise a hand, how long they’ll speak, or whether they’ll get interrupted, they stay muted. When the process is obvious, more voices come in.

Most people don’t need convincing. They just need clarity and a safe first step.

Thanks for sharing your knowledge and expertise. Is there anything else you'd like to add?

One thing I’d add, because it gets overlooked, is that most live failures aren’t technical; they’re expectation gaps.

People don’t drop off just because of latency. They drop off because they don’t know when they’ll speak, how they’ll participate, or whether it’s worth staying. The tech matters a lot, but structure and clarity matter just as much.

I’ve learned to design live sessions assuming attention is fragile. Clear entry. Clear purpose. Clear next step. Even something as simple as telling people, “We’ll open Q&A in 20 minutes,” keeps them oriented.

And honestly, boring reliability beats flashy features every time. If the stream starts on time, the audio is clean, and people feel heard, that’s a win. Everything else is decoration.
