Interview with Shane Larrabee, President/Founder, FatLab Web Support


This interview is with Shane Larrabee, President/Founder, FatLab Web Support.

To start, could you introduce yourself and share the WordPress hosting and performance challenges you focus on today?

I’m Shane Larrabee, President and Founder of FatLab Web Support, a WordPress hosting and support company I started in 2011. We manage over 200 client websites, primarily for nonprofits, professional associations, and advocacy organizations, as well as for agencies through white-label partnerships.

Before FatLab, I spent about a decade in web development and design, including time as a partner at a design studio and earlier stints at PR agencies such as Ogilvy, Brodeur, and the Hoffman Agency. That mix of backgrounds gave me perspective on how websites fit into the bigger picture of an organization’s communications—and how often the technical side gets overlooked once a site launches.

The main challenges I focus on today fall into a few categories:

  • Security without complexity. WordPress powers over 40% of the web, which makes it a constant target. Our goal has been to assemble a comprehensive security approach—web application firewalls, real-time malware protection, proactive monitoring—so that clients don’t have to think about it. The challenge is making protection invisible so organizations can focus on their missions rather than worrying about the next vulnerability.
  • Keeping WordPress stable and current. WordPress core, themes, and plugins release updates constantly. Most hosts either ignore them or apply them blindly, hoping for the best. We’ve put systems in place that keep sites up to date while monitoring for issues—if something breaks, a developer gets notified before the client even knows there was a problem.
  • Actual support, not just hosting. The biggest challenge I see in this industry is the gap between what clients need and what they actually get. That gap exists on the hosting side, but it also exists in the handoff from the agency that originally built the site—whether that’s a development shop, a marketing agency, or a design studio. Projects launch, teams move on, and suddenly the client is on their own. We fill that post-launch void. Organizations don’t just need a server; they need someone who can troubleshoot why their donation form stopped working or explain why their site slowed down.
  • Reliability when it matters. Many of our clients experience traffic spikes around events, campaigns, or advocacy pushes. A nonprofit launching a fundraising drive can’t afford downtime at the worst possible moment. We think about those peaks from the start, not as an afterthought.

How did you get here—what experiences most shaped how you approach developer support, performance, updates, and security on WordPress?

I started my career in 1999 at Ogilvy Public Relations in Washington, D.C., as organizations were figuring out how to use the web. I was building interactive press kits and media rooms for Fortune 500 clients—Ford, MasterCard, and HP—and won a couple of PRSA Anvil Awards for that work. It was an exciting time, but I was always more interested in the building than the pitching.

From there, I moved through a few other agencies. Eventually, I became a partner at a design studio, gaining hands-on experience across the full lifecycle of web projects. That’s where I really learned how the sausage gets made—and where I started noticing a pattern.

Agencies would pour months into designing and building a beautiful site, launch it with fanfare, and then move on to the next project. The client was left holding something they didn’t fully understand, with no clear plan for updates, security, or what to do when something broke. I watched organizations struggle with this repeatedly.

When I started FatLab in 2011, I wanted to solve that post-launch problem. Not just hosting in the traditional sense, but ongoing support—the stuff that falls through the cracks when the original team moves on.

The security focus came from necessity. Early on, a few clients got hacked, and cleaning up those messes taught me that reactive security isn’t security at all. So we built prevention into everything we do. The same goes for updates—I saw too many sites break because someone clicked “update all” without testing, or worse, ignored updates entirely until the site was running three-year-old code full of vulnerabilities.

The performance aspect ties back to understanding what websites are actually for. These aren’t just technical projects; they’re how organizations communicate, raise money, and serve their members. When a site goes down during a fundraising campaign or loads slowly for a first-time visitor, that’s a real cost. My PR background led me to view websites as communication tools first and technical infrastructure second.

Everything I do now comes from watching what goes wrong when nobody’s paying attention.

Building on that, when you inherit a WordPress site, which three diagnostics do you run in the first 48 hours to pick the right hosting architecture?

When we inherit a site, the first 48 hours are really about answering three questions: What are we signing up for? Where are the risks? And can we actually help?

Traffic patterns and content rhythm. First, I want to understand how this site lives in the world. Is it a quiet brochure site that gets a steady trickle of traffic? Does it belong to an organization that runs in press cycles or advocacy campaigns where traffic might spike 10x overnight? Is there heavy ad spend driving visitors? And how often is content being updated—weekly blog posts or a site that hasn’t been touched in two years? This tells us which infrastructure makes sense. A membership organization running fundraising campaigns needs different resources than a law firm with a static site. We’re planning for peaks, not just averages.

Build quality and technical debt. Next, I want to know how the site was actually built. Was this done by a professional developer with clean code, or was it cobbled together from a commercial theme and 40 plugins? Is there bloat—unused plugins, redundant functionality, or page builders stacked on top of page builders? This evaluation tells us two things: how much technical support the site will need and how sustainable that support will be in the long term. A well-architected site is a pleasure to maintain. A bloated site built on shaky foundations will nickel-and-dime everyone forever. Sometimes, we have honest conversations with clients about whether a rebuild makes more sense than ongoing life support.

Real-world performance. Finally, we look at speed and performance—not to chase perfect Google scores, but because performance is user experience. How fast does the site actually load for a real visitor? Where are the bottlenecks? We use Core Web Vitals as one reference point, but they’re indicators, not gospel. What matters is whether someone visiting on a phone has a good experience or bounces because the page takes 6 seconds to render. This baseline also helps us show value later—clients can see concrete improvements, not just trust that we’re doing something behind the scenes.

These three things together tell us whether we can genuinely help and what that help should look like.
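On the performance piece specifically, here’s a rough Python sketch of how a baseline like that can be captured: it pulls real-user Core Web Vitals field data from Google’s public PageSpeed Insights API for a single URL. The URL and API key are placeholders, and this isn’t our actual tooling; it’s just one simple way to record a before-and-after reference point.

```python
# Rough sketch: capture a Core Web Vitals baseline for one URL using the
# public PageSpeed Insights v5 API. The URL and API key are placeholders.
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def cwv_baseline(url: str, api_key: str = "", strategy: str = "mobile") -> dict:
    """Return the real-user (CrUX) field metrics PSI reports for the URL."""
    params = {"url": url, "strategy": strategy}
    if api_key:
        params["key"] = api_key
    resp = requests.get(PSI_ENDPOINT, params=params, timeout=60)
    resp.raise_for_status()
    # "loadingExperience" holds field data; it may be missing for low-traffic sites.
    metrics = resp.json().get("loadingExperience", {}).get("metrics", {})
    return {name: {"percentile": m.get("percentile"), "rating": m.get("category")}
            for name, m in metrics.items()}

if __name__ == "__main__":
    for name, m in cwv_baseline("https://example.org").items():
        print(f"{name}: {m['percentile']} ({m['rating']})")
```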

Once the platform is set, what does your ideal developer workflow—from local to staging to production—look like for teams you support?

It depends on the situation, and that flexibility is intentional.

For urgent issues—a plugin conflict that’s breaking functionality, a bug that’s affecting visitors, or something that needs to be fixed now—we work directly on the production site. The goal is stabilization. If a plugin update caused the problem, we roll it back. If something’s conflicting, we isolate and deactivate it. We’re not going to make a client wait while we spin up a local environment to fix something that’s actively hurting their site. That said, we’re not reckless—we have real-time backups in place, so we can move quickly without taking unnecessary risks.
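To give a concrete picture of that kind of stabilization, here’s a minimal sketch that drives WP-CLI from Python to deactivate a conflicting plugin or pin one back to a known-good version. The site path, plugin slug, and version are hypothetical placeholders; deciding what to roll back is still a human judgment call.

```python
# Minimal sketch: stabilize a live site by deactivating a conflicting plugin
# or reinstalling a known-good version via WP-CLI.
# The site path, plugin slug, and version below are hypothetical placeholders.
import subprocess

SITE_PATH = "/var/www/example-site"  # placeholder path to the WordPress install

def wp(*args: str) -> None:
    """Run a WP-CLI command against the site and fail loudly on error."""
    subprocess.run(["wp", f"--path={SITE_PATH}", *args], check=True)

def deactivate_plugin(slug: str) -> None:
    wp("plugin", "deactivate", slug)

def rollback_plugin(slug: str, known_good_version: str) -> None:
    # Reinstalling an older version with --force overwrites the current files.
    wp("plugin", "install", slug, f"--version={known_good_version}", "--force")

if __name__ == "__main__":
    # Example: a hypothetical forms plugin update broke submissions.
    rollback_plugin("example-forms-plugin", "3.2.1")
```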

For larger work—such as new features, design changes, and functionality builds—we follow a more traditional workflow. Our team develops and tests locally first, then pushes to a staging environment. Every client gets a staging site with their hosting. This gives both our internal team and the client a chance to review the work in progress and sign off on the final results before anything reaches production. For organizations without internal developers, which is most of our clients, that staging site is their window into the work. They can click around, test forms, see how things look on mobile—whatever they need to feel confident before we go live.
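As an illustration of the mechanics behind a staging copy, the sketch below refreshes a staging database from production with WP-CLI and rewrites the URLs. The paths and domains are made up, file syncing is left out, and it’s a simplified picture rather than our provisioning system.

```python
# Simplified sketch: refresh a staging site's database from production with
# WP-CLI, then rewrite URLs so links point at the staging domain.
# Paths and domains are hypothetical; file sync (uploads, themes) is omitted.
import subprocess

PROD_PATH = "/var/www/example-prod"
STAGE_PATH = "/var/www/example-staging"
PROD_URL = "https://example.org"
STAGE_URL = "https://staging.example.org"
DUMP_FILE = "/tmp/prod-copy.sql"

def wp(path: str, *args: str) -> None:
    subprocess.run(["wp", f"--path={path}", *args], check=True)

def refresh_staging() -> None:
    wp(PROD_PATH, "db", "export", DUMP_FILE)   # dump the production database
    wp(STAGE_PATH, "db", "import", DUMP_FILE)  # load it into staging
    wp(STAGE_PATH, "search-replace", PROD_URL, STAGE_URL, "--skip-columns=guid")

if __name__ == "__main__":
    refresh_staging()
```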

Once approved, we deploy to production. We don’t run a universal deployment schedule across all clients. With over 200 sites serving different organizations with different needs, that wouldn’t make sense. Some clients want updates deployed immediately, while others need to coordinate with internal announcements or campaigns. Some have compliance considerations. We adapt to each organization’s operating rhythm rather than forcing them into ours.

The common thread is communication. Clients know when we’re working, what we’re doing, and when to expect results. No surprises.

Shifting to performance, tell us about a time you rescued a slow or unstable WordPress site; what was the root cause and how did you fix it under pressure?

This is one of the most common scenarios we see. A client comes to us in crisis—their site is crawling, throwing intermittent errors, or hitting the white screen of death at the worst possible moment. There’s usually a campaign running, a fundraising push, press attention, or something else that makes the timing especially painful.

More often than not, they’re coming from a budget host like GoDaddy or Bluehost. I don’t say that to knock those companies—they serve a purpose—but they sell rigid hosting packages with fixed resources. When a site outgrows those limits, the host either can’t diagnose the issue or has little incentive to. The typical response is “upgrade to a bigger plan,” but nobody explains what that actually means or whether it will solve the problem. The client is left guessing.

When we take on one of these rescues, the first step is figuring out what’s actually choking. Is the application running out of memory? Is the database getting hammered? Is there a plugin that performs an inefficient operation on every page load? Sometimes the site itself is fine—it’s just being starved of resources it was never given.

Our advantage is the ability to scale resources in real time. If the application needs more RAM, we add it. If CPU is the bottleneck, we adjust. If the database is overwhelmed, we can offload some of the work with object caching using Redis. For sites with heavy traffic, we can implement full-page caching at the CDN level so we don’t even rely on the origin server to render every request—that alone can be transformative.
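A quick way to sanity-check edge caching like that is to look at the cache-status header the CDN returns. This little sketch assumes Cloudflare, which reports a cf-cache-status response header; the URL is a placeholder.

```python
# Sketch: check whether a page is being served from a CDN edge cache.
# Assumes Cloudflare, which returns a "cf-cache-status" header
# (HIT, MISS, BYPASS, DYNAMIC, ...). The URL is a placeholder.
import requests

def edge_cache_status(url: str) -> str:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.headers.get("cf-cache-status", "header not present")

if __name__ == "__main__":
    # Hit the page twice: the first request may warm the cache,
    # the second should report HIT if full-page caching is active.
    for attempt in (1, 2):
        print(f"attempt {attempt}: {edge_cache_status('https://example.org/')}")
```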

The goal in these situations is stabilization first, optimization second. Get the site performing, stop the bleeding, and then figure out longer-term improvements once the pressure is off. We’ve done this enough times that we can usually move fast. Sometimes the difference between a site struggling and a site humming is a matter of hours, not days.

To make those wins stick, how do you structure caching and CDN policies so hit rates stay high without breaking dynamic features?

The honest answer is that caching is best approached on a case-by-case basis. There’s no universal configuration that works for every site, and forcing one usually means either breaking something dynamic or leaving performance on the table.

For a brochure site that rarely changes, we go aggressive with full-page caching and long TTLs at the CDN edge. Both static assets and rendered HTML get served from the location closest to the visitor, using systems designed specifically for fast delivery. The origin server does almost no work. For sites like this, it’s the cleanest solution, and the performance gains are dramatic.

For more dynamic sites—news organizations, advocacy groups pushing rapid updates, membership platforms—full-page caching can actually become a problem. If content changes frequently and visitors see stale information, the caching layer is now working against the organization. In those cases, we shift to server-level caching with tools like Varnish or Memcached, using shorter TTLs that we can control more granularly. Static assets still get cached aggressively, but dynamic pages might only cache for an hour or less.

For data-heavy applications—sites pulling from APIs, displaying real-time information, running complex membership queries—we’ll add object caching with Redis to reduce database load. And sometimes the right answer is a hybrid: full-page caching at the edge for the main marketing pages, while carving out the dynamic sections to bypass that layer entirely.
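To make the hybrid approach concrete, here’s a small sketch of a path-based cache policy: long TTLs for marketing pages, shorter TTLs for fast-moving sections, and a full bypass for logged-in or transactional areas. The paths and TTL values are illustrative, not a real client configuration.

```python
# Illustrative sketch of a hybrid cache policy: which paths get full-page
# caching, for how long, and which bypass the cache entirely.
# Paths and TTLs are placeholders, not a real client configuration.
from fnmatch import fnmatch

BYPASS_PATTERNS = ["/wp-admin/*", "/members/*", "/cart/*", "/donate/*"]

TTL_RULES = [             # (path pattern, TTL in seconds)
    ("/news/*", 600),     # fast-moving content: 10 minutes
    ("/blog/*", 3600),    # regular content: 1 hour
    ("/*", 86400),        # everything else (marketing pages): 1 day
]

def cache_policy(path: str) -> dict:
    """Return the caching decision for a request path."""
    if any(fnmatch(path, pattern) for pattern in BYPASS_PATTERNS):
        return {"cache": False, "ttl": 0}
    for pattern, ttl in TTL_RULES:
        if fnmatch(path, pattern):
            return {"cache": True, "ttl": ttl}
    return {"cache": True, "ttl": 86400}

if __name__ == "__main__":
    for p in ["/", "/news/press-release", "/members/account", "/blog/annual-report"]:
        print(p, cache_policy(p))
```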

We always consider traffic volume and patterns. A CDN works by populating cached content across edge locations, but that only helps if there’s enough traffic to keep those caches warm. A low-traffic site with visitors scattered globally may not see the same benefits as a high-traffic site with concentrated audiences. We’re always balancing technical configuration with real-world usage and client expectations—how quickly does this information need to reach people, and who’s looking at it?

For ongoing reliability, what monitoring and alerting setup has proven most effective at catching issues before users notice?

We don’t rely on any single monitoring solution. Just as with backups and security, we layer our monitoring so that no single blind spot can let an issue slip through.

The first layer is a third-party service that monitors from geographically distributed locations worldwide. We configure these based on where the client’s audience actually is—if it’s a U.S.-focused organization, we monitor from locations across the United States; if they have global reach, we set up monitors internationally. This service checks for both uptime and performance. If response times exceed a threshold or an outage is detected, the system automatically validates against a second location before alerting us. That confirmation step reduces false positives caused by regional network blips.
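The confirmation step itself is easy to express in code. Here’s a bare-bones sketch of the idea: if a check fails, wait and check again before paging anyone. The URL, threshold, and notify function are placeholders, and a real setup re-checks from a second geographic location rather than from the same machine.

```python
# Bare-bones sketch of "confirm before alerting": a failed check is
# re-validated after a short delay before anyone gets paged.
# The URL, threshold, and notify() are placeholders; a real setup would
# re-check from a second geographic location, not the same machine.
import time
import requests

SLOW_THRESHOLD_SECONDS = 5.0

def check(url: str) -> bool:
    """Return True if the site responds without a server error, in time."""
    try:
        resp = requests.get(url, timeout=SLOW_THRESHOLD_SECONDS)
        return resp.status_code < 500
    except requests.RequestException:
        return False

def notify(message: str) -> None:
    print(f"ALERT: {message}")  # placeholder for a real paging integration

def monitor(url: str, recheck_delay: int = 30) -> None:
    if check(url):
        return
    time.sleep(recheck_delay)  # wait, then confirm; avoids paging on a blip
    if not check(url):
        notify(f"{url} failed two consecutive checks")

if __name__ == "__main__":
    monitor("https://example.org")
```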

The second layer is internal infrastructure monitoring managed by our engineering team. This watches the servers and individual applications directly—resource utilization, service hangs, memory issues, and database performance. It catches infrastructure-level problems that might not yet show up in a page-load test but will cause issues if left unaddressed.

The third layer is a system we built ourselves, integrated into our CRM. It independently checks website health from several U.S. locations and provides a secondary alert path if something goes wrong. This same system powers a dashboard that clients can access, giving them visibility into their own uptime without asking us.

The philosophy is simple: monitor the page from the outside like a visitor would, monitor the server from the inside like an engineer would, and then double-check both with an independent system. Every part of the stack gets watched.

This setup has proven effective. Most issues get caught and resolved before they escalate into outages—and more often than not, before the client even knows anything happened.

When it’s time to update core, themes, and plugins, how do you run safe updates at scale while keeping rollback fast and painless?

Updates are one of those things that seem simple until you’re managing hundreds of sites. Manually updating each one isn’t realistic, but blindly pushing updates to production is what breaks sites.

We’ve implemented an automated update system that handles this at scale without the risk. For each site, the system spins up a staging environment, applies all pending updates—core, themes, plugins—and then runs regression testing across key pages to check for front-end issues. It’s looking for errors, visual changes, and anything that suggests the update broke something.

If everything passes, the updates get applied to production automatically. If something fails—a plugin conflict, a PHP error, or a visual regression—the production site remains untouched, and our team is alerted to investigate. The client’s live site is never affected by a bad update.
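Stripped to its essentials, the flow looks something like the sketch below: apply updates to a staging copy, run a basic regression pass over key pages, and only touch production if that passes. The paths, URLs, and page list are hypothetical, and the real system does considerably more, including visual regression checks.

```python
# Simplified sketch of a staged update pipeline: update staging, run a basic
# regression check on key pages, and only proceed to production on success.
# Paths, URLs, and the page list are hypothetical placeholders; a real
# pipeline would also diff screenshots and watch error logs.
import subprocess
import requests

STAGE_PATH = "/var/www/example-staging"
PROD_PATH = "/var/www/example-prod"
STAGE_URL = "https://staging.example.org"
KEY_PAGES = ["/", "/about/", "/donate/", "/contact/"]

def wp(path: str, *args: str) -> None:
    subprocess.run(["wp", f"--path={path}", *args], check=True)

def apply_updates(path: str) -> None:
    wp(path, "core", "update")
    wp(path, "plugin", "update", "--all")
    wp(path, "theme", "update", "--all")

def regression_ok(base_url: str) -> bool:
    for page in KEY_PAGES:
        resp = requests.get(base_url + page, timeout=30)
        if resp.status_code != 200 or "Fatal error" in resp.text:
            return False
    return True

if __name__ == "__main__":
    apply_updates(STAGE_PATH)
    if regression_ok(STAGE_URL):
        apply_updates(PROD_PATH)  # staging passed; repeat on production
    else:
        print("Staging regression failed; production left untouched.")
```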

The most common failure we see is actually a lapsed license on a premium plugin. The update can’t complete because the plugin can’t authenticate. This system also helps clients stay current on their licenses, which is a security consideration in itself.

Every site goes through this process weekly, with critical WordPress security updates applied on an as-needed basis outside that schedule. The whole process is non-disruptive—clients don’t experience downtime, and most never know it’s happening.

For us, as the service provider, this approach means we spend hands-on time only on sites that actually need attention. Instead of manually updating 200 sites and hoping nothing breaks, we focus on the exceptions. It’s the only way to maintain this many sites responsibly without cutting corners.

Finally, based on hard-earned lessons, what security stack and operational practices now anchor how you defend production WordPress sites?

We approach security in layers, each one addressing a different type of risk.

The first layer is a web application firewall, specifically not a plugin-based one. We use Cloudflare Enterprise across all our sites. The problem with plugin firewalls is that they’re reactive—by the time they act, the attack is already hitting your server. A cloud-based WAF operates at the edge, filtering malicious traffic before it ever reaches your infrastructure. We also benefit from the intelligence that Cloudflare gathers across millions of sites and massive amounts of traffic. Threats identified anywhere in their network get blocked everywhere, including our sites. It’s protection we couldn’t replicate on our own.

The second layer is real-time malware monitoring on the server itself. We run Imunify360, which monitors for malicious uploads, injections, or executions. Because it’s real-time, we catch issues immediately rather than discovering a compromise days or weeks later during a scan. The signatures and rules are maintained by a company that specializes in this, so we’re not dependent on plugins that require manual updates to stay current.

The third layer is keeping software up to date. Outdated plugins, themes, and WordPress core remain one of the biggest attack vectors for any site. Our automated update system handles this weekly, with critical security patches applied as soon as they’re released. It’s unsexy but essential—most compromises we’ve seen in the wild trace back to software that should have been updated months earlier.

Finally, backups serve as the last line of defense. We store backups both on-server and off-server, updated regularly. If something catastrophic happens—whether it’s a security incident, a bad deployment, or human error—we can roll back quickly. It’s not a security measure in the traditional sense, but it’s what lets us recover fast when everything else fails.
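To round that out, here’s a bare-bones sketch of the backup-and-retention idea: dump the database, archive the files, and prune copies older than the retention window. The paths and retention period are placeholders, and the off-server copy, which is what really makes backups a last line of defense, is left out for brevity.

```python
# Bare-bones sketch of backup plus retention: dump the database with WP-CLI,
# archive the site files, and prune local copies past the retention window.
# Paths and retention period are placeholders; shipping a copy off-server
# (omitted here) is what makes this a genuine last line of defense.
import subprocess
import tarfile
import time
from pathlib import Path

SITE_PATH = Path("/var/www/example-site")
BACKUP_DIR = Path("/var/backups/example-site")
RETENTION_DAYS = 30

def backup() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    sql_file = BACKUP_DIR / f"db-{stamp}.sql"
    subprocess.run(["wp", f"--path={SITE_PATH}", "db", "export", str(sql_file)], check=True)
    archive = BACKUP_DIR / f"site-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SITE_PATH, arcname=SITE_PATH.name)  # site files
        tar.add(sql_file, arcname=sql_file.name)    # database dump
    return archive

def prune() -> None:
    cutoff = time.time() - RETENTION_DAYS * 86400
    for item in BACKUP_DIR.iterdir():
        if item.is_file() and item.stat().st_mtime < cutoff:
            item.unlink()

if __name__ == "__main__":
    backup()
    prune()
```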

Thanks for sharing your knowledge and expertise. Is there anything else you'd like to add?

Just that the technical stuff only matters if it’s in service of something. We work primarily with nonprofits, associations, and advocacy organizations—groups that are doing something meaningful in the world. Their websites aren’t vanity projects; they’re how they communicate, fundraise, organize, and serve their members.

What I’ve learned over 14 years of running FatLab is that these organizations don’t need a hosting company. They need a partner who actually picks up the phone, tells them the truth, and solves problems without making everything feel like an upsell. The technology we’ve built is in service of that relationship, not the other way around.

If there’s one piece of advice I’d offer to anyone evaluating a hosting or support partner: ask them what happens when something goes wrong. Not the sales pitch version—the real version. How fast do they respond? Who’s actually doing the work? What’s not included? The answers tell you a lot about whether you’re entering a partnership or just buying a service.
