This interview is with Pierre Duval, Head of Institutional Partnerships & Growth, basis.pro.
To kick things off, could you introduce yourself—your role as Head of Institutional Partnerships & Growth at BASIS—and share how your work intersects with crypto staking and rewards across BTC, ETH, and SOL?
I’m Pierre, Head of Institutional Partnerships & Growth at BASIS. My work sits at the intersection of institutional capital access and execution-layer infrastructure — specifically how we bring hedge-fund-grade arbitrage capabilities to a broader range of participants.
At BASIS, we don’t operate as a traditional staking platform. Our model is built around capturing structural market inefficiencies in real-time price gaps, funding differentials, and liquidity fragmentation across BTC, ETH, SOL, and PAXG. Rather than passively holding assets and distributing inflationary rewards, we actively deploy capital through our proprietary BHLE engine, which operates at sub-50-microsecond execution latency.
My role specifically focuses on building relationships with institutional partners, quantitative trading firms, liquidity providers, and family offices, while also driving the broader growth strategy as we transition from private testing into public access. The goal is to close the long-standing gap between institutional execution quality and what has been available to the wider market.
How did your career path lead you into building institutional staking and yield programs, and what pivotal lesson shaped the way you approach partnerships and growth today?
My path into this space was shaped less by a single career trajectory and more by a recurring frustration: watching institutional-grade strategies generate consistent, defensible returns while remaining completely inaccessible to anyone outside a small circle of well-capitalized firms.
Early in my career, I worked closely with quantitative trading desks and saw firsthand how much of their edge came not from smarter predictions, but from superior execution infrastructure: lower latency, tighter risk controls, and deterministic rollback systems. The alpha wasn’t in the idea. It was in the plumbing.
That observation became the foundation of how I approach partnerships today. The pivotal lesson: institutional partners don’t buy promises; they buy proof. Before any relationship scales, they need to see documented performance under stress conditions, not marketing decks.
At BASIS, we applied that lesson directly. We ran months of private testing with Tier-1 institutional partners before opening any public access. The results — sub-50 μs p99 latency, 100K+ ops/second throughput, and 100% uptime — weren’t launch announcements. They were the prerequisites for having serious conversations at all.
That’s the mindset I bring to every partnership: earn the right to the conversation first, then build from there.
Building on that, how do you define “BTC staking” in an institutional context at BASIS, and which yield structures have actually passed legal, risk, and operational due diligence in your experience?
“BTC staking” is a term the industry has used loosely, and that looseness has caused real damage both to investors and to the credibility of the broader yield space. In an institutional context, we’re precise about this distinction.
Bitcoin has no native staking mechanism. It runs on proof-of-work. So when BASIS supports BTC yield generation, we’re not staking BTC in any protocol sense. We’re deploying it within a market-neutral execution framework, primarily inter-exchange arbitrage and delta-neutral funding rate capture, where the yield is structurally generated from real market inefficiencies, not from inflationary token emissions or opaque lending arrangements.
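The funding-rate leg of this framework can be illustrated with a minimal sketch. The function, the rates, and the fee figures below are hypothetical illustrations of how a delta-neutral carry (long spot, short an equal notional of perpetual futures) turns periodic funding payments into an annualized yield; they are not BASIS figures or BASIS code.

```python
# Hypothetical sketch of delta-neutral funding-rate capture: long spot BTC,
# short an equal notional of perpetual futures, collecting funding on the
# short leg. All rates and fees are illustrative, not BASIS figures.

def annualized_funding_yield(funding_rate_8h: float,
                             round_trip_fees: float,
                             periods_per_year: int = 3 * 365) -> float:
    """Funding collected by the short perp leg, net of entry/exit fees,
    expressed as a simple (non-compounded) annualized rate."""
    gross = funding_rate_8h * periods_per_year
    return gross - round_trip_fees

# Example: +0.01% funding every 8 hours, 0.10% total round-trip fees.
y = annualized_funding_yield(0.0001, 0.001)
print(f"{y:.2%}")  # 10.85% simple annualized, illustrative only
```

The point of the sketch is the structural one made above: the return is traceable to a specific market condition (positive perp funding), not to token emissions.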
In terms of what actually passes institutional due diligence: the structures that survive scrutiny share three characteristics.
- Transparent yield sourcing — you can trace where the return comes from and what market condition it depends on.
- Deterministic risk controls — defined slippage bounds, formal rollback procedures, and no discretionary overrides under stress.
- Verifiable operational integrity — uptime records, audit trails, and compliance certifications that can be independently validated.
BASIS is built around all three. Our ISO/IEC 27001:2022 and ISO/IEC 20000-1:2018 certifications, LEI registration, and documented private testing results exist precisely because institutional partners require independently verifiable proof, not self-reported metrics.
The yield structures that fail due diligence are almost always ones where the return source is unclear, the risk transfer is asymmetric in ways that aren’t disclosed, or the operational resilience has never been tested under real market stress. We’ve seen how that ends.
When you help a first-time allocator stand up an ETH staking program, what does your practical checklist look like (custody setup, validator selection, MEV policy, compounding cadence, SLAs), and which single decision most impacts net, realized rewards?
When working with first-time institutional allocators, the conversation must immediately shift from “What is the yield?” to “How is the execution risk isolated and managed?” Base consensus rewards are a commoditized baseline. The real differentiation is in structural security and execution performance.
- Custody & Key Architecture
Strict cryptographic separation of withdrawal credentials (cold storage via qualified custodians such as Fireblocks or Copper) from validator keys (held hot by the infrastructure provider). Whitelisted exits ensure that even in a worst-case infrastructure compromise, swept rewards and exited principal can only flow to the allocator’s whitelisted MPC vaults.
- Validator & Client Matrix
Active avoidance of super-majority consensus clients (e.g., Prysm) to eliminate correlation penalty risk during network finality issues. Prefer bare-metal over cloud where possible: centralized cloud reliance (AWS/GCP) introduces systemic concentration risk that most allocators underestimate.
- MEV Policy & Execution Layer
Define an explicit relay-routing policy before deployment: OFAC-compliant relays only, or non-censoring relays. Block building is a latency game. Sub-50 μs execution capabilities ensure validators capture the highest-value blocks rather than being front-run by faster participants.
- Compounding Cadence & Treasury Sweeping
Consensus-layer balances above 32 ETH don’t auto-compound. An automated sweeping cadence must be established. Skimmed rewards — execution layer tips, MEV, consensus rewards — are systematically batched and re-staked into new 32 ETH validators, or deployed into delta-neutral strategies to eliminate cash drag.
- Institutional SLAs & Risk Internalization
Insist on more than “best-effort” SLAs: contractual compensation for missed block proposals caused by infrastructure latency. And critically: who bears the slashing tail risk?
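The sweeping arithmetic in the checklist above can be sketched in a few lines. The function and the reward rate are illustrative assumptions, not BASIS tooling: skimmed rewards (execution tips, MEV, consensus rewards) accumulate across the fleet until they fund a fresh 32 ETH validator.

```python
# Illustrative sketch of the treasury-sweeping cadence: rewards skimmed from
# consensus-layer balances above 32 ETH are batched until they can activate
# a new validator. The daily reward rate is a hypothetical placeholder.

VALIDATOR_STAKE_ETH = 32.0

def days_to_next_validator(active_validators: int,
                           daily_rewards_per_validator_eth: float) -> float:
    """Days of swept rewards needed to fund one more 32 ETH validator."""
    daily_sweep = active_validators * daily_rewards_per_validator_eth
    return VALIDATOR_STAKE_ETH / daily_sweep

# Example: 100 validators each skimming ~0.0025 ETH/day.
print(round(days_to_next_validator(100, 0.0025), 1))  # 128.0 days
```

The shorter this interval, the larger the cash-drag cost of an ad-hoc sweeping cadence, which is why the checklist treats automation here as mandatory rather than optional.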
The single decision that most impacts net realized rewards: P&L alignment on execution risk and slashing.
Standard providers operate on a pass-through model: they charge a fee on the upside while the allocator absorbs principal loss on a slash or prolonged outage.
At BASIS, we internalize the execution risk entirely. Our model is strictly profit-based for the client. We absorb the downside — slashing, missed executions — and take a percentage of generated profit only. When you remove the tail-risk of principal loss from the allocator’s balance sheet, net realized yield becomes mathematically stable and deterministic.
Zooming in on Solana, what operational controls—such as client diversity, failover runbooks, and real-time monitoring—have you found essential to keep SOL rewards predictable during periods of network stress?
Solana’s architecture makes this question different from Ethereum. There’s no slashing in the traditional sense, but the operational risks are just as real and, in some ways, more subtle.
Client diversity is the starting point. Solana’s validator ecosystem has historically been dominated by a small number of client implementations. Running independent validator clients where available and actively monitoring super-majority exposure at the network level is essential. A correlated outage across homogeneous infrastructure during a network halt isn’t theoretical—it’s happened.
Failover runbooks need to be tested, not just written. The key failure modes on Solana are vote-account delinquency, missed slots during leader windows, and RPC endpoint degradation during high-throughput periods. Each of these has a different recovery path. Runbooks that haven’t been executed under simulated stress conditions are documentation, not operational controls.
Real-time monitoring must cover validator vote latency, skip rate relative to the network average, and stake-weighted APY deviation, not just uptime. A validator that’s technically “up” but consistently missing leader slots is silently degrading rewards without triggering standard availability alerts.
During periods of network stress, the controls that matter most are RPC redundancy across geographically distributed endpoints, automated alerting on delinquency thresholds before they compound, and a clear escalation path that doesn’t depend on manual intervention during off-hours.
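A minimal monitoring check for the controls described above might look like the following. The field names, the 1.5x skip-rate multiplier, and the 128-slot delinquency threshold are illustrative assumptions, not BASIS configuration: the point is that alerts fire on relative degradation (skip rate vs. the network average, vote lag vs. the current slot), not on binary uptime.

```python
# Illustrative Solana validator health check: flag validators whose skip
# rate materially exceeds the network average, or whose last vote lags the
# current slot beyond a delinquency threshold. Thresholds are hypothetical.

DELINQUENCY_SLOTS = 128      # hypothetical alerting threshold
SKIP_RATE_MULTIPLIER = 1.5   # alert when skip rate is 1.5x network average

def check_validator(leader_slots: int, skipped_slots: int,
                    last_vote_slot: int, current_slot: int,
                    network_skip_rate: float) -> list[str]:
    """Return a list of human-readable alerts (empty means healthy)."""
    alerts = []
    skip_rate = skipped_slots / leader_slots if leader_slots else 0.0
    if skip_rate > SKIP_RATE_MULTIPLIER * network_skip_rate:
        alerts.append(f"skip rate {skip_rate:.1%} vs network {network_skip_rate:.1%}")
    if current_slot - last_vote_slot > DELINQUENCY_SLOTS:
        alerts.append(f"vote lag {current_slot - last_vote_slot} slots")
    return alerts

# Example: 10% skip rate against a 4% network average, voting is current.
print(check_validator(200, 20, 9_999_900, 10_000_000, 0.04))
# ['skip rate 10.0% vs network 4.0%']
```

This captures the "technically up but silently degrading" case: the example validator triggers no availability alert, yet its skip rate is eroding rewards.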
At BASIS, our sub-50 μs execution infrastructure and 100K+ ops/second throughput provide the foundation, but the operational discipline around monitoring and failover is what keeps SOL rewards predictable when the network itself isn’t.
Can you share a concrete instance where adjusting validator configuration or MEV-Boost settings materially improved rewards, and what trade-offs in latency, variance, or policy you accepted to achieve it?
During our private testing phase, one of the clearest examples came from execution routing configuration, specifically how we handled liquidity fragmentation across venues during high-volatility windows.
Early in testing, our arbitrage strategies were routing orders through a fixed venue priority stack. During a period of significant market stress, we observed that the highest-value execution windows were being systematically missed not because of latency issues, but because liquidity depth at our primary venues was temporarily degraded. The signal was there; the routing logic wasn’t adapting fast enough.
The adjustment was dynamic venue re-ranking based on real-time liquidity depth and fill-quality metrics, rather than static priority. The result was a material improvement in captured spread during high-fragmentation periods — exactly the conditions where most execution infrastructure degrades.
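The re-ranking idea can be sketched simply. The scoring weights, field names, and venue data below are illustrative assumptions, not the BHLE engine's logic: venues are scored on a blend of real-time liquidity depth and recent fill quality, so the routing order shifts as conditions change rather than following a fixed priority stack.

```python
# A simplified sketch of dynamic venue re-ranking: score each venue on
# normalized liquidity depth and recent fill quality, then route in score
# order. Weights and data are hypothetical, not production parameters.

from dataclasses import dataclass

@dataclass
class VenueStats:
    name: str
    depth_usd: float     # resting liquidity within the target price band
    fill_quality: float  # recent fills vs quoted price, scaled 0..1

def rank_venues(venues: list[VenueStats],
                depth_weight: float = 0.6,
                fill_weight: float = 0.4) -> list[str]:
    """Return venue names ordered by a blended depth/fill-quality score."""
    max_depth = max(v.depth_usd for v in venues) or 1.0
    def score(v: VenueStats) -> float:
        return depth_weight * (v.depth_usd / max_depth) + fill_weight * v.fill_quality
    return [v.name for v in sorted(venues, key=score, reverse=True)]

# Example: venue B overtakes A once A's depth degrades under stress.
venues = [VenueStats("A", 200_000, 0.95),
          VenueStats("B", 900_000, 0.90),
          VenueStats("C", 500_000, 0.70)]
print(rank_venues(venues))  # ['B', 'C', 'A']
```

Re-scoring on every cycle is exactly where the computational overhead mentioned below comes from: each execution now carries a ranking pass that a static priority stack avoids.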
The trade-offs were real. Dynamic routing introduces more decision points per execution cycle, which adds computational overhead. Keeping that overhead within our sub-50 μs p99 latency target required significant optimization of the BHLE engine’s internal routing logic. The variance in execution path also required more granular monitoring. Static routing is easier to audit; dynamic routing requires real-time visibility into why each order went where it did.
The broader lesson: execution quality isn’t a fixed property of your infrastructure. It’s a function of how well your system adapts to market conditions that weren’t present during initial calibration.
Turning to risk management, describe a slashing event or near-miss you’ve handled: how it was detected, how you coordinated the response with partners, and the permanent process changes that followed.
The most instructive near-miss during our private testing phase wasn’t a slashing event (BASIS runs a market-neutral execution model, not validator infrastructure), but the underlying lesson is directly analogous.
During a high-volatility window in testing, our risk engine detected that slippage on one leg of an arbitrage execution had exceeded the predefined mathematical bound. The system was mid-execution: one leg had been filled, and the second hadn’t. Left unaddressed, this would have converted a market-neutral position into an unhedged directional exposure—exactly the failure mode our architecture is designed to prevent.
Detection was automated and immediate. The BHLE engine’s real-time risk controls flagged the slippage breach within the execution cycle itself, not in post-trade reconciliation. This is a critical distinction: systems that detect execution failures after the fact manage consequences; systems that detect them during execution can abort and unwind deterministically.
The response: the system initiated a deterministic rollback procedure, unwinding the filled leg at the best available price within defined parameters. No manual intervention. No discretionary override. The entire sequence—detection, abort decision, unwind initiation—occurred within our latency envelope.
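The decision logic described above can be reduced to a schematic. The function, the 10 bps bound, and the prices below are illustrative, not the actual rollback procedure: the point is that the outcome of every execution attempt is one of a small set of deterministic states, with no discretionary path.

```python
# Schematic abort-and-unwind logic for a two-leg arbitrage: if either leg
# breaches the predefined slippage bound, the system aborts or unwinds
# deterministically. Bound and prices are hypothetical illustrations.

MAX_SLIPPAGE = 0.0010  # 10 bps bound, illustrative

def execute_pair(leg1_expected: float, leg1_fill: float,
                 leg2_expected: float, leg2_quote: float) -> str:
    """Return the deterministic outcome of a two-leg execution attempt."""
    if abs(leg1_fill - leg1_expected) / leg1_expected > MAX_SLIPPAGE:
        return "abort-before-leg2"  # first leg breached: stop, stay hedgeable
    if abs(leg2_quote - leg2_expected) / leg2_expected > MAX_SLIPPAGE:
        return "rollback-leg1"      # unwind the filled leg: no directional exposure
    return "filled-both-legs"

# Example: leg 2 drifts ~20 bps from its expected price -> rollback.
print(execute_pair(100.00, 100.05, 100.10, 100.30))  # rollback-leg1
```

In the real system this check runs inside the execution cycle, which is the distinction drawn above: detecting the breach mid-execution is what makes the unwind an option at all.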
Coordination with our institutional partners was straightforward because the documentation was already in place. Every partner had been briefed on the rollback mechanism during onboarding. The post-incident report confirmed the system had performed exactly as specified.
The permanent process change: we tightened the pre-execution liquidity depth validation threshold. The near-miss wasn’t a system failure; it was the system working, but it revealed a calibration gap in how we assessed venue liquidity before committing to an execution sequence. That gap is now closed.
On reporting and auditability, what end-to-end data, reconciliations, and dashboards do you provide so staking is “ready for the auditor,” and which metrics have proven most decision-useful for institutional stakeholders?
“Ready for the auditor” is a useful frame because it forces you to design reporting backwards from the verification requirement, not forwards from what is convenient to produce.
At BASIS, the data architecture is built around three layers:
- Execution-level records. Every order, fill, and rollback event is logged with timestamps at microsecond resolution. This is not just for internal monitoring; it is the audit trail that allows any execution outcome to be reconstructed independently. When a position is opened, held, and closed, the full sequence is verifiable without relying on our own summary reporting.
- Position and P&L reconciliation. Daily reconciliation against exchange-level data is standard, but the more important reconciliation is at the strategy level: verifying that the realized spread captured matches what the market conditions at execution time would have predicted. Discrepancies at this level surface execution-quality issues before they compound.
- Institutional dashboard metrics. The metrics that have proven most decision-useful for institutional stakeholders are not headline yield figures. They include:
- Realized vs. expected spread capture by strategy
- Execution slippage distribution (p50, p95, p99)
- Rollback frequency and cause classification
- Uptime by execution venue
These metrics tell a sophisticated allocator whether the system is performing as designed, not just whether the number at the end of the month looks acceptable.
The single metric most consistently requested by institutional stakeholders is p99 execution latency over time, not just at inception. A system that performs at sub-50 μs during calm conditions but degrades under stress is a different risk profile than one that maintains that envelope consistently. Our private testing demonstrates 100% uptime and consistent p99 latency across the full testing window; this consistency is precisely what auditors and risk committees need to see.
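The "p99 over time" view can be sketched as a windowed percentile. The functions and the synthetic samples below are illustrative, not BASIS reporting code: computing p99 per window is what keeps a stressed period visible instead of averaged away in a single headline figure.

```python
# A minimal sketch of windowed p99 latency reporting: compute the p99 of
# each fixed-size window of latency samples so degradation under stress
# stands out. Sample data is synthetic and illustrative.
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile, p in 0..100."""
    s = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(s)))
    return s[k - 1]

def p99_by_window(latencies_us: list[float], window: int) -> list[float]:
    """p99 latency for each consecutive window of samples."""
    return [percentile(latencies_us[i:i + window], 99)
            for i in range(0, len(latencies_us), window)]

# Example: a calm window vs a stressed window (microseconds).
calm = [20.0] * 98 + [45.0] * 2
stressed = [20.0] * 98 + [80.0] * 2
print(p99_by_window(calm + stressed, 100))  # [45.0, 80.0]
```

A single p99 over the combined samples would blur the two regimes together; the per-window series is the shape a risk committee actually wants to inspect.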
Looking ahead 12–18 months, which developments—ETH Pectra and restaking/AVSs, Solana fee dynamics and validator concentration, or emerging Bitcoin yield primitives—do you expect to reshape institutional staking strategies, and how are you prioritizing partnerships in anticipation?
The next 12–18 months will separate platforms built for current conditions from those built for structural durability. Three developments stand out.
- ETH Pectra and restaking/AVSs. Pectra’s validator consolidation changes capital efficiency for institutional allocators. More significant is restaking—EigenLayer and its successors—which introduces correlated slashing risk and AVS-specific operational complexity that institutional due diligence frameworks haven’t fully caught up with. The yield premium is real; so is the tail risk. We see this as a space where execution infrastructure quality will determine which participants actually capture the premium.
- Solana fee dynamics and validator concentration. As Solana’s priority-fee market matures, execution quality during fee-spike events increasingly determines captured value. Validator concentration remains a structural concern; the economics increasingly favor well-capitalized infrastructure, which creates both opportunity and systemic fragility worth monitoring.
- Bitcoin yield primitives. This is the most nascent but potentially most significant development. Native Bitcoin yield infrastructure beyond basis trades and lending is still early. Our existing PAXG integration positions us well as tokenized hard-asset yield generation matures. We’re watching Bitcoin-native smart-contract layer developments closely.
Partnership prioritization: We’re focused on counterparties who bring execution infrastructure depth, not just AUM. In a compression environment where the easy basis-trade yield has already collapsed, execution quality — not yield promises — determines whether alpha survives. That’s the filter we apply to every partnership conversation.