Introducing Keanu 3.0

Starting today, every campaign running on Vibe gets a better bidding model. Keanu 3.0 is our new unified multi-head prediction model for CTV performance bidding, and it is now live across all campaigns with no action required on your end.

In live A/B tests, Keanu 3.0 improved performance across all five supported campaign objectives: Leads, Traffic, Retargeting, Sales, and Installs. Average gains across all five goals were +12.4% at constant budget and +19.8% at constant ROI.

If you are running a Retargeting campaign today, you are already getting +20% more outcomes from the same budget. If you are running Sales, your cost per sale just got lower without you changing a single setting. The model improvement is live, applied automatically, and reflected in your results starting now.

Availability

Keanu 3.0 rolled out automatically across all active Vibe campaigns on April 29, 2026. No campaign changes, no migration, no new settings required. Your existing campaigns are already running on the new model.

A unified model for all five campaign objectives

Previous generations of the Vibe bidding stack used separate prediction paths for each campaign objective. Each had its own model, trained largely in isolation from the others.

Keanu 3.0 replaces that with a single unified predictor with multiple output heads, one per campaign goal. Auction context, advertiser context, retargeting state, user history, and campaign goal all feed into the same model. Each head specializes per objective, but training is shared across all five, which means signal learned in one objective improves prediction across the rest.

The practical consequence is compounding. When the model gets better at predicting conversion intent from Retargeting campaigns, that learning propagates to Sales. When Traffic campaigns surface engagement signals, Leads campaigns benefit from the same representation. Fragmented models can only improve in isolation. A unified model improves everywhere at once.
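The shared-backbone, multi-head structure described above can be sketched in a few lines. This is an illustrative toy, not Vibe's actual architecture: the feature dimensions, the linear layers, and the head weights are all assumptions. The point it demonstrates is structural: one backbone feeds all five objective heads, so any training signal that improves the backbone improves every objective at once.

```python
# Toy sketch of a shared-backbone, multi-head predictor.
# All dimensions and weights are illustrative assumptions.

OBJECTIVES = ["leads", "traffic", "retargeting", "sales", "installs"]

class MultiHeadPredictor:
    def __init__(self, n_features: int, hidden: int = 4):
        # One shared backbone: training examples from every objective
        # update these weights, so learned signal is pooled.
        self.backbone = [[0.01 * (i + j) for j in range(n_features)]
                         for i in range(hidden)]
        # One lightweight head per campaign objective.
        self.heads = {obj: [0.1] * hidden for obj in OBJECTIVES}

    def _shared(self, x):
        # Shared representation consumed by all five heads.
        return [sum(w * v for w, v in zip(row, x)) for row in self.backbone]

    def predict(self, x, objective: str) -> float:
        h = self._shared(x)  # same representation regardless of objective
        return sum(w * v for w, v in zip(self.heads[objective], h))

model = MultiHeadPredictor(n_features=3)
score = model.predict([1.0, 0.5, 2.0], objective="retargeting")
```

Because `_shared` is computed identically for every objective, improving it for one head cannot help but improve the inputs to the other four, which is the compounding effect the unified design is after.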

Installs joins the Vibe bidding system for the first time with this release, built on the same unified architecture from day one. Campaigns running Installs optimization now benefit from the same foundation as Leads, Traffic, Retargeting, and Sales.

Ad pod inference

Part of the gain comes from improved handling of ad pods. CTV inventory is delivered in pods of two to four placements per break. Position within a pod matters: the first and last slots see different completion and recall outcomes than middle slots, and bid value should reflect that distinction.

Publishers do not always send pod structure in a clean or standardized way. Inventory frequently arrives with incomplete or ambiguous pod metadata. The previous model treated this as missing information and priced those impressions without position context. Keanu 3.0 infers pod context from surrounding signals even when publishers have not sent explicit pod data. That inference improves bid quality across a meaningful share of available inventory, particularly on supply where pod metadata is sparse or inconsistent.
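A minimal heuristic version of that inference can be sketched as follows. The input signals (offset into the ad break, break length, slot length) and the one-second tolerance are illustrative assumptions; Vibe's actual model infers pod context from a richer set of surrounding signals than this.

```python
# Hedged sketch: classify a slot's pod position when the publisher has
# not sent explicit pod metadata. Signals and thresholds are assumptions.

def infer_pod_position(break_offset_s: float,
                       break_length_s: float,
                       slot_length_s: float) -> str:
    """Classify a slot as 'first', 'last', or 'middle' within its pod."""
    if break_offset_s < slot_length_s:
        # Slot starts at the top of the break.
        return "first"
    if break_offset_s + slot_length_s >= break_length_s - 1.0:
        # Slot runs to the end of the break (1 s tolerance).
        return "last"
    return "middle"

position = infer_pod_position(60.0, 90.0, 30.0)  # final 30 s slot of a 90 s break
```

Even a crude inference like this lets the bidder price first- and last-slot inventory differently on supply where no pod metadata arrives at all.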

This is a CTV-specific capability. Search and social inventory does not have this structure. Getting pod inference right requires a model that understands the delivery environment, not just the audience. Keanu 3.0 does both.

Results

We evaluated Keanu 3.0 through live advertiser-level A/B tests. For each advertiser, the targeted audience was split 50/50 between Keanu 3.0 and the previous model. Each panel received equal budget. We ran multiple iterations to confirm stability.

All five objectives improved. Retargeting led at +20% at constant budget and +26% at constant ROI. Sales improved +17% and +34%. Installs, entering the benchmark for the first time, came in at +14% and +21%. Leads and Traffic improved +4%/+8% and +7%/+10% respectively. No objective regressed.

"A better model only matters if customers feel it in performance. We built our evaluation framework around that idea: measure the real lift on each advertiser’s metric of choice, and measure it from both a budget and ROI perspective. Keanu 3.0 is not just better on paper. It brings more performance to our customer base where it counts — and it is only the beginning of what this new generation of bidding can unlock."

Arnaud Blanchard, VP Product

The breadth matters as much as the magnitude. A model change that improves one objective at the cost of another is a trade-off, not a win. Keanu 3.0 improved all five. That is what a shared-backbone architecture produces when it works: lifting the model lifts everything it touches.

How to read the two metrics

The constant budget view measures the direct efficiency gain at unchanged spend. If your campaign budget stays the same, this is how much more you get out of it. A +20% constant budget result for Retargeting means a campaign that was converting 100 households per day is now converting 120, at the same cost.

The constant ROI view estimates how much additional scale the model unlocks while maintaining the same cost per outcome. We calculate this using Vibe's budget-to-outcome elasticity system, which models the relationship between spend and outcomes per campaign type. This view answers a different question: if the model is more efficient, how much more volume can you run before hitting the cost per result you were already accepting?
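One way to see why the constant-ROI figure exceeds the constant-budget figure is to assume a power-law elasticity, outcomes = a · budget^eps with eps < 1 (diminishing returns). That functional form and the eps value below are assumptions for illustration; Vibe's elasticity system is not specified here. Under that assumption, cost per outcome is budget^(1-eps)/a, so an efficiency gain g lets budget grow by (1+g)^(1/(1-eps)) before cost per outcome returns to its old level, and outcomes grow by the same factor.

```python
# Sketch of a constant-ROI projection under an ASSUMED power-law
# elasticity: outcomes = a * budget**eps, 0 < eps < 1.

def constant_roi_gain(constant_budget_gain: float, eps: float) -> float:
    """Outcome gain achievable at unchanged cost per outcome.

    If efficiency (a) is multiplied by (1 + g), budget can scale until
    cost per outcome budget**(1 - eps) / a returns to its old level,
    and total outcomes grow by (1 + g) ** (1 / (1 - eps)).
    """
    return (1.0 + constant_budget_gain) ** (1.0 / (1.0 - eps)) - 1.0

# With an illustrative eps = 0.35, a +12.4% constant-budget gain projects
# to roughly +19.7% at constant ROI, the same ballpark as the reported
# averages (+12.4% / +19.8%).
projected = constant_roi_gain(0.124, 0.35)
```

Because eps < 1, the constant-ROI multiplier always exceeds the constant-budget one: extra efficiency buys both more outcomes per dollar and headroom to spend more dollars.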

Both views improved across every tested objective. That means your current campaigns get more efficient today, and there is more room to scale them tomorrow at the same cost per result.

What comes next

Keanu 3.0 is the foundation for a broader shift in how Vibe's prediction and delivery stack works. The unified architecture makes it substantially easier to add new objectives, integrate more tightly with the delivery layer, and incorporate new signal types as they become available.

Specifically: we are working on deeper integration between the prediction model and how impressions are paced and bought, continued improvement to ad pod decision-making as publisher signal quality varies across inventory, and better models trained on a growing dataset. Each will appear in future benchmark results when it ships.

We will share updates on Keanu's performance regularly. As new versions ship and the model trains on more data, we will publish the results here the same way we are publishing them today: with the methodology, the numbers, and an honest account of what changed.

Methodology

A/B tests were run at the advertiser level with a 50/50 household split. Each panel received equal budget over the test period. Results were validated across multiple test iterations for stability. Constant ROI figures use Vibe's budget-to-outcome elasticity model and represent modeled projections, not directly observed outcomes.
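A deterministic household split of the kind described above is commonly implemented by hashing a stable identifier. This is a generic sketch of that pattern, not Vibe's implementation: the salting scheme, field names, and panel labels are assumptions.

```python
# Sketch of a stable 50/50 household split for an advertiser-level A/B
# test. Each household lands in the same panel on every auction, and the
# split is balanced in expectation across households.

import hashlib

def assign_panel(household_id: str, test_salt: str) -> str:
    """Deterministically assign a household to one of two panels."""
    digest = hashlib.sha256(f"{test_salt}:{household_id}".encode()).hexdigest()
    return "keanu_3" if int(digest, 16) % 2 == 0 else "control"
```

Salting per test keeps assignments independent across test iterations, which is what lets results be re-validated on fresh splits for stability.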

Apr 30, 2026
