Signal Optimization: Why pLTV is a Control System Problem (Not Just a Prediction Problem)

By Christian Hansen

Performance marketing is currently facing a crisis of confidence. We see growth teams shifting budgets back into "brand," top-of-funnel experiments, or one-off stunts because their primary performance channels feel increasingly unreliable.

The common diagnosis is that "the algorithm has changed." But the reality is a signal misalignment. Most teams are feeding world-class ad platforms low-resolution targets.

At Churney, we view this as a Signal Optimization problem. We aren't trying to outsmart the platforms; we are helping them work better by giving them a high-definition target to hit.

1. The Engineering Reality: Designing a Training Signal

A common mistake in internal builds is treating Predictive Lifetime Value (pLTV) as a standard machine learning problem. In a lab, you optimize for accuracy. In production, you are designing a continually adapting control system.

Accuracy ≠ Performance

The pLTV model is not the end goal; it is a training signal for a black-box model. This changes the objective entirely:

  • Metrics that matter: Offline metrics (like a low RMSE) do not guarantee online ROAS. What matters is the stability of value ordering under auction pressure and how the signal interacts with platform learning dynamics.
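To make the accuracy-versus-performance gap concrete, here is a toy sketch with synthetic numbers (not drawn from any real campaign): the model that wins on RMSE can still break the value ordering the auction actually acts on.

```python
import math

def rmse(y_true, y_pred):
    # standard offline accuracy metric
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def ordering(values):
    # user indices sorted by value, highest first: what the auction acts on
    return sorted(range(len(values)), key=lambda i: -values[i])

true_ltv = [10.0, 20.0, 1000.0]  # three users, one "whale"
model_a  = [12.0, 18.0, 600.0]   # badly underestimates the whale
model_b  = [20.0, 10.0, 1000.0]  # nails the whale, swaps the other two

print(rmse(true_ltv, model_a) > rmse(true_ltv, model_b))  # True: B "wins" offline
print(ordering(model_a) == ordering(true_ltv))            # True: A ranks correctly
print(ordering(model_b) == ordering(true_ltv))            # False: B mis-ranks users
```

Model B would look better in an offline evaluation report, yet in a live auction it is Model A's stable ordering that preserves ROAS.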

Signal Engineering: Variance vs. Predictability

  • Ad platforms often claim they can process raw data "as-is," but sending extreme value ranges (say, $1 to $50,000) creates a serious error-amplification problem. The issue isn't the $50,000 value itself; it's that even a small relative error on a "whale" is a huge absolute error, and that single error can dominate the loss and destabilize the algorithm's bidding.
  • Managing a model that predicts future value requires balancing the span of the signal with the reliability of the output. We often transform raw dollars into logarithmic scales or rank-based scores. This ensures the platform can still preferentially bid for high-value users without the model’s inevitable margins of error causing the entire bidding strategy to become erratic or over-reactive to outliers.
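As an illustration of those transforms (a sketch, not our production logic; the bucket count and floor are arbitrary choices), raw dollar predictions can be compressed to a log scale or mapped to bounded rank scores:

```python
import math

def log_signal(value_usd, floor=1.0):
    # compress $1 to $50,000 into roughly 0 to 10.8, so one whale
    # can no longer dominate the platform's loss
    return math.log(max(value_usd, floor))

def rank_signal(predictions, buckets=10):
    # map each prediction to a decile-style score (1 = lowest, 10 = highest);
    # ordering is preserved while variance stays bounded
    n = len(predictions)
    ordered = sorted(predictions)
    def bucket(v):
        below_or_equal = sum(1 for o in ordered if o <= v)
        return max(1, math.ceil(buckets * below_or_equal / n))
    return [bucket(v) for v in predictions]

preds = [3.0, 40.0, 120.0, 900.0, 48000.0]
print([round(log_signal(v), 2) for v in preds])  # [1.1, 3.69, 4.79, 6.8, 10.78]
print(rank_signal(preds))                        # [2, 4, 6, 8, 10]
```

Either way, the $48,000 whale still lands at the top of the signal range, but a mis-prediction about it can no longer swing the bidding strategy by four orders of magnitude.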

The Non-Stationary Domain

Once a pLTV signal is deployed, the environment immediately changes. User behavior shifts as your product, pricing, and onboarding evolve, while platform algorithms adapt to the very signal you are sending.

"You aren’t maintaining a static predictor; you are operating a system that must be retrained, recalibrated, and monitored continuously," says Brian Brost, CTO at Churney. "Feature and target distributions drift, and the 'meaning' of pLTV drifts as your acquisition mix changes. This is a primary engineering responsibility, not a 'set and forget' task."
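One standard way to operationalize that continuous monitoring is to compare the live score distribution against a training-time reference. This is a minimal sketch; the Population Stability Index and its ~0.25 retrain threshold are industry rules of thumb, not a Churney-specific method:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (training-time
    pLTV scores) and a live sample; values above ~0.25 are a common
    rule-of-thumb trigger to retrain or recalibrate."""
    lo, hi = min(expected), max(expected)
    span = (hi - lo) or 1.0
    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / span * bins), 0), bins - 1)
            counts[i] += 1
        # smooth so empty bins don't divide by zero
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [i / 100 for i in range(100)]          # scores at training time
drifted   = [0.5 + i / 200 for i in range(100)]    # acquisition mix shifted up

print(psi(reference, reference))  # ~0.0: no drift
print(psi(reference, drifted))    # well above 0.25: time to intervene
```

The same check applies to features, targets, and the model's own outputs; a signal that drifts silently mis-ranks spend long before ROAS dashboards show it.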

2. Bridging the "Information Asymmetry"

Ad platforms operate on an asymmetry of information. They know which users are "active spenders" across the web—making those users highly competitive and expensive—but they don't know who is specifically valuable to your unique product. To bridge this gap and help platforms bid effectively on intent rather than just general profile, we use two core signal engineering principles.

The Pessimism Principle (Upper Confidence Bounds)

Ad platforms are designed to allow value updates that increase a prediction, but they rarely allow you to decrease a value once it has been sent.

  • If you start with an over-optimistic prediction, you have already over-bid.
  • Sophisticated Signal Optimization uses Upper Confidence Bounds (UCB)—starting with a conservative estimate on Day 1 and "upgrading" the value as you see behavioral evidence (or enriched data like company size, domain authority, or ICP match).
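A minimal sketch of the pessimism principle follows; every constant (prior mean, prior weight, pessimism factor) is illustrative, and a real system would tune them against each platform's update rules:

```python
import math

def reported_value(observed_events, prior_mean=5.0, prior_weight=3.0, pessimism=1.5):
    """Conservative value to report today: a mean shrunk toward a cautious
    prior, minus an uncertainty penalty that decays as behavioral
    evidence accumulates."""
    n = len(observed_events)
    if n == 0:
        return 0.0  # never over-bid before any evidence exists
    mean = (prior_mean * prior_weight + sum(observed_events)) / (prior_weight + n)
    penalty = pessimism * mean / math.sqrt(prior_weight + n)
    return max(0.0, mean - penalty)

day_1 = reported_value([8.0])             # one purchase event observed
day_7 = reported_value([8.0, 9.0, 10.0])  # more evidence has accumulated
print(day_1 < day_7)  # True: the reported value rises as evidence arrives
```

Because platforms rarely let you revise a value downward, a caller would additionally clamp with something like `send = max(last_sent, reported_value(...))` so the reported value only ever ratchets up.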

The Up-Funnel Risk Trade-off

There is a constant tension between Certainty and Delay.

  • Down-funnel: Bidding on actual subscribers is safe but slow (median time to subscribe can be 30+ days).
  • Up-funnel: Bidding on signups is fast but risky. You may see signs of activity without the willingness or ability to pay.

Signal engineering is the art of "punishing" predictions for up-funnel users where high asymmetry exists (e.g., users from regions or segments that look active but never convert to revenue) to keep the platform’s optimization on track.
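One simple way to implement that "punishment" is to discount up-funnel predictions by each segment's historical signup-to-revenue conversion; the segment names and rates below are entirely hypothetical:

```python
# Hypothetical historical signup-to-revenue conversion rates per segment
SEGMENT_CONVERSION = {
    "organic_search": 0.12,        # signups that reliably become revenue
    "incentivized_install": 0.01,  # looks active, almost never pays
}

def punished_pltv(raw_pltv, segment, floor_conversion=0.05):
    # scale the up-funnel prediction by how often this segment's
    # activity has historically turned into actual revenue
    conversion = SEGMENT_CONVERSION.get(segment, floor_conversion)
    return raw_pltv * conversion

print(round(punished_pltv(100.0, "organic_search"), 2))        # 12.0
print(round(punished_pltv(100.0, "incentivized_install"), 2))  # 1.0
```

Two signups with identical raw predictions now send very different values to the platform, which keeps its optimization pointed at revenue rather than at activity.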

3. The Invisible Cost of Feedback Loops

The most dangerous failures in pLTV are slow and expensive to detect because of endogenous feedback loops.

  • Once the signal is live, the platform changes who it sends you, your incoming data distribution shifts, and your model starts training on data it caused.
  • Learning cycles are long: Often 7 to 14 days minimum, and full signal evaluation takes 30 to 60 days of real budget.
  • The Iteration Penalty: By the time you realize a model is mis-specified, weeks of meaningful spend have already been mis-ranked. Recovery itself takes additional weeks.

In our experience, 6 to 12 months is a realistic timeline for an internal system to become reliably net-positive—if the team manages to map all the failure modes.

4. Closing the "Signal Gap"

We describe Churney as the Signal Optimization Layer because we close the gap between what the platform sees and what the business cares about.

Platform Alignment

We are not trying to "beat" the algorithms of Meta or Google; we are helping them work better. Each platform has non-standardized requirements: different event schemas (CAPI), timing constraints, and update rules.

Most of this knowledge isn't in a manual; it’s accumulated through thousands of live experiments. By using a production-hardened system, you gain a "North Star" benchmark. Even for teams that eventually build in-house, having a sophisticated benchmark allows you to see exactly how much money you might be leaving on the table during the "learning" phase of a DIY system.

The platforms are ready to find your best customers. We just give them the high-definition signal they need to get the job done.

Optimize your customer acquisition for maximum Lifetime Value

Your data warehouse has incredible value. Our causal AI helps unlock it.