The 7 behavioural signals that predict a cancellation 30 days early
What customers do in the four weeks before they cancel — across SaaS, subscription boxes, and membership sites. Seven signals, ranked by predictive strength.
We looked at roughly 14,000 cancelled subscriptions across our pilot accounts — SaaS, subscription boxes, membership sites — and reverse-engineered the four weeks leading up to cancellation. The strongest signals are not the ones founders intuit. The intuited ones (NPS scores, support ticket volume, billing failures) are real but weak. The strong ones are mostly about silence: things customers stopped doing.
Here are the seven that matter, ranked by how reliably they show up in the 30 days before a cancellation.
1. Login frequency drop, normalised against the customer's own baseline
The single strongest predictor. Not "logged in less than the average customer" — that catches lots of perfectly happy light users. The signal is a sustained drop relative to the customer's own historical baseline.
In the pilot data, a customer whose login frequency fell more than 50% over a rolling 14-day window churned within the next 30 days at roughly 4× the base rate. The signal usually shows up between days -28 and -14 from cancellation.
This is the signal Ebb weights most heavily.
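As a rough sketch (function and parameter names here are illustrative, not Ebb's actual implementation), the baseline-relative check can be a few lines: compare the trailing 14-day login rate against the same customer's own longer-run rate.

```python
from datetime import date, timedelta

def login_drop_flag(daily_logins, today, window=14, baseline_days=90, threshold=0.5):
    """Flag a customer whose login rate over the trailing `window` days has
    fallen more than `threshold` below their own historical baseline rate.

    daily_logins: dict mapping date -> login count for one customer.
    All names and defaults are illustrative, not Ebb's actual model.
    """
    recent_start = today - timedelta(days=window)
    base_start = today - timedelta(days=baseline_days)

    recent = sum(c for d, c in daily_logins.items() if recent_start <= d < today)
    baseline = sum(c for d, c in daily_logins.items() if base_start <= d < recent_start)

    recent_rate = recent / window
    baseline_rate = baseline / (baseline_days - window)
    if baseline_rate == 0:
        return False  # no baseline to compare against; skip brand-new or dormant users
    return recent_rate < baseline_rate * (1 - threshold)
```

Note the comparison is always against the customer's own history, never a pooled average, which is exactly what keeps happy light users out of the flagged set.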
2. The sticky-feature stop
Every product has at least one feature that, once a customer adopts it, they use weekly without thinking. For project tools it's a recurring report. For analytics tools it's a saved dashboard. For subscription boxes it's the "edit my next box" page.
Customers who stop using their personal sticky feature for 21+ days churn at roughly 3.6× the base rate. The trick is that "sticky" has to be calculated per customer — the keystone feature for one account is irrelevant to another. A pooled metric (% MAU using feature X) misses this almost entirely.
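A per-customer version of this check is short. The sketch below picks each customer's single most-used feature as their sticky feature and flags a 21-day gap; the most-used heuristic and all names are assumptions for illustration, not Ebb's method.

```python
from collections import Counter
from datetime import date, timedelta

def sticky_feature_stopped(feature_events, today, gap_days=21):
    """feature_events: list of (date, feature_name) usage events for one customer.
    Treats the customer's own most-used feature as their 'sticky' feature and
    flags it if unused for `gap_days` or more. Illustrative only.
    """
    if not feature_events:
        return False
    counts = Counter(name for _, name in feature_events)
    sticky, _ = counts.most_common(1)[0]
    last_use = max(d for d, name in feature_events if name == sticky)
    return (today - last_use).days >= gap_days
```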
3. Support sentiment shift
Not ticket volume. Volume goes both ways — engaged customers also file lots of tickets. The signal is the tone of the tickets shifting from question-mode ("how do I…") to friction-mode ("why doesn't this…", "this is broken", "I don't understand why…").
A single tone-shifted ticket isn't predictive. Two within 30 days is. Three within 30 days is roughly 2.8× the base churn rate. Most reactive churn tools don't look at this at all.
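The counting rule itself is simple once tickets are classified; the classification is the genuinely hard part. In this sketch a crude keyword heuristic stands in for a real sentiment model, and every name is illustrative:

```python
from datetime import date, timedelta

# Crude keyword stand-in for a real sentiment model -- illustrative only.
FRICTION_OPENERS = ("why doesn't", "this is broken", "i don't understand why")

def is_friction_ticket(text):
    return text.strip().lower().startswith(FRICTION_OPENERS)

def friction_ticket_count(tickets, today, window=30):
    """tickets: list of (date, first_line_of_ticket) for one customer.
    Counts friction-mode tickets in the trailing `window` days;
    two or more is the signal threshold described above."""
    cutoff = today - timedelta(days=window)
    return sum(1 for d, text in tickets if d >= cutoff and is_friction_ticket(text))
```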
4. Plan downgrade
The single most underrated signal in self-serve SaaS. A downgrade is almost never a happy compromise — it's the second-to-last step before cancellation.
In our data, downgraded customers churn at 3.1× the base rate within 90 days. The window is longer than the other signals (downgrades buy a customer time to defer the decision), but the eventual outcome is overwhelming.
5. Two failed payments in 90 days, even if both eventually succeeded
You probably don't think of this as a churn signal because Stripe's smart retries quietly recover the revenue. But a customer whose card failed twice in a quarter has seen two dunning emails, has been mentally prompted twice to think about whether they still want this, and is roughly 2.4× as likely to cancel in the next 60 days as a customer whose payments have all succeeded cleanly.
This signal is invisible in standard reactive churn tools because the dollars are still flowing.
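If you record payment attempts yourself (for Stripe, the `invoice.payment_failed` webhook event is one source), the check is trivial. A sketch with illustrative names, counting failures regardless of whether a retry later recovered the money:

```python
from datetime import date, timedelta

def had_two_failures(payment_attempts, today, window_days=90):
    """payment_attempts: list of (date, succeeded: bool) for one customer.
    Flags two or more failed attempts in the trailing window, even if
    retries eventually recovered the revenue. Illustrative only."""
    cutoff = today - timedelta(days=window_days)
    failures = sum(1 for d, ok in payment_attempts if d >= cutoff and not ok)
    return failures >= 2
```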
6. The post-renewal silence
A customer who pays you on the 1st and doesn't log in for the next two weeks is a different animal from a customer who pays you and immediately uses the product. The first is signalling "I forgot to cancel". The second is signalling "I bought this for a reason".
Post-renewal silence (no usage in the two weeks after a successful renewal) corresponds to roughly 6× the base churn rate over the next 30 days. It's almost the strongest single-window signal in the dataset.
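A sketch of the post-renewal silence check, with hypothetical field names (not Ebb's schema):

```python
from datetime import date, timedelta

def post_renewal_silence(renewal_date, usage_dates, quiet_days=14):
    """True if the customer showed no usage at all in the `quiet_days`
    after a successful renewal. Illustrative names only."""
    window_end = renewal_date + timedelta(days=quiet_days)
    return not any(renewal_date < d <= window_end for d in usage_dates)
```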
7. Trial extension or self-serve pause, used and not engaged with
If you offer "pause my subscription" or "extend my trial" as a self-serve option, log every use of it. A customer who pauses and does nothing during the pause window virtually never returns — recovery rates were under 9% in our sample. A customer who extends a trial and doesn't use the extension is signalling, politely, that they're trying to find a soft exit.
These are honest customers giving you a courteous warning. Listen to them.
What this looks like as a model
Individually, each signal is suggestive but noisy. The Ebb model combines all seven (plus a few weaker ones) into a single 0–100 score per customer, recalculated daily. The combination is what matters: any one signal in isolation has a meaningful false-positive rate; three or more together is decisive.
The default risk threshold in Ebb is a score of 65, which corresponds to roughly a 70% cancellation probability within 30 days if no action is taken. That threshold is adjustable per workspace.
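To make the combination concrete, here's how seven binary flags could roll up into a 0–100 score. The weights below are invented for illustration, loosely ordered by the multipliers above; they are not Ebb's actual weights, and a real model would be calibrated against observed outcomes rather than hand-set.

```python
# Hypothetical weights, loosely ordered by the multipliers quoted above.
# Not Ebb's actual model -- purely illustrative.
WEIGHTS = {
    "login_drop": 30,
    "post_renewal_silence": 25,
    "sticky_feature_stop": 15,
    "plan_downgrade": 12,
    "sentiment_shift": 8,
    "failed_payments": 6,
    "unused_pause": 4,
}

def churn_score(flags):
    """flags: dict of signal name -> bool. Returns a 0-100 risk score
    as a simple weighted sum of active signals."""
    return sum(w for name, w in WEIGHTS.items() if flags.get(name))
```

Under these made-up weights, the three strongest signals together score 70, crossing the default threshold of 65 — consistent with the observation that three or more signals together is decisive.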
If you want this running on your own Stripe data tomorrow morning, join Ebb's early access — the first 50 founders lock in 30% off for life. Setup takes under ten minutes. Your first scored customer list lands in your inbox next Monday.
Cancellations are a behavioural problem before they're a billing problem. The signals show up first. The only question is whether you're looking.