You Changed Five Things at Once (And Now You Don't Know What Worked)
Month 8 of your newsletter. Your subscriber count grew 50%.
Month 7: 400 subscribers
Month 8: 600 subscribers
200 new subscribers in one month.
Your best growth month ever.
You’re thrilled. And you want to understand why, so you can replicate it.
You look at what changed:
In Month 8, you:
1. Increased publishing frequency (2x/week → 3x/week)
2. Started adding personal stories to every essay (previously just research summaries)
3. Redesigned email layout (cleaner, more visual)
4. Posted essay snippets on Twitter for the first time (new distribution channel)
5. Shifted topic focus slightly (less academic theory, more practical application)
Which change caused the growth?
You think back. Publishing 3x/week feels like the key factor. More content = more growth, right?
You commit: Going forward, 3x/week is the standard.
Month 9: You maintain 3x/week frequency.
Growth: 1% (600 → 606 subscribers).
Wait. What happened?
Monday: Your second attempt fails because success makes you rigid.
Tuesday: You remember the one success, forget the fifteen failures before it.
Wednesday: You replicate tactics without replicating conditions.
Thursday: When you change multiple things simultaneously, you can’t isolate causation.
And when you credit the wrong variable, you optimize the wrong thing.
The Attribution Problem
Here’s what actually drove Month 8 growth:
You changed five things. But you can only track one metric: total growth.
The variables:
1. Frequency (2x → 3x/week)
2. Personal stories (added)
3. Layout (redesigned)
4. Twitter (started posting)
5. Topic shift (less academic)
The outcome: 50% growth
The question: Which variable caused it?
The answer: You have no idea.
Because when multiple variables change simultaneously, you can’t determine causation.
Standard practice in research: A/B testing. Change ONE variable at a time. Hold the others constant. Isolate the effect.
You didn’t do that.
You changed five variables at once. Now you’re guessing which one mattered.
And humans are terrible at guessing causation.
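To see why guessing fails, run the numbers. Here is a minimal sketch in Python, with invented figures, showing that very different attribution splits across the five changes are all consistent with the same observed total; the aggregate alone cannot tell them apart:

```python
# Toy illustration with invented numbers: three different stories about which
# change drove Month 8 growth, all consistent with the same observed outcome.
splits = {
    "frequency did it": {"frequency": 160, "stories": 20,  "layout": 5,  "twitter": 10, "topic": 5},
    "twitter did it":   {"frequency": 10,  "stories": 60,  "layout": 10, "twitter": 80, "topic": 40},
    "stories did it":   {"frequency": 20,  "stories": 120, "layout": 10, "twitter": 30, "topic": 20},
}

for story, split in splits.items():
    # Every story sums to the same 200 new subscribers you actually observed.
    print(story, "->", sum(split.values()))
```

Every story prints the same 200. The only way to separate them is to change one variable while holding the rest steady.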
What Probably Happened
Here’s the likely breakdown of Month 8 growth:
Twitter distribution: 40% of growth (80 new subscribers)
New channel reached new audience
One tweet got shared by someone with 10K followers
That single share drove 60 subscribers
Personal stories: 30% of growth (60 subscribers)
Essays became more relatable
Higher share rate (readers sent to friends)
More word-of-mouth
Topic shift: 20% of growth (40 subscribers)
Practical > academic attracted broader audience
Better fit for audience needs
Layout redesign: 5% of growth (10 subscribers)
Cleaner emails = slightly better retention
Minimal impact on acquisition
Publishing frequency: 5% of growth (10 subscribers)
More content = more touchpoints
But also created some unsubscribes (too many emails)
Net effect: small
Total: 200 subscribers
But you credited 100% to frequency.
Because:
Frequency was the most visible change (you felt it every week)
Frequency was the most effortful (writing 3x instead of 2x was hard)
Frequency felt controllable (you can decide to publish more)
Twitter distribution was actually the key driver. But you didn’t notice because:
It was new (you weren’t tracking Twitter metrics closely yet)
The influential share felt like luck (you didn’t think you could replicate it)
Less visible (posting a tweet takes 2 minutes, doesn’t feel significant)
You optimized the wrong variable.
Attribution is detective work, and it would be a lot easier if success left behind a spreadsheet instead of scattered clues.
Why You Pick the Wrong Variable
Research by Daniel Kahneman and Amos Tversky on causal attribution: When multiple factors correlate with an outcome, we attribute causation to the most salient factor.
Salient = what you notice.
Publishing 3x/week was salient:
Required significant effort (hard work = must be important)
Changed your routine (noticeable behavioral shift)
Created stress (managing 3 essays/week instead of 2)
Twitter posting was less salient:
Low effort (2-minute task)
Felt like “bonus” activity (not core work)
Results seemed random (some tweets worked, most didn’t)
Your brain defaults: High effort = high impact.
Actually: Low effort can have high impact. High effort can have low impact. Effort doesn’t correlate with results.
But when you’re deciding what caused success, your brain picks the thing that felt most significant.
Research by Baruch Fischhoff on hindsight bias: After knowing the outcome, you revise your memory of what you expected.
Before Month 8: You thought “Let me try some different things and see what happens.”
After Month 8 success: You remember thinking “Publishing 3x/week will drive growth.”
You rewrite your memory to match the outcome.
And the rewritten version becomes: “I knew frequency was the key.”
The Confounding Variables
Even if you correctly identified Twitter distribution as the driver, there’s a deeper problem:
What about variables you didn’t change intentionally?
Month 8 also had:
Your best-performing essay ever (6K views, viral on LinkedIn)
A prominent newsletter mentioned yours as “must-read”
Your topic happened to trend on Twitter that month
The month fell right after the New Year (people were subscribing to new newsletters)
A competitor shut down their newsletter (their readers needed alternative)
You didn’t plan any of these. They just happened.
But they probably drove 30-40% of growth.
So the actual breakdown might be:
Twitter distribution: 25%
Personal stories: 20%
Topic shift: 15%
Layout: 3%
Frequency: 2%
Viral essay (unplanned): 15%
External mention (unplanned): 10%
Trending topic (unplanned): 5%
January timing (unplanned): 3%
Competitor shutdown (unplanned): 2%
Planned changes: 65% of growth
Unplanned factors: 35% of growth
But you attributed 100% to the planned change you noticed most (frequency).
What You Do Instead
Month 9: You maintain 3x/week frequency (the variable you credited).
Growth stops.
Now you conclude: “3x/week isn’t enough. Maybe I need 4x/week?”
Wrong diagnosis.
Growth didn’t come from frequency. It came from Twitter distribution + personal stories + timing + the viral essay.
You’re doubling down on the wrong variable.
Better approach after Month 9:
Step 1: Map ALL changes
“This month I changed: frequency, stories, layout, Twitter, topic. Also happened: viral essay, external mention, January timing, trending topic, competitor shutdown.”
Don’t claim to know which mattered. Just document what changed.
Step 2: Form hypotheses
“Frequency might have helped (more touchpoints). Twitter might have helped (new audience). Personal stories might have helped (more relatable). The viral essay definitely helped (one-time spike). The external mention helped (new source).”
Rank by plausibility, not certainty.
Step 3: Test one variable
Month 10: Keep Twitter, stories, topic shift. Change frequency back to 2x/week.
If growth continues: Frequency wasn’t the key driver.
If growth stops: Frequency might have mattered (or something else changed).
Still not perfect. But better than blind replication.
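If it helps to make the process concrete, here is a minimal sketch in Python. The names (MonthLog, month10_plan) are hypothetical, not a real tool; the point is the structure: one record of everything that changed, planned or not, and one test that varies a single variable.

```python
from dataclasses import dataclass, field

# Hypothetical change log for one month: record every change and event
# before guessing at causes, then plan a test that varies only one variable.
@dataclass
class MonthLog:
    month: str
    planned_changes: list = field(default_factory=list)
    unplanned_events: list = field(default_factory=list)
    start_subscribers: int = 0
    end_subscribers: int = 0

    def growth(self) -> int:
        return self.end_subscribers - self.start_subscribers

month8 = MonthLog(
    month="Month 8",
    planned_changes=["3x/week frequency", "personal stories", "new layout",
                     "Twitter snippets", "practical topics"],
    unplanned_events=["viral essay", "newsletter mention", "trending topic",
                      "January timing", "competitor shutdown"],
    start_subscribers=400,
    end_subscribers=600,
)

# Month 10 test plan: hold everything else constant, vary only frequency.
month10_plan = {
    "hold_constant": ["personal stories", "new layout", "Twitter snippets", "practical topics"],
    "vary": "frequency: 3x/week -> 2x/week",
}

print(month8.growth())        # 200
print(month10_plan["vary"])   # the single variable under test
```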
Step 4: Accept uncertainty
Most of the time, you won’t know what caused success.
Multiple factors interacted. Some you don’t even see.
Better to say: “I don’t know which variable drove this. Here’s what changed. I’ll test to learn more.”
Than to falsely attribute: “This one thing caused it. I’ll replicate it.”
What Changes Tomorrow
Tomorrow: Friday. Synthesis. You’ve learned why replication fails (rigidity, survivorship bias, missing conditions, wrong attribution). The shift: Stop trying to replicate specific successes. Build systems that create favorable conditions repeatedly.
But tonight, one question:
What success are you attributing to a single cause?
“My workshop sold out because of the email sequence.”
“My post went viral because of the title format.”
“My product launch succeeded because of early-bird pricing.”
Ask:
How many things changed when this succeeded?
Which changes did you plan?
Which changes were external/unplanned?
If multiple things changed: You don’t know which caused success.
Don’t pick one to credit. Document all changes. Test to isolate.
Don’t guess. Test.
That’s how you move from lucky wins to reliable systems.
REFERENCES
On causal attribution: Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
On hindsight bias: Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288-299.
On confounding variables: Pearl, J. (2009). Causality: Models, Reasoning, and Inference. Cambridge University Press.

