Buy Stocks, Not Statistics, at the Dip

CrowdStrike (CRWD)

On July 19, 2024, cyber-security company CrowdStrike released a defective sensor-configuration update. This triggered a global IT outage, crashing 8.5 million Windows devices with the infamous “Blue Screen of Death”. Planes were grounded, hospitals were forced to cancel surgeries, and cities lost emergency services.

The CrowdStrike stock price fell to a little over $200.

There were other $200 stocks to be had in August 2024, but CrowdStrike was a better buy than most, because CrowdStrike wasn’t a $200 stock; it was a $350 stock having a bad day.

Stocks vs Metrics

That’s the way we look at stocks and other systems we cannot control. We assume that, without our intervention, the price will return to “normal” over time. This isn’t always right, and there are colorful metaphors (“catching a falling knife”) for when we get it wrong, but it’s an intuition with a strong track record.

We have a different intuition for systems we can control. We assume that, without intervention, what is working today will stay working, and what is broken today will never improve. We treat complex systems like we treat leaky faucets.

So we intervene, and, being the conscientious, “data driven” professionals we are, we watch our metrics to confirm that our interventions are working.

Unit A Had a Bad Quarter

We replace the manager, keelhaul the assistant manager, shuffle the team, and shame the survivors … then we watch the metrics … and it works!

Unit B Had a Great Quarter

Unit B gets ballcaps, bonuses, and barbecue. The Unit B manager gets to speak at the next company meeting.

However, as it always seems to go, the Unit B team “gets complacent” and disappoints us next quarter.

Don’t meet your heroes, folks.

Why Can’t We Just Punish EVERYONE and Win EVERY Quarter?

That ought to rid us of this damned epidemic of complacency!

Let’s start before any good or bad quarters. This is the board we expect to see before we make any assumptions about which Units are good or bad.

[Figure: expectation]

There’s a lot of color on this board. What does it mean?

  • The pale areas (0 std dev) are where things stay normal.
  • The shaded green and red areas (1 std dev) are slightly good or slightly bad outcomes.
  • The dark red and green areas (2+ std dev) are disasters or home runs.

Remember that where you land reveals NOTHING. If you throw a dart blindfolded, this is what you can expect. No finesse, no tilt, no instinct. NOTHING but chance. If you’re even casually interested in probability, you will recognize this board as a standard Gaussian distribution.
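
If you want to check how big those bands really are, here’s a minimal sketch (plain Python, standard library only) that computes how much of a standard Gaussian falls in each band of the board:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Probability mass of each band, both sides of the board combined
pale   = phi(1) - phi(-1)        # within 1 std dev: things stay normal
shaded = 2 * (phi(2) - phi(1))   # between 1 and 2 std dev: slightly good/bad
dark   = 2 * (1 - phi(2))        # beyond 2 std dev: disasters and home runs

print(f"pale (under 1 std dev): {pale:.1%}")    # ~68.3%
print(f"shaded (1-2 std dev):   {shaded:.1%}")  # ~27.2%
print(f"dark (2+ std dev):      {dark:.1%}")    # ~4.6%
```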

If darts aren’t your game:

  • good - chance of rolling doubles on a pair of dice (1 in 6)
  • bad - same chance of rolling doubles (1 in 6)
  • home run - chance of rolling exactly 12 (1 in 36)
  • disaster - chance of rolling exactly 2 (snake eyes, 1 in 36)

We’ve all rolled “snake eyes” before. It’s rare-ish, but you see it most games, so it’s likely that terrible things will happen over time “for no reason”. At least, for no reason that isn’t present when terrible things don’t happen.
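
If you’d rather roll than calculate: doubles come up 1 time in 6, and snake eyes (or a 12) 1 time in 36, which is in the same neighborhood as the Gaussian 1 and 2 std dev tails (about 1 in 6.3 and 1 in 44, respectively). A quick simulation in plain Python bears the dice odds out:

```python
import random

random.seed(1)
trials = 1_000_000
doubles = snake_eyes = twelves = 0

for _ in range(trials):
    a, b = random.randint(1, 6), random.randint(1, 6)
    doubles    += a == b        # "good" / "bad"
    snake_eyes += a + b == 2    # "disaster"
    twelves    += a + b == 12   # "home run"

print(f"doubles:    {doubles / trials:.4f} (exact: {1 / 6:.4f})")
print(f"snake eyes: {snake_eyes / trials:.4f} (exact: {1 / 36:.4f})")
print(f"twelves:    {twelves / trials:.4f} (exact: {1 / 36:.4f})")
```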

Here’s where that second type of assumption (broken will stay broken) comes into play.

When something bad happens, we assume that something is wrong with the Unit or Department where it happened, even if we aren’t sure what that something is. We’re there to fix it, which is great for us personally, because this is the board after we intervene.

[Figure: expectation after intervention]

It’s still possible to get a terrible result, but it’s far less likely. You’d have to roll snake eyes on a pair of 27-sided dice. Post-intervention success is highly likely, but it’s not our intervention that made the difference. It’s the assumption that Unit A was broken. And we made that assumption based on something that would have happened 1 time out of 6 even without a problem.
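
Where does the 27 come from? The number fits one natural reading (my assumption; the arithmetic isn’t spelled out above): if we treat last quarter’s -1 std dev result as the new “normal”, then a fresh 2 std dev disaster requires an absolute result 3 std dev below the true mean. That has probability of about 1 in 740, or roughly snake eyes on a pair of 27-sided dice (1 in 729). A sketch under that re-anchoring assumption:

```python
import random
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Assumed model: after a 1-sigma bad quarter, "normal" is re-anchored
# at -1, so the next "2-sigma disaster" needs an absolute result of -3.
print(f"closed form: 1 in {1 / phi(-3):.0f}")  # ~1 in 741 (27^2 = 729)

# The same number by simulation, with no intervention at all
random.seed(1)
trials = 2_000_000
disasters = sum(random.gauss(0, 1) < -3 for _ in range(trials))
print(f"simulated:   1 in {trials / disasters:.0f}")
```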

But what if you knew enough to filter out the noise?

We’ve become more analytically savvy in recent years, and many of us recognize that sigma 1 events are likely to be false indicators. In practice, they are not considered “statistically significant”. What if we waited to intervene until a statistically significant, roughly 1:40, sigma 2 disaster occurred? This is the post-intervention board after such a disaster.

[Figure: expectation after disaster]

Home runs everywhere! Watch out for 177-sided dice, but you’re all but guaranteed a terrific outcome, even if your interventions do more harm than good.
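
The same re-anchoring reading (again, my assumption) accounts for the 177: a disaster measured against a -2 std dev anchor needs an absolute -4 std dev result, while a “home run” only needs the unit to climb back to its own long-run average:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

anchor = -2.0                     # last quarter's disaster becomes "normal"
p_disaster = phi(anchor - 2)      # 2 std dev below the new anchor: phi(-4)
p_home_run = 1 - phi(anchor + 2)  # 2 std dev above the new anchor: 1 - phi(0)

print(f"disaster: 1 in {1 / p_disaster:,.0f}")  # ~1 in 31,600 (177^2 = 31,329)
print(f"home run: {p_home_run:.0%}")            # 50%, with no intervention at all
```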

Complacency

We can use the same understanding to see how “complacency” all but inevitably follows success. If we treat complex systems like leaky faucets, if we assume normal is whatever happened last time we looked, then we are in danger of making failure look like success and success look like failure.

[Figure: post-intervention expectation after a sigma 1 negative event]
[Figure: post-intervention expectation after a sigma 2 negative event]
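
Under the same re-anchoring assumption, the arithmetic flips against a team that just had a great quarter: anchor “normal” at their +1 or +2 std dev result and a letdown is close to inevitable, with no change in the underlying system.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for anchor in (1.0, 2.0):          # "normal" re-anchored at last quarter's result
    p_letdown  = phi(anchor - 1)   # merely "slightly bad" vs the new anchor
    p_disaster = phi(anchor - 2)   # a full "disaster" vs the new anchor
    print(f"after a +{anchor:.0f} sigma quarter: "
          f"letdown {p_letdown:.0%}, disaster {p_disaster:.0%}")
```

After a +2 sigma home run, a re-anchored “disaster” is literally a coin flip. That is the “complacency” we keep diagnosing.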

Takeaways

Of course we want to do everything we can to improve our systems, but we may have to trust our instincts more than our metrics.

  1. Instinct, experience, investigation, and common sense have not been superseded by data.

  2. Global improvements may pay better dividends than “targeted” initiatives. Spread the love.