Here’s the thing about ‘Efficacy’. It doesn’t exist. It is an abstract concept, a placeholder, a shorthand, a broad clustering of ‘things that a product does’, a descriptor under whose vagueness all kinds of good things can hide.
There is only one time in a Market Research presentation when ‘efficacy’ should be capitalised, and that is on the slide that shows “Efficacy: what doctors meant when they said they want more of it…”
Unfortunately, that is wishful thinking. An overwhelming majority of primary research still comes back reporting that doctors want more Efficacy, Safety and Tolerability, and ideally at lower cost. Usually that research will have been framed by what your audience thought they could have more of.
Imagine telling a salesman that you want a car. “What kind of car?” “One with more, well… efficacy. One that does what cars do, but better…” Or, imagine presuming that people buy new mp3 players because they want to hear their mp3s better.
If we take something as simple as obesity, all of the following parameters could be included in a review of Efficacy:
degree of average weight loss at a certain timepoint (the usual understanding),
effect on satiety,
effect on ‘food addiction’,
durability of response,
effect beyond withdrawal,
amplification of lifestyle change,
effect on visceral fat,
effect on mood or mental appetite,
effect on fat distribution (visceral vs subcutaneous),
responder rate…
Each respondent in a Market Research study on obesity may have meant one or more of those dimensions when they chose ‘Efficacy’. It isn’t enough to assume that we’re all talking about the same thing – why not go one step further, and find out exactly which they meant?
This is all unfortunate, for it is in the detail that efficacy becomes beautiful, and strategic opportunity becomes possible. Consider the simple switch that enabled Lipitor to gain advantage: moving the market from measurement of outcomes to measurement of a lab test as a primary measure of efficacy. Lowering LDL is a measure of efficacy, as is lowering major cardiac events - physicians were ready to believe that one of those led to the other, however long the chain between the two.
The problem comes when we all assume we’re talking about the same thing when we use the word (unfortunately, unlike some other languages, English gives us no way to hear whether we mentally capitalised the ‘e’ or not…). The Development folks hear that physicians want more Efficacy and think, “well, that’s fine… Let’s go looking for something, anything, in the studies that fits under that banner…” That was the case for the statin market, where ever-escalating outcomes studies proved they reduced serious events. But luck and judgement led to Lipitor seeing things differently, saving money, time and risk.
Even worse, an acceptance of the use of Efficacy ends up with a view that your product, of all the products out there, offers the perfect ‘balance’ of Efficacy and Tolerability (a net clinical benefit that just happens to favour your drug). Consider an anticancer drug for a moment. Efficacy is often taken to mean Overall Survival, or Progression-Free Survival. Those are perfectly reasonable things to measure. However, there are many other ways to evaluate the efficacy of an anticancer agent: visible effect on tumour regression, effect on symptoms, effect in different lines of therapy, in different stages, in different risk groups. All of those dimensions are running through the minds of oncologists, who see patients in their real worlds. So, for a marketer to claim their drug offers ‘a perfect balance’ will often come across as more than a little thoughtless.
The problem of that capital E is manifest in any review of ‘Unmet Need’. (The same rule applies to those capital letters…) Because we can all agree that Efficacy is a shorthand for a granular set of (often conflicting) things that a drug might do, any review of unmet need must respect that granularity or be rendered pointless, a waste of Energy, Enthusiasm and Effort…
Even worse is that clinical trial endpoints can be assumed to be the best measure of ‘efficacy’, ignoring the adage: not everything that matters can be measured, and not everything that can be measured matters. So our provable, claimable ‘efficacy’ can often be reduced to whatever statistical significance was shown in studies. The thing that matters might not have been measured, but we now have to make what was measured matter.
This is at the core of the challenge of positioning. If done early enough, the wonderful connection between something your drug does and something your audience wants (even if they don’t know it yet) can be built into studies, and claims can be made. If it is done late, your task becomes to create a belief from flimsy evidence and real endpoints that don’t measure value.