Advertising wisdom from 1917

This statement was written over 100 years ago, but still applies today.  Brand advertising that is consistent over time, and that prominently features the brand’s distinctive elements, builds buyer recognition and acceptance – which is financially valuable to the brand owner.  Not only valuable, but long-term valuable.  But the work has to continue – you have to keep doing it.



Product Review sites

What should managers know about product review sites?

There are two broad sorts of customer review site: Feedback sites and what I’ll call Recommender sites.  Feedback sites give a biased picture of what your customers think.  Recommender sites are basically people saying nice things about your brand in exchange for free product, so they are also completely biased.  Neither type of site should be taken seriously as a reflection of what your customer base really thinks.
(Note: expert review sites like Choice are outside the scope of this discussion.)

Feedback sites.

This type of site is meant to be where people give feedback, but in practice it is a venue for complaints about your brand.  Several sites in Australia are like this.  Almost all the reviews are bad!  But this is an example of selection bias.  The people who go to these sites go for a reason – mostly, they’re irritated.  So the reviews for, say, health insurance are all very bad – the majority of reviews say brands like Medibank Private, BUPA, Australian Unity and Teacher’s Health are “terrible”.  But that can’t be the view of the broader customer base.

Likewise the reviews for the big 4 banks in Australia are: NAB 75% “terrible”, CBA 64% “terrible”, Westpac 73% “terrible”, and ANZ a whopping 80% “terrible”.  Yet on average, large-scale surveys of clients of these banks yield satisfaction scores around 7.5 out of 10. Why the difference?  The surveys cover the general customer base, whereas product reviews come from the small proportion of clients who have had a bad experience.
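The mechanism is easy to see in a toy simulation.  The numbers below are made up purely for illustration: assume overall satisfaction really does average about 7.5 out of 10, and assume (the key hypothetical) that the less satisfied a customer is, the more likely they are to post a review.  The review-site average then comes out well below the true population average, with no fakery required.

```python
import random

random.seed(1)

# Hypothetical customer base: satisfaction roughly normal around 7.5/10
# (illustrative numbers only, clipped to the 0-10 scale).
population = [min(10, max(0, random.gauss(7.5, 1.5))) for _ in range(100_000)]

# Assumed behaviour: the probability of posting a review rises steeply
# as satisfaction falls - irritated customers are the ones who bother.
def posts_review(score):
    return random.random() < 0.4 * (1 - score / 10) ** 2

reviewers = [s for s in population if posts_review(s)]

avg_all = sum(population) / len(population)
avg_reviews = sum(reviewers) / len(reviewers)

print(f"average satisfaction, whole base: {avg_all:.1f} / 10")
print(f"average satisfaction, reviewers:  {avg_reviews:.1f} / 10")
```

The self-selected reviewers score the brand noticeably worse than the customer base as a whole, which is exactly the gap between the review sites and the large-scale bank surveys above.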

The management take-outs for sites like this are:

1. Understand the important concept of selection bias.  If you get information from an unusual sample of clients, the results bear no relationship to what your “average” clients feel.

2. Judge results for your brand in the context of what other brands get.  In other words, don’t necessarily worry if 75% of your product reviews are “terrible”, because all it shows is that people who are unhappy with you tend to vent online. Check against what your competitors get.  But also, don’t be complacent: resolve the complaints and identify what systemic issues lead to them.

3. Use review sites to keep in contact with disgruntled clients, to fix their problems, to identify common issues, and to show that your business does care.

Recommender sites.

These are sites where consumers submit reviews in exchange for free product.  On one example site, guess what: all the reviews average around 4.5 out of 5.0!  Typical reviews run, “oh I bought brand X and it’s really amazing, it’s really made my hair shine like my pony’s tail!”.

This might all be harmless stuff, but it obviously gives managers no clue as to what consumers really think of the brand.  It’s also doubtful that many consumers would take the reviews seriously, as they’re so blatantly biased.  Most people who look at sites like this are probably just working out how to get their own free pack of shampoo or cookies.

Another example is BzzAgent, where consumers join to act as semi-professional recommenders.  Apparently “you share your thoughts about them [brands] by starting natural, genuine conversations with real-life friends and online buds” … and perhaps tellingly, “spreading word-of-mouth like you mean it” (from the BzzAgent web site).  So, it’s all genuine, right!

Your brand might gain some exposure or reach this way, but the reach achieved by individuals tweeting about brands to largely uninterested friends seems low.  A look at many of the tweets from sites like this suggests they achieve fairly low coverage.

In summary, apart from letting you monitor and respond to individuals who are having trouble dealing with you, product review sites offer a distorted view of what your brand’s buyers think.  There is a lesson here we should apply more broadly, to market research: ensure our samples are representative of the population we’re interested in, otherwise we will get completely erroneous information.



The “Attraction Effect” debunked.



The Attraction Effect is this: you add a high-priced but not necessarily better option to your range of products, thereby boosting the sales of one of your other products – probably the one that used to be priced highest.  The Attraction Effect, also known as the ‘Decoy Effect’, has been widely researched, taught in marketing classes and presented at pricing conferences around the world.  It has intuitive appeal: you have two products, priced at say $30 and $60. You want to increase sales of the $60 option, so you introduce a $100 product that’s probably not much better than the $60 one. The $60 product now doesn’t seem expensive. Hey presto, its sales increase. Sounds great, right?  The Attraction Effect was first pronounced to the world in the 1980s.

But – most of the evidence for the Attraction Effect was obtained by asking respondents (often US college students) to choose between various products.  Typically these products were presented using text descriptions (Product A, $295, quality level 80; Product B, $350, quality level 90 …).

The Attraction Effect was debunked in 2014 in two very large-scale studies. It seems it was an artefact of asking people stylised questions about price and quality, using principally text descriptions unlike those they encounter in real life.
To quote the authors of the studies:
“Ninety one attempts to produce an attraction effect … only 11 reliable effects … significantly fewer than expected”.
Yang & Lynn (2014), ‘More evidence challenging the robustness and usefulness of the Attraction Effect’, Journal of Marketing Research, 51(4).

“We found no evidence of the attraction effect …”
Frederick, Lee & Baskin (2014), ‘The Limits of Attraction’, Journal of Marketing Research, 51(4).

So – you can’t magically boost the sales of one of your products by adding in a higher-priced option.

This new evidence supports the idea of replication – carefully re-examining the validity of past studies. Replication is well established in many disciplines but is a bit underdeveloped in marketing.  Replication is the way that marketing science can slowly progress.