The Not Quite Adventures of a Professional Archaeologist and Aspiring Curmudgeon

Tuesday, January 24, 2012

Permission Marketing, Math, and Annoyed Executives

Back in the late 90s, I worked in one of the marketing departments of a large computer hardware manufacturer.  The management of the various marketing departments was obsessed with a "marketing guru*" named Seth Godin.  Ol' Godin was, at that time, pushing an idea called "permission marketing" (often credited to him, but existing much earlier).  The concept, in a nutshell, is that you identify people who might want your product or service, and you get them to agree to read or listen to messages that you send to them about your product or service.  The basic idea is that they will be receptive to, and appreciative of, what you have to say, and you aren't annoying other people by interrupting what they are doing (watching television, reading a website, listening to the radio, etc.) and thereby generating ill will from potential future customers.  While the idea has its limits, it's basically a sound concept, and seems like something that should work.  I have seen some anecdotal evidence to indicate that it does work some of the time, but I have no idea how it works overall in practical situations - and to be honest, I really don't care.

Now, the way that my employer decided to apply the permission marketing concept to our way of doing business was to identify the different marketing segments, and ask companies within those segments to sign up for various programs where they would get a few different goodies, and would also receive regular emails from us trying to provide information that might persuade them to purchase our products.  This had been going on for a few months when I was brought on board, and the program to which I was assigned had approximately 12,000 members, which got whittled down to around 10,000 once you eliminated cases where two accounts existed for the same company (usually because two different people at that company had signed up).  It was at this point that the upper management wanted to prove that their new program was working.

Note, I say that the upper management wanted to prove that it was working, not look at it and see whether or not it was working.  This is an important distinction.

I was given access to all of the data for the program - member data, email send dates, click-through data (we had a way of seeing if the emails were read - I cannot remember at this point whether the email simply contained a link or if there was some way to track whether the email was opened, though the latter seems like it would have been riddled with errors), and the contents of all emails.  I discovered that the average email generated a total of five click-throughs, or a rate of roughly 0.04% to 0.05% (depending on whether you count duplicate accounts or not).  There were a few noteworthy exceptions.  Emails that demanded action be taken to maintain membership had click-through rates as high as 25%, but these were all clearly isolated incidents not to be included with the rest of the data.  And during periods when there was a very high number of new members, the next one or two emails would have click-through rates as high as 15%.  These higher rates during periods of program expansion were noteworthy, as they seemed to come from new members drawn in by the novelty of the emails, but the rates always settled back down to 0.04% to 0.05%, sometimes dropping as low as 0.01% - an abysmal range for a program that was costing no small amount of money to implement.
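For the curious, the arithmetic behind those percentages is nothing fancy.  Here is a minimal sketch in Python using the approximate figures quoted above (the numbers and names here are mine, reconstructed for illustration - not anything pulled from the company's systems):

    # A quick sanity check on the click-through rates quoted above (Python 3).
    clicks_per_email = 5        # typical total click-throughs on a single email
    raw_members = 12000         # membership as signed up
    deduped_members = 10000     # after merging duplicate accounts per company

    for label, members in [("raw", raw_members), ("de-duplicated", deduped_members)]:
        rate = clicks_per_email / members
        print(f"{label}: {rate:.4%}")   # raw: 0.0417%, de-duplicated: 0.0500%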

So, I put all of this together, wrote a report, and handed it to my supervisor, who handed it to her supervisor, and so on up the chain of command to one of the Vice Presidents in charge of marketing (there were a few of these guys, and I don't recall which one ultimately received the report).

I didn't hear any more about this for a few weeks.  And then I received an email from the VP in question, addressed to my boss, with me as a CC rather than amongst the (several) primary recipients (the tone of the email left me feeling that this was a way of letting someone lower on the food chain know that they were to be "instructed" without ever directly addressing them - I always found the practice insufferable and insulting).  The email demanded to know - of my boss, not of me - whether I had a degree in marketing, and then rather strongly implied that if I did not, everything that I might have to say on the subject was meaningless.  Being the sort of person that I am, I responded, stating that, no, I did not have a degree in marketing, but that I could do basic math, and that a 5/10,000 response rate was pretty sad and not the sign of a healthy marketing program.  My boss was again informed that, as I did not have a degree in marketing, I had not been ordained into the priesthood for which Mr. Godin had written the holy texts, and all analysis that I had done was to be disregarded**.

This has always struck me as an unaccountably odd position for a corporate executive to take.  Admittedly, there was always the possibility that some other principle was at work here of which, had I gone through school for a marketing degree, I might have been aware, and which would explain why such a low response rate was not necessarily a bad thing.  I cannot conclude absolutely that I was not missing some important concept or piece of information.  However, the tone of the emails (and my later interactions with this VP), coupled with the fact that he never bothered to point out any such concept or information, suggested that this was not the case.  Rather, there seemed to be two things at work: 1) a good deal of time, effort, and money had been put into this particular marketing scheme, and in keeping with the sunk cost fallacy, any comment that the program should be altered or dropped was not looked upon kindly; 2) within the business side of the tech industry at that time (whether or not this is still the case I cannot say), there was a definite pressure towards group-think, and those who were skeptical of the positions taken by the group (or, more often, positions merely given lip service by the group but held by upper management) were often seen as not being team players.

Regardless of the precise reason, I never did receive any sort of explanation from anybody regarding the VP's distaste for my analysis, which I had been asked to produce, but it was made clear that he saw it as somehow offensive.  I was, however, assigned to other tasks and never asked to provide an analysis of anything again.  The program continued through the remaining year that I worked for that company.  I haven't a clue as to whether it still exists.

One interesting side-note: some of the methods that I used for slicing up the data - both to gauge the effectiveness of the program and to see whether some sub-sets of the clients were more interested in it than others - I later adapted to examine the frequencies of different artifact types at archaeological sites, and I used them successfully in writing my Master's thesis.  So, in the end, this assignment did work in my favor.

*"marketing gurus" came and went like sheets at a hotel.  They usually had a kernel of a good idea that was then packed about with all manner of pseudo-profound nonsense in order to turn what could have been a one-page instruction sheet into a book with accompanying seminars and DVD sets.  Occasionally, one would actually have a profusion of good ideas, but most were one-trick ponies who enjoyed their five minutes, and then were forgotten when, after a few months of trying to get the "new, game-changing idea" to work and discovering that it failed when tested against reality, it was abandoned by the marketing executives who had found a new guru to teach them what was sure to be the one true religion.

**Strangely, I was allowed to maintain access to the data sources until a year later, when I ran a series of statistical tests demonstrating that the individual sales people had little influence on how much product was sold, but that changes in engineering and manufacturing had a huge impact - and that, therefore, the engineering and production staff should be the ones getting the sales team's rather large bonuses and other perks (such as, and I shit you not, safari trips to Africa, month-long tours of Italy, and so on).  I was laid off a few months later, admittedly for reasons unrelated to my oddball exercises in applied mathematics, but I have always wondered whether management wasn't at least a bit happy to be rid of an irritant.
