Friday, October 20, 2006

"All marketers are liars". Seth Godin is a marketer. Hence?

Sorry for the cheeky title, but Seth sort-of ticked me off today with his post "Nobody Knows Anything".

First off, as someone who makes his living providing marketing analytics, I get a bit, let's say annoyed, when someone like Seth starts a blog post with "There are two kinds of marketing analysis, both pretty useless".

I calmed down a little bit when later in the post he conceded "Here’s the really good news: in addition to analysis, marketing today offers something that actually works: a process".

But his post in general has an attitude that says "Marketing is not science", and that "most marketing breakthroughs come down, sooner or later, to luck".

Well, I am not a guru like Seth (or as "lucky" in terms of having a couple of bestsellers under my belt), and he definitely knows a lot of things I don't -- but this is my blog :-) -- so, I will say that marketing is just as much of a science as social science is. It may not have equivalents to Newton's law of gravity or Einstein's theory of relativity, but marketing analytics does have some tried and tested ways to leverage data to make smart predictions about future behavior of customers and prospects.

Now, once equipped with the intelligence that marketing analytics provides, it is completely up to the marketers how successfully they can change their strategy and tactics to yield results (so, marketing is only partially scientific) -- but marketing analytics will almost always put a marketer somewhere above pure dumb luck.
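
To make "tried and tested ways to leverage data" concrete, here is a toy sketch of RFM (recency/frequency/monetary) scoring, one of the oldest tricks in the marketing analytics book for ranking customers by expected responsiveness. The customer records and score thresholds are made up for illustration:

```python
# Toy RFM (recency/frequency/monetary) scoring -- a classic, simple
# marketing-analytics technique for ranking customers by how likely
# they are to respond to a campaign. All data below is hypothetical.

customers = [
    # (name, days_since_last_purchase, purchases_last_year, total_spend)
    ("alice", 12, 9, 1450.0),
    ("bob",   90, 2,  180.0),
    ("carol", 30, 5,  620.0),
]

def rfm_score(recency_days, frequency, monetary):
    """Score each dimension 1-5 and sum them; higher = better prospect."""
    r = 5 if recency_days <= 30 else 3 if recency_days <= 60 else 1
    f = 5 if frequency >= 8 else 3 if frequency >= 4 else 1
    m = 5 if monetary >= 1000 else 3 if monetary >= 500 else 1
    return r + f + m

# Rank customers best-prospect-first.
ranked = sorted(customers, key=lambda c: rfm_score(*c[1:]), reverse=True)
for name, *_ in ranked:
    print(name)
```

Nothing fancy -- but even a crude score like this beats mailing a list in alphabetical order, which is the "somewhere above pure dumb luck" point.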

Of course, you may agree with Seth, or not -- but I just had to get this off my chest.

Thursday, October 12, 2006

Marketing Analytics for Salesforce.com

This week and last, we've had two good conferences in San Francisco, both very relevant for marketing analytics. Last week it was DreamForce'06 from Salesforce.com, and earlier this week, we had DMA'06.

Let me talk about DreamForce first, because it has become a fad these days (especially for anyone working in BI or SaaS) to integrate with Salesforce.com using AppExchange. Since early this year, I have been eagerly trying to find an angle between our business and Salesforce.com, so DreamForce'06 was obviously very relevant.

A good friend of mine from college, John Barnes, was at DreamForce'06. John is the VP of Technology at Model Metrics, a Chicago-based firm that specializes in customization and integration of Salesforce.com with legacy systems. I pinged John to find out more about what Salesforce.com is actually doing about marketing automation and analytics, which was very insightful (thanks John). Salesforce.com's website describes marketing automation as a key component of their platform, which also includes marketing analytics. Turns out that while it is possible to define and manage campaigns using Salesforce.com, the marketing analytics bit only provides some very basic reports.

So my idea is fairly simple. If people are actually running high-volume direct marketing campaigns using Salesforce.com, then we will write an AppExchange component to suck in all the campaign and customer data into our platform and then provide much more sophisticated predictive analytics and other fun stuff on our platform -- make a lot of money, retire early, speak at the next DreamForce (sorry, I get carried away with this vision thing).

Anyway, about 10-20% of SF customers are using the marketing functionality today. SFA is the main use, but marketing is growing steadily. And even 10% of 22,000+ customers is still a good market to go after. Meanwhile, the marketing analytics (and reporting) on SF is lacking. The main obstacle to making it better is that their API does not have a join capability, so the only way to do better reporting today is to keep a local copy of SFDC and use replication software to keep it up to date (e.g., DBAmp or Relational Junction on the AppExchange).
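
To illustrate why the missing join matters: once the campaign and member tables are replicated locally, the report the hosted UI can't produce becomes a one-line SQL join. A minimal sketch using an in-memory SQLite database -- the table and column names are hypothetical, not the actual SFDC schema:

```python
# Sketch: once Salesforce data is replicated into a local database,
# cross-object reporting is just SQL. Table/column names and data
# here are made up -- this is not the real SFDC schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE campaign (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE campaign_member (campaign_id TEXT, lead_id TEXT, responded INTEGER);
    INSERT INTO campaign VALUES ('c1', 'Fall Mailer');
    INSERT INTO campaign_member VALUES ('c1', 'l1', 1);
    INSERT INTO campaign_member VALUES ('c1', 'l2', 0);
""")

# Response rate per campaign -- the kind of cross-object report the
# API alone can't deliver without join support.
rows = conn.execute("""
    SELECT c.name, AVG(m.responded) AS response_rate
    FROM campaign c
    JOIN campaign_member m ON m.campaign_id = c.id
    GROUP BY c.id
""").fetchall()
print(rows)
```

The replication products mentioned above exist precisely to keep tables like these fresh so the join can happen locally.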

Sounds like a good opportunity.

So, overall I am happy with DreamForce'06. I had a chance to shoot the breeze with an old college buddy, and learn about a very feasible way for us to get into the AppExchange game. Now I need someone who will go in on this with me as a "design partner" :-)

Tuesday, October 10, 2006

Better Blogging by Chemistry

Elliott Ng, a friend, ex-colleague and fellow blogger, has written an interesting commentary on "Top 10 tips from (to) a novice blogger", posted by Avinash Kaushik.

My take on it is that a blog's success depends on how effective it is at starting conversations -- which means people either leave comments like this, link to your post from their own blog and expand on the topic, or simply email the link around with some comments.

Why would someone do that?

Well, only if they actually care about what you write about. And caring is more of an emotional response than an intellectual one. Scoble, Doc Searls, et al really stress "having a voice", which happens when you write with passion and get away from corporate-speak, IMHO.

Scoble's point on being easy to find is also important -- but instead of going the SEO route, it is more important to find interesting conversations in the blogosphere and participate. If you are an active participant with a unique and compelling voice, the search engines are bound to pick you up.

Personally, I found it helpful to write a post outlining my reasons for blogging -- and to the rest of the bloggers out there, novices and experts alike, I'd love to lob the question: what have you found to be most effective at starting conversations? Was it different than what you'd initially expected?

Monday, October 09, 2006

How to win $1 Million from Netflix?

Fellow blogger Michael Fassnacht noted rightly that Netflix has caused quite a stir for marketing data geeks with their recent $1 million prize offer for "substantially improving" their existing Cinematch algorithm to make more accurate predictions of "how much someone is going to love a movie based on their movie preferences".

Call it "crowdsourcing", or harnessing "group smart" -- the approach is intriguing, and one of a kind. Being a curious soul myself, I decided to register a team from our company to check this out (who knows what may happen? we can be smart sometimes with enough luck :-)

A few interesting facts:
  • The contest is actually slated to run another 5 years, until 2011, with the bar raised each year to improve over the previous year's winner
  • A fine but important distinction: the algorithm needs to predict how someone will rate a movie, NOT what movie someone will rent
  • At first glance, the data provided by Netflix seems pretty "skimpy" in terms of richness. Basically you get:
    • List of movies
    • List of ratings assigned for each movie by an extensive list of Netflix members
  • My first reaction was that having extra information on the movies themselves might help. There's a bunch of stuff available from IMDB. However, apparently there are license restrictions, and also Netflix doesn't really consider extra data to be valuable in improving their algorithm (see the discussion thread)
The "enjoy the journey, not the destination" mantra may be apt for this contest. As you can see on the discussion forum on Netflix, this process has invited all sorts of interesting conversation on the validity of approaches, whether Netflix has provided enough data, why one should even bother, etc. etc. -- a dream peer review IMHO, albeit a bit too noisy. So, Netflix should be getting a lot more than their money's worth via this process -- not just better algorithms and PR buzz, but also an almost open-source-type process that involves an external community in their internal R&D.

At the moment, I agree with Michael's assessment that trying to solve this with ratings data alone might not be the best way to go. There seem to be so many other interesting dimensions that should influence someone's movie rating: movie characteristics like the cast, director, etc., reviews from critics, local media coverage, and geo/demographic information about the Netflix member, among others. None of these are being considered in the current algorithm. I can understand Netflix's hesitancy to interface with third-party resources, but perhaps they should make all the datapoints within Netflix's movie database available for this contest -- and second, encourage contestants to add their own qualitative datapoints. If the goal is to approach this as a pure data mining problem, then increasing the depth of the data should help.
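
For the curious, here is the kind of dead-simple baseline most contestants will start from before worrying about extra data: predict the global mean rating, adjusted by per-user and per-movie offsets. This is emphatically not Cinematch -- just an illustrative sketch, and the ratings below are made up:

```python
# Baseline rating predictor: global mean + user bias + movie bias.
# Not Cinematch -- just the simplest sensible starting point for a
# ratings-prediction contest. All ratings here are hypothetical.
from collections import defaultdict

ratings = [  # (user, movie, stars)
    ("u1", "m1", 5), ("u1", "m2", 3),
    ("u2", "m1", 4), ("u2", "m2", 2),
]

mu = sum(r for _, _, r in ratings) / len(ratings)  # global mean rating

# Collect each user's and movie's deviations from the global mean.
user_dev, movie_dev = defaultdict(list), defaultdict(list)
for u, m, r in ratings:
    user_dev[u].append(r - mu)
    movie_dev[m].append(r - mu)

def predict(user, movie):
    """Predicted rating = global mean + avg user bias + avg movie bias."""
    bu = sum(user_dev[user]) / len(user_dev[user]) if user_dev[user] else 0.0
    bm = sum(movie_dev[movie]) / len(movie_dev[movie]) if movie_dev[movie] else 0.0
    return mu + bu + bm

print(predict("u1", "m1"))
```

Beating a baseline like this by 10% is exactly where the contest gets hard -- which is why the extra dimensions above start to look tempting.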

I'll keep you all posted on how far we get with this. Being a small company, we will do this in the copious spare time left over after the existing client work that pays the bills. Still, it should be a lot of fun.