Tuesday, August 15, 2006

Commitment in the Marketplace

What is the price you pay for making your service a "no-commitment" option for your users? While most start-up experts will tell you that less commitment is key to getting new users to try a service or product, is there a flip side?

Recently, I read Influence: Science and Practice by Robert Cialdini. While the entire book was packed with interesting insights, one of the most profound was the effect commitment has on a person. Cialdini gave numerous examples, ranging from cults to volunteer efforts, showing how people reinforce their belief that they made a good decision once it's been made. I recently saw this first-hand when I had to make a significant career choice. It was a difficult decision, as my two options were not directly comparable. I actually went back and forth a few times, even after I thought my mind was made up. However, as soon as I called and made a commitment to one company, the turmoil vanished, and over the past several weeks I have become more confident that I made the right choice. At least some of this, I am sure, is due to the commitment effect mentioned above.

One of the most common themes I've seen among new startups this summer has been businesses that bill themselves as the "eBay of XXXXX", where XXXXX is something other than a material good. The companies are aiming to provide a marketplace for intangible goods, much like a consumer version of the B2B exchanges that cropped up in the late 1990s. Some of the players in this market are Ether, IPSwap, etc.

While each of these wants to be the next eBay, it's important to examine how the intangible nature of their goods may affect customers' attitudes towards their service. If I want to sell my expertise, for instance, I can use Ether to connect me to buyers. However, I can also use several other services to do the same thing in parallel. By contrast, in the early days of eBay, if I had a single collectible Beanie Baby, I could only sell it in ONE place. Each time a person listed something on eBay, they were making a commitment to that platform. This had a reinforcing effect: they convinced themselves that eBay was the right place to sell, which made them more likely to use it again in the future. True, these days eBay does much of its business with sellers that have several of the same item and sell through multiple channels. However, this occurred only AFTER eBay hit its critical mass and became a de facto retail channel. When eBay was building that initial traffic, I believe the commitment effect was a strong driver of success.

Without this commitment effect, customers are less likely to become the passionate evangelists you want them to be. This can obviously be overcome with a great product that does something no one else can, but when your service is counting on a network effect to differentiate itself, it is much easier if your network is wedded to your service alone.

So how can this be overcome? I think the key lies in building a secondary relationship with a user, one that can create commitment over time. For marketplace systems in particular, the development of a reputation is one possible approach, as it encourages a customer to commit to one platform to avoid spreading the benefits of a good reputation too thinly. This, however, may be a short-term solution, as portable reputation systems like Rapleaf may soon prevent reputation from being a focus of commitment.

Wednesday, July 19, 2006

Convergence 2.0 - The networks

I posted last week about the theme of "Content is King" that was all over the place at Convergence 2.0. My other major takeaway from the conference was that the networks are still thinking in old models. While the networks were all speaking of the shift in power back to content in tones of glee, there were a couple of points that I think they missed.

1. For a lot of their content, they are not the ultimate producer.
2. Professional quality content is hard, but for some types of content, it's not as necessary.

The Context of the Content
During one of the breaks, I had a chance to speak to an IP attorney at one of the networks. We chatted about one of the network's clips that had found its way onto YouTube, and the resulting explosion of attention generated by that leak (okay, it was an NBC lawyer, and the clip was "Lazy Sunday"). I asked why the network didn't just view such activities as free advertising. Her answer was that while the network doesn't have a problem giving the clips away for free, it wants them to be available only on its own site.

I didn't understand why it was so important that it be on their site until I recently wanted to show a friend the Lazy Sunday clip. Before, I had simply pulled up the clip on YouTube, shown it, and gone about my business. This time, however, once I found it on NBC's site, it was listed with several other SNL digital shorts. After we watched the first clip about 5 times in a row ("You can call me Aaron Burr, the way I'm dropping Hamiltons" - Genius!), we proceeded to check out the other digital shorts.

Needless to say, none of them were nearly as good - but the point is that we actually watched them. As a media consumer, there is a signal in proximity (on TV it is temporal, while on a site it is more spatial). We naturally assume that things near each other will be similar in some important measure. Coupled with the inertia that keeps us in the same area, this can be a powerful effect. The networks have used that for years, as they leverage the "lead-in" from one show to boost the ratings of another, weaker show (think of all the one-season wonders that followed Seinfeld).

Microchunking blows that proposition out of the water. It means that each piece of content has to stand alone and win popularity on its own merits. Already, this is hurting the music industry, as they can no longer bundle 2 good songs with 14 poor ones on an album - we just buy the single off iTunes. Similarly, if each show spreads on its own through numerous channels, there's no audience to leverage, and the weak content will not survive on its own. That's why the networks want to control the context of the content - it allows them to piggyback weaker content on the stronger.

Capitalism of Content
Now, if I were a content producer with a hit, why would I want to allow someone else to create value based on my work? If the power in negotiations is shifting from the distributors (Cable and RBOCs) to content (proxied by the networks), why wouldn't that shift continue, and ultimately end up with the primary producers of the content? After all, if TV is no longer the only way to distribute content, then all of a sudden Fox needs American Idol more than American Idol needs Fox, right? When each piece of content exists on its own, the value of top content will rise exponentially, while that of lower-quality content will actually drop.

I asked this very question during the conference, and the answer was that content aggregators would still be necessary to enjoy economies of scale in promoting the content. The rule of thumb I heard was that if promotion expenses ever pass production expenses, you are in trouble, and that can only be avoided by aggregating content. However, if you stop leaning on the promotion of content and allow each distribution channel to compete for the best content, the best content should naturally rise to the top anyway. Therefore, I see the networks having a harder time signing top content once the alternative distribution methods become more mainstream.

The Digg Network
The final point where I disagreed with some of the network representatives was in the area of user generated content. Over and over, it was dismissed as a distraction, with the mantra of "Professional quality content is HARD - not everyone can do it". I agree with this to a point - it is unlikely that user generated sitcoms will become a big competitor. Any form of content that requires serial quality for an extended period of time will probably stay the province of professionals.

However, for content types where each piece stands on its own, the story is completely different. While professionals might maintain the highest average quality, amateurs can meet or exceed that quality for individual items. Taking a print analogy, there are very few bloggers who can consistently write at the level of the Washington Post, the New Yorker, or Fortune (and most of those who can tend to write in both types of media). However, the best posts of the day on Digg are often as good as or better than anything you will read from those professionals. As content spreads to new channels, the new aggregators that succeed will be the ones that bring the best of both types of content to us. Consumers will judge aggregators on their ability to find and organize the best content, but since content will be available to every aggregator, there won't be any room for weak content that simply piggybacks.

Wednesday, July 12, 2006

Convergence 2.0 - Content is King

I attended the Convergence 2.0 conference hosted by The Deal a couple of weeks ago.  After a full day of listening and talking to experts and practitioners in new media, old media, and telecom, I came to the following conclusions:
  1. Content is king - the ability to capture value is shifting from distributors to content producers.
  2. The networks (at least the representatives I heard) still don't get how tenuous their position is.
Content is king
Without a doubt - this was the theme of the day.  As the number of different channels available to reach consumers increases (Cable, IPTV, mobile, portal, search, etc), distribution as a concept starts to become commoditized, even if a particular type of distribution is a monopoly or duopoly.  Pretty much all the players agreed on this, the carriers with resignation, and the networks with glee.

The one exception to this, I think, is in connecting end users to one another.  It's in this area that distribution channels have a great chance to differentiate themselves from one another.  While watching a music video on a cell phone vs. a computer screen vs. a plasma TV might be a difference in degree, the way we communicate with others over those channels is a difference in kind.

What does this mean for start-ups targeting this space?  I think it means that if you are a business bringing content to users through a new channel, you need to focus more on differentiating your content, not your delivery.  I have actually seen a couple of companies pitching mobile services, and they are all focused on their delivery technology.  They consider the content they are delivering a mere afterthought.  When talking about their competition, they focus on others in the same distribution space.  But in the end, I think they should worry more about the incumbents in older channels with higher-quality content (be that a more thorough database, better user-generated or professional content, or even more relevant search results).  The barriers to expanding good content to a new channel are much lower than the barriers to accumulating that good content in the first place.

Tuesday, June 13, 2006

This is next

Well, it's official - according to Guy Kawasaki, I only add $250K to a start-up's value now that I have received my MBA (I'm assuming I still get to keep my engineer value).  After 3 years, I graduated from the Langone Program at Stern!

In addition, as of yesterday, I started as an intern screening new deals for an early-stage tech investor (I still don't know the blogging policy, so I'll defer names until I know it's okay).  This transition from the operating side to the investing side has been a goal for quite some time, so hopefully this is just the first step in my next career.  With this transition, I am going to try to abstract the lessons I am learning about investing in new companies.

The first of these lessons is that the best way to learn how to write any kind of document is to read a lot of them.  In school, I saw my own case writing improve dramatically after I read dozens of case assignments as a TA.  As an entrepreneur, when I wrote my own plan, every word felt precious and every statistic captivating.  However, after reading dozens of plans, I know that there are paragraphs or even sections that cause most readers' eyes to glaze over.  Most of these recount the painstaking process used to derive a market size or valuation.

A good approach is to find others with business plans in related, but not identical, fields, and read as many of them as you can.  Whenever you find yourself skimming or even skipping a section in their plan, highlight it.  Then go back and find the corresponding sections in your own plan.  Most likely, you'll find that there is only one sentence in each such section that you really want to convey.  Keep that one sentence.  If you must keep the rest, move the statistics, methodology, assumptions, etc. to an appendix and reference it.

This has two benefits: 1) it lets readers focus on your results, and dig into the methodology only if the results interest them, and 2) as you find out more and have to tweak the numbers, it is easier to find them all and keep your plan consistent.


Tuesday, April 18, 2006

Building network economics

I've been working with a number of start-ups in the NYC area recently, and most of them are launching community-based sites, hoping to capture some of the value created by networks of users. While most of these ideas are interesting once the network is built, the one part that seems to be overlooked in many cases is how to reach that critical mass.

Most people talk about the network value of successful sites. However, before a site is successful, the network value is much smaller than the non-network value. Until a critical mass of users exists, the only value to a new user is the "selfish motive". This is the reason you would be willing to be the very first person on a community site, even if you thought no one else would ever join the community.

A great example is flickr - one of the poster children of the web 2.0 wave. Now that the site has a great community and active content posters, the value of the community feeds itself and helps them bring in even more people. But at the beginning, when there was no one else uploading and tagging photos, they still had something useful - an intuitive and quick way to upload and share photos. That was the selfish motive for people to use the service.

With that in mind, I think it's important to clearly define your customers' selfish motive for using your service/site/network, and at the beginning, THAT needs to be your primary marketing message. Only once you achieve critical mass can you switch to really pushing the value of the network itself. Push the network value too early, and you risk people expecting more than they find, and not sticking around to help build the network itself.

Warning - equations and low-level geekspeak below!

A few months ago, there was a lot of attention paid to Metcalfe's Law and, later, to its big brother, Reed's Law. Both of these describe how the value of a network grows faster than linearly with the number of people on the network (or, correspondingly, in the community). Metcalfe's Law is usually quoted as saying that the value of the network is N^2, where N is the number of members of the network.

Now, this is an approximation based on the assumption that N is large. More precisely, Metcalfe's Law says that the value to each member is a + b*N, where a is MUCH greater than b. Therefore, the total value is simply

N*(a+b*N) = a*N + b*N^2

For networks with large N, the value to a new member is driven much more by the b*N portion of the value than by the a. However, for those first early adopters, there absolutely has to be a selfish reason to join - the a.
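To make the crossover concrete, here is a quick numeric sketch (the values of a and b are arbitrary illustrations I picked for this post, not measured quantities):

```python
# Per-member value is a + b*N with a >> b, so total value is a*N + b*N^2.
# a = 1000 and b = 1 are made-up numbers, chosen only to show the crossover.

def total_value(n, a=1000.0, b=1.0):
    """Total network value: n members, each worth a + b*n."""
    return n * (a + b * n)

for n in (1, 10, 100, 1000, 10000):
    selfish = 1000.0 * n   # the a*N term: value members get on their own
    network = 1.0 * n * n  # the b*N^2 term: value from the other members
    print(f"N={n:>6}: selfish={selfish:>12,.0f}  network={network:>12,.0f}")
```

With these numbers, the network term only overtakes the selfish term once N passes a/b = 1000 members; below that point, the selfish motive is doing almost all of the work.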

Update: This post had been in draft form for a while, and it looks like Tom Evslin and his readers beat me to the punch, both in time and in content.

Monday, January 09, 2006

Consumer EAI

In my background working with ERP systems, there were discrete trends that washed over the industry, one after another. The driving force from one generation to the next was an alternating cycle of features and integration:

  1. The first wave was the single-focus, stand-alone applications. This goes back to the time when SAP was just manufacturing software, PeopleSoft did nothing but HR, etc.

  2. Next, these applications tacked on additional services, but to be honest, their original focus was really still their strongest selling point. Having worked with the first iterations of modules like SAP HR, let me assure you, it was no fun having to implement v1 of any of these attempts.

  3. After the first group of companies implemented these suites, and realized they weren't really enterprise ready in all of the functional areas, we came upon the era of portfolio management and EAI. Companies chose "best-of-breed" solutions for each area, and then tried (with varying levels of success) to tie them all together. Some companies that did this well stayed here, while others moved back to...

  4. The all-in-one solutions. Consolidation in the enterprise software industry had led to better all-in-one suites. SAP and Oracle, in particular, had bought up enough of their smaller competitors to shore up their weaker areas. Although each of them still retained their original areas of expertise, their other offerings were at least good enough to be deployed.

Looking at the consumer space being served by internet services now, I'm starting to see parallels:


  1. AltaVista does search, Yahoo does a directory, Amazon does B2C book sales, etc.

  2. Yahoo, AltaVista, and everyone else with a website tries to make an all-in-one portal.

  3. This is where I think we are now - multiple sites are cropping up on a daily basis, trying to provide the best solution for very narrow niches (del.icio.us for bookmarking, LinkedIn for professional networking, AirSet for Calendaring, etc).

  4. We're beginning to see this next step, as Google, Yahoo and Microsoft buy up the niche companies from stage 3 and tie their services together into more stable suites.

Despite these parallels, the differences between business and consumer markets may prevent stage 4 from becoming as strong among consumer services. For better or worse, individual consumers have a much broader range of needs, and so are more likely to want to pick and choose individual services to meet their needs. Especially among early adopters, consumers may be willing to trade off interoperability in exchange for better functionality. This may be reinforced by the fact that consumer system integration does not need to scale quite as much as enterprise systems.

However, what we're really missing is EAI for these services. Individual mashups exist to tie some pairs together as point-to-point solutions, but what we really need is a common middleware layer. RSS may provide the basis for this, but RSS is really nothing more than message transport (much like MQSeries was for enterprise applications). It's the next layer, the message brokering and routing, that's really missing for consumer apps. Once we get this in place, we as consumers will be able to define our own data models and populate them from the range of web services that expose APIs.
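To make that missing layer concrete, here is a toy sketch of what a consumer message broker might look like (the class, topic, and item names are hypothetical illustrations of mine, not a real product or API):

```python
# A toy consumer message broker: RSS-style items arrive as messages, and
# the broker routes each one to the handlers subscribed to its topic.
# Everything here is a hypothetical sketch, not an existing service.

from collections import defaultdict

class ConsumerBroker:
    """Routes incoming feed items to registered handlers by topic."""

    def __init__(self):
        self.routes = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a handler to receive items published under a topic."""
        self.routes[topic].append(handler)

    def publish(self, topic, item):
        """Deliver an item to every handler subscribed to its topic."""
        for handler in self.routes[topic]:
            handler(item)

broker = ConsumerBroker()
broker.subscribe("bookmarks", lambda item: print("new bookmark:", item["title"]))
broker.subscribe("contacts", lambda item: print("new contact:", item["name"]))

# Imagine these arriving as RSS entries from del.icio.us and LinkedIn:
broker.publish("bookmarks", {"title": "Metcalfe's Law revisited"})
broker.publish("contacts", {"name": "Jane Doe"})
```

The transport (RSS) just moves the items around; the subscribe/publish layer in the middle is the brokering and routing piece I am arguing is missing.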

Imagine being able to extend the core concept of a contact. In addition to contact information, I'd like to be able to hold a history of meetings with that person, a trail of our emails and IMs, and some favorite postings from their blog. I could populate this larger model with data from LinkedIn, AirSet, gMail, AIM, and del.icio.us. Being able to see all of this in one place would be much more valuable to me than having the same data spread among the different services.
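As a sketch of what that extended model might look like in code (the field names and the mapping to specific services are my own hypothetical illustrations; none of these services expose such a model today):

```python
# A hypothetical extended contact model, aggregating data that currently
# lives separately in LinkedIn, AirSet, gMail, AIM, and del.icio.us.

from dataclasses import dataclass, field

@dataclass
class Contact:
    """One person, with history pulled from several consumer services."""
    name: str
    email: str = ""
    meetings: list = field(default_factory=list)   # e.g. from AirSet
    messages: list = field(default_factory=list)   # e.g. from gMail / AIM
    posts: list = field(default_factory=list)      # e.g. favorites from del.icio.us

jane = Contact(name="Jane Doe", email="jane@example.com")
jane.meetings.append("2006-01-05: coffee, midtown")
jane.messages.append("IM: can you review the draft by Friday?")
jane.posts.append("http://example.com/janes-best-post")
```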

One obstacle I see to this is the approach to monetization used by many of these services. With most of them supported by ad revenue from the site itself, they are not likely to allow background access to the underlying data. This will probably end up being resolved with premium subscriptions that allow direct access.

While this represents a fundamental advance in consumer services on the web, it's really just the first step. Once this is in place, things REALLY get interesting, as we get to start building workflow on top of it.

Monday, October 24, 2005

Irrational Games

Thomas Schelling and Robert Aumann were this year's Nobel Prize winners for economics. They shared the prize for their contributions to game theory, with Aumann helping to create the theoretical foundation of the field and Schelling applying those theories to interactions between nations.

Other than the Peace Prize, it seems that the economics prize always generates the most controversy. Typically, this is driven by a conflict in the political ideologies that support or attack the winner's economic theories. This year the controversy was no less intense, but it seemed less political than usual, and more academic in nature. Several articles came out that attacked game theory as a field, claiming that it was too idealistic and based on too many assumptions that have been shown to be false in real life. One such article was written by Michael Mandel in Business Week. In it, he asserts:

Instead, the real progress in economics these days is coming not from game theory, which has been around for 60 or more years, but from the much newer fields of behavioral and experimental economics. Behavioral and experimental economics don't start with the assumption of rationality used by game theory. Rather, as the name suggests, the focus is on looking at how individuals and organizations actually make decisions in practice, including systematic biases, misperceptions, and just general all-around bloody-mindedness.


The article basically states that game theory is intricately intertwined with the assumption of fully rational behavior, and that fields such as behavioral and experimental economics are simply incompatible with it.

With all due respect to the author's PhD (I'm just an MBA student, and not even specializing in economics), this isn't how I've learned game theory. I have always understood game theory to be more of a framework for determining strategy, based on analysis from all the players' standpoints. It doesn't matter whether the players make rational, utility-maximizing choices or not. What does matter is that you know what drives their choices.

If you can predict the other parties' actions, no matter the motivation, the framework of game theory still applies. These choices can be based on truly rational self-interest, on altruistic social conscience, or on the ego of the CEO. In determining these decisions, behavioral and experimental economics are indeed valuable, as they often predict individual behavior better than the standard assumption of purely rational value maximization.
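As a minimal sketch of that point, here is a best-response calculation that takes a *predicted* distribution over the other player's moves, whatever the motivation behind them (the action names and payoff numbers are made up for illustration):

```python
# Best response to a predicted opponent: the prediction can come from
# rational analysis, behavioral economics, or knowing the CEO's ego.
# Payoffs and action names below are invented for illustration.

def best_response(my_payoffs, opponent_mix):
    """Return my action with the highest expected payoff, given a
    predicted probability distribution over the opponent's actions."""
    def expected(action):
        return sum(prob * my_payoffs[action][opp]
                   for opp, prob in opponent_mix.items())
    return max(my_payoffs, key=expected)

# My payoffs in a pricing game: outer key is my move, inner key the rival's.
payoffs = {
    "cut_price":  {"cut_price": 0, "hold_price": 5},
    "hold_price": {"cut_price": 2, "hold_price": 3},
}

# Against a rival predicted to mostly hold prices, I should cut; against
# an ego-driven rival predicted to slash prices regardless, I should hold.
print(best_response(payoffs, {"cut_price": 0.2, "hold_price": 0.8}))  # cut_price
print(best_response(payoffs, {"cut_price": 0.9, "hold_price": 0.1}))  # hold_price
```

The framework is identical in both cases; only the source of the prediction changes.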

Therefore, the important thing to learn through game theory is not the fundamental assumptions of what drives behavior. Rather, it is more important to learn how to build upon the assumptions to find the correct response to a given situation - in order to maximize whatever drives YOU.