Thursday, October 25, 2012

Regulating collaborative consumption and the sharing economy

(Been three years since my last blog post. No, I haven't stopped writing, I've just been redistributing where I publish my stuff. Over the next few weeks, I'll try and consolidate and summarize some of my posts and opinions from elsewhere over here.)

I've recently gotten interested in the regulatory issues facing "sharing economy" companies like Airbnb, Uber and Coursera (and TaskRabbit, Couchsurfer, GetAround and RelayRides; former NYU student Erica Swallow provided a great overview of the sharing economy in Mashable earlier this year). The digital industries have always been fascinating sandboxes for regulatory challenges -- from AT&T to IBM to Microsoft to Google. This time around, however, it seems different, because the challenges are in industries that are decidedly non-digital: hotels, taxis, rental cars. It heralds a new kind of permeation of the digital into the physical.

Earlier this week, I argued in a Wired opinion article that governments should tread carefully when going after the sharing economy, because they risk raising the question of whether their regulators are even relevant anymore. Not that all regulation is bad or unnecessary. But the need for certain rules, and the manner in which those rules are applied, changes as (i) digital institutions (like reputation systems) start to subsume some of the roles regulators were needed for, and (ii) digital technologies facilitate superior enforcement of the rules that remain.
(There's a fairly detailed discussion on these points on the Wired site, in the feedback following the article.)

There is an excellent follow-up in GigaOM by Mathew Ingram about the regulatory disruption that the sharing economy is inducing. He draws on a great blog post I had missed, by Chris Dixon, to argue that "startups like Airbnb and Uber are regulatory hacks in the sense that they are designed to do an end-run around existing industry regulations — in much the same way the early disruption in telecom was driven by startups which played fast-and-loose with the rules, and eventually forced regulatory change and became the norm."

It's going to be an interesting year; the conversation has just begun. Stay tuned...

Wednesday, March 11, 2009

If it ain't broke but obsolete, should we upgrade it?

My MBA class and I discussed the case of a successful retailer whose business model was heavily dependent on their IT infrastructure, and whose IT infrastructure was DOS-based. The class initially seemed unanimously in favor of upgrading the system to a more modern platform. However, a parallel analysis led us to conclude that the existing system supported the retailer's business model and information needs very well. There were no obvious immediate deficiencies, their IT was well aligned with their business strategy, their processes were carefully engineered, and their employees were trained and empowered. Besides, the key challenges to scaling their business were not IT-related, but had to do with operations and human resources.

The discussion ended unresolved. A good summary of what was left open might be: when a system is technologically "very old" but functions in perfect alignment with one's business model, does it make sense to rip it out and replace it in order to become technologically current, reduce vendor dependence, or create future (but non-obvious) application development options? Or should IT investment strategy always be driven by foreseeable business needs, alignment with current business strategy and the philosophy of "if it ain't broke and does what you need well, don't fix it"?

Monday, February 16, 2009

The Micropayments Mystery

It appears that my favorite print newspaper is on the verge of extinction, and its survival depends on resolving a long-standing e-commerce mystery. Time Magazine recently conjectured that future high-quality journalism will be viable only if we establish a micropayment system which allows newspapers to charge consumers per article, and if we alter consumer psychology to get them used to paying for online content.

This line of argument raises two interesting issues. First, consumers are willing to pay for "bundles" of content as long as their collective quality is vetted by an intermediary. We still buy newspapers, subscribe to magazines, and pay for cable television.  Maybe this is because the cognitive costs of assessing the quality of millions of slices of content are too high, and we'd rather let someone trusted do a first screening. 

More interestingly, we've been promised a micropayment system for over a decade now, but nobody seems to have succeeded in creating one with widespread adoption. Instead, we have trusted intermediaries like iTunes, which use a standard payment system (your credit card) to "microcharge" you for 99-cent songs and $1.99 iPhone apps. This seems to be working out rather well for Apple, which generated a couple of billion dollars worth of transactions in 2008.
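One simple reason intermediaries "microcharge" this way is that card networks levy a fixed fee per transaction, which swamps a 99-cent payment unless many purchases are batched into a single charge. Here's a minimal back-of-the-envelope sketch; the fee numbers and function names are hypothetical, chosen only to illustrate the arithmetic:

```python
FIXED_FEE = 0.30   # assumed fixed card fee per charge, in dollars
RATE = 0.029       # assumed percentage fee per charge

def card_fee(amount):
    """Fee the intermediary pays the card network for one charge."""
    return FIXED_FEE + RATE * amount

def fees_paid(purchases, batch_size):
    """Total card fees if purchases are settled in batches of batch_size."""
    total = 0.0
    for i in range(0, len(purchases), batch_size):
        batch = purchases[i:i + batch_size]
        total += card_fee(sum(batch))
    return total

songs = [0.99] * 30  # thirty 99-cent songs

# Charging each song as its own card transaction incurs thirty fixed fees,
# while aggregating everything into one periodic charge incurs just one.
per_item = fees_paid(songs, batch_size=1)
aggregated = fees_paid(songs, batch_size=30)
```

Under these assumed numbers, per-item charging costs the intermediary several times what a single aggregated charge does, which is roughly why a trusted aggregator can make sub-dollar commerce work where a stand-alone micropayment system struggles.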

Perhaps this is the future of micropayments for content -- combining trust in payments with trust in taste or filtering. If we have enough faith in the intermediary, we'll let them narrow our content for us, and trust them to charge us right. It's also loosely consistent with why eBay owns PayPal. At first glance, this seems less appealing and efficient than a stand-alone micropayment system combined with technology-based filtering and an open market for content creators. But if you factor in cognitive costs and a growing body of evidence that consumers do pay more for online reputation, placing your "taste" and transaction risk in the hands of the same trusted intermediary might not be all that inefficient.

That still doesn't solve the micropayment mystery. Why hasn't a good micropayment system emerged? The technology exists, there's lots of money to be made from being the standard, and there's certainly plenty of microcommerce languishing in its absence. There's got to be something about what consumers value in a payment system that is missing from all the efforts we've seen so far...

Thursday, February 12, 2009

Genes, Science and Technology

As human beings, we are wired by our genes to reject actions that don't sustain our survival, and to adopt actions that enhance it. However, a substantial fraction of what governs our choices today isn't determined by our genes; rather, it is (for lack of better labels) constructed by social, economic and technological forces.

This disconnect between our genetic instincts and our social, economic and technological environment seems to have changed how we evolve. Successful scientists frequently and naturally reject important evidence because they are wedded to a paradigm that rewards "normal" science, work that adds to a prior tradition. Kuhn documented this in his book about the process of human discovery, whose thesis maps best to what I've seen. In the language of genetics, we naturally avoid mutation, choosing selection and recombination instead. We choose to enhance the present rather than thinking about the future.

Similarly, the "gene" of a firm is very different from the gene of a person. And this is what makes information technologies so disruptive. Successful firms are those that are wired to select and recombine, to gather and react to what their environment (customers) tell them is important. So when the development that requires radical mutation comes along, it is the most successful firms that don't react. After all, ignoring the fringes is what makes them succeed, and most new technologies develop on these fringes. This is a lot like the scientists who contribute to a paradigm while ignoring the evidence that contradicts their theories. 

The firms that recognize that their success imposes these horse blinkers will therefore make that extra effort to "mutate" and experiment aggressively, even if it doesn't add to their short-term profits. Google, Yahoo, Microsoft (yes) and IBM are great current examples, although in different ways. I'm not sure they are doing enough; however, it is good to see some convergence between our genes and our socially constructed instincts.

Friday, April 18, 2008

A conversation with Jeff Bezos

A couple of days ago, I had the pleasure of chatting with Jeff Bezos before and after he gave an excellent talk to about 500 of our alumni. Jeff made a number of interesting (and humorous) observations, speaking on topics ranging from why Amazon experiments actively to how we've become a society of information "snackers" to his basis for spousal choice ("someone who can get me out of a third-world prison").

What made me think the most during the few minutes we chatted was his (seemingly simple) framework for making difficult decisions. Innovative companies like Amazon often have to make big decisions with little or no data. In making these choices, Jeff says he is governed by the question "what would be better for the customer?" His point was that in the long run, the interests of one's customers are perfectly aligned with the interests of one's shareholders. (This is clearly not the case when one has to manage short-term earnings.) He cited cases ranging from launching Amazon Prime to allowing customer reviews (both positive and negative) to remain on the site as examples where this framework paid off in the long run.

This observation (which seems to make more sense the more one thinks about it, although "perfectly" aligned might be a slight simplification) is an interesting one when applied to data ownership, because it implies that in making data collection and retention choices, the smartest companies might be the ones who formulate policies that are aligned most clearly with the welfare of their customers. This is a lot simpler than thinking about expected future value and liability. I'm not yet convinced, but there's something interesting here.

I still don't have a comfortable feel for why they've entered the cloud computing business, but that's a subject for a different post.

Monday, April 7, 2008

Kansas, Memphis, data: value and liability

I ended up watching the Kansas-Memphis game, the first college basketball game I've watched this year. Nevertheless, I made many predictions, most of which were wrong.

The one prediction I was most confident about was when Kansas made the 3-point shot in the last seconds, tying the game. At that point, it was absolutely clear to me that they would win the game. I had no data, no context, no history, but it didn't matter. All I had to do was look at the faces of the teams, and it was so clear who would perform better in the next 5 minutes. I didn't need historical performance data of any kind.

Companies have so much customer data these days. These data seem valuable and worth storing, even though their value isn't immediately apparent. But I wonder if we're forgetting that data in isolation might not have much marginal value anymore, and further, that firms need to understand how to associate a data trail with a conversation and a person before making a business decision. And if they don't, while they might anticipate value from mining the data in the future, perhaps they shouldn't be keeping the data, because it could end up being a liability to them. As Professor Vasant Dhar and I have discussed and written about in the past, firms may well need to rethink their "data valuation" models and strategies.

Just one dimension of a much larger discussion about how firms should manage their customer data.

Tuesday, March 25, 2008

The social role/responsibility of business

My dean Tom Cooley is a firm believer in the role of business as an agent of social change and progress. In case you are wondering why this is related to digital strategy, check out the wonderful story of ITC's eChoupal that we discussed in my MBA class today.

I'm torn about two issues on this subject:

(1) Where should a business draw the line between maximizing familiar "shareholder value" metrics and facilitating broader social transformation and progress? I respect the writings of both Milton Friedman and Ed Freeman, but their views diverge pretty radically on this front. Or do they?

(2) Is it sensible to allow a corporation to own physical and technological infrastructure essential to a nation's commerce? This is clearly a more important question for developing countries. However, before you conclude that it's only relevant to them: the Internet took off after it was freed from the shackles of DoD ownership. As a consequence, an essential and integral piece of today's U.S. commercial infrastructure is entirely owned by a handful of telecom companies, leading, for instance, to debates about net neutrality.

Anyway, food for thought. I look forward to your feedback.