27 December 2010

My blogging status

You may have noticed that blog posts have been relatively scarce here. While I expect to post again from time to time, I have been posting more on Twitter and Buzz because it is faster for me. Please feel free to follow me on Twitter (@smpl5) and Buzz (smpl5d@gmail.com).

13 October 2010

Markets in spectrum licenses

I have an abiding interest in cooperative spectrum sharing. One form of this sharing is license trading, and the volume of spectrum trades is quite impressive (see this paper). But the FCC Universal Licensing System does not contain the dollar value of spectrum transactions, which makes this article all the more interesting. According to the article,

The company is selling up to 40 megahertz of spectrum per market, a slice of its wireless capacity, one person said. A value of 20 cents to 40 cents per megahertz of spectrum per U.S. resident would reach the $2.5 billion to $5 billion price tag, Jennifer Fritzsche, an analyst at Wells Fargo Securities LLC in Chicago, said in a research note today.

The lower end of that range would be in line with sales of similar spectrum in Europe, she said. The company has an average of 120 megahertz of spectrum in each market, she said.

This is precisely the kind of outcome I expect in an unregulated market: companies that have more spectrum than they need should sell it to those that need more. What is clearly missing is public pricing information!
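The analyst's range is straightforward MHz-pop arithmetic, which a short sketch makes explicit. The population figure is my assumption (roughly the 2010 US census count); the article does not state it.

```python
# Spectrum valuation in MHz-pop terms: price per MHz per person,
# times MHz offered, times population covered.
def spectrum_value(mhz, price_per_mhz_pop, population):
    """Total valuation for a block of spectrum."""
    return mhz * price_per_mhz_pop * population

US_POP_2010 = 310e6  # assumed; approximately the 2010 US population

low = spectrum_value(40, 0.20, US_POP_2010)   # ~$2.5 billion
high = spectrum_value(40, 0.40, US_POP_2010)  # ~$5.0 billion
print(f"${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
```

Running this reproduces the $2.5 billion to $5 billion range in the quote, which is one reason public per-MHz-pop pricing would be so useful: the whole valuation hangs on that one coefficient.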

28 September 2010

AT&T Long distance network, Sept 1898

This item, an 1898 map of AT&T's long distance network, is interesting. It dates from the days before vacuum tubes, so there were no active repeaters in the network. Since Pupin did not receive his patent on "loading coils" until 1900, it is likely that there were relatively few of these in the network when the map was drawn. In addition, local exchange competition, which began in 1893, would have been in full swing, and one of AT&T's strategies was to leverage the network effects of its growing long distance network. Indeed, such a map would have been valuable in marketing the benefits of AT&T's affiliates in competitive local markets.

(Shameless plug: learn more about this in my book!)

20 September 2010

Intel and the price for quality

This item from Engadget caught my interest. As the article notes, differential charging for varying levels of quality is an idea that has been around for some time. Shapiro and Varian, in their book "Information Rules", cite a similar example involving IBM printers.

What is different and interesting to me about this case is that Intel is stepping around the value chain to monetize quality directly. In the IBM printer case, it was the integrator (IBM), which had the direct relationship with the customer, that did the quality differentiation. Here, Intel does not have the direct customer relationship (Dell, HP, etc. hold that honor), yet by selling the upgrade to consumers itself, Intel is bypassing the relationship that Dell and HP have built.

Intel has been trying to build brand awareness among consumers with their "Intel Inside" campaign for some time. This is a significant step further in that direction. I wonder what liability Intel would assume for customer support should this upgrade cause a system to fail?

16 September 2010

Razors, blades and business models

One of the classic stories that relate to business models involving compatibility standards is that of Gillette. The story goes that Gillette built a proprietary standard for safety razors and handles. To profit from this strategy, they gave away handles (or sold them at low cost), with the idea of selling higher priced razor blades on which they had a monopoly by virtue of the standard.

This, by the way, is the model that HP uses with ink jet printers ... you purchase a printer at relatively low cost (often below production cost) and HP recoups this "investment" by selling you ink cartridges above production cost ... the NY Times wrote:
H.P.’s printing group has long been one of the company’s star performers. It accounts for nearly a quarter of overall revenue. Printer ink remains one of the most expensive liquids on the planet — more valuable than expensive perfumes — providing H.P. with far higher profit margins than PCs and other types of computing hardware provide.

This came to mind because of this paper by U. Chicago law school Professor Randall Picker. In it he writes:
The actual facts of the dawn of the disposable razor blades market are quite confounding. Gillette’s 1904 patents gave it the power to block entry into the installed base of handles that it would create. While other firms could and did enter the multi-blade market with their own handles and blades, no one could produce Gillette handles or blades during the life of the patents.

From 1904-1921, Gillette could have played razors-and-blades - low-price or free handles and expensive blades - but it did not do so. Gillette set a high price for its handle - high as measured by the price of competing razors and the prices of other contemporaneous goods - and fought to maintain those high prices during the life of the patents. For whatever it is worth, the firm understood to have invented razors-and-blades as a business strategy did not play that strategy at the point that it was best situated to do so.

It was at the point of the expiration of the 1904 patents that Gillette started to play something like razors-and-blades, though the actual facts are much more interesting than that. Before the expiration of the 1904 patents, the multi-blade market was segmented, with Gillette occupying the high end with razor sets listing at $5.00 and other brands such as Ever-Ready and Gem Junior occupying the low-end with sets listing at $1.00.

Given Gillette’s high handle prices, it had to fear entry in handles, but it had a solution to that entry: it dropped its handle prices to match those of its multi-blade competitors. And Gillette simultaneously introduced a new patented razor handle sold at its traditional high price point. Gillette was now selling a product line, with the old-style Gillette priced to compete at the low-end and the new Gillette occupying the high end. Gillette foreclosed low-end entry by doing it itself and yet it also offered an upgrade path with the new handle.

But what of the blades? Gillette’s pricing strategy for blades showed a remarkable stickiness, indeed, sticky doesn’t begin to capture it. By 1909, the Gillette list price for a dozen blades was $1 and Gillette maintained that price until 1924, though there clearly was discounting off of list as Sears sold for around 80 cents during most of that time. In 1924, Gillette reduced the number of blades from 12 to 10 and maintained the $1.00 list price, so a real price jump if not a nominal one. That was Gillette’s blade pricing strategy.

So another good story is shattered by facts ...

20 August 2010

Cable Map

If you have been following this blog, you know that I am interested in gathering information about undersea cables (especially costs and capacities). Thus, I found this site interesting, as it contains a comprehensive, clickable map. The sidebar contains links to each cable's home page, so it would be relatively straightforward to gather comprehensive cable data.

Also, if you're interested, I found this article over at Amazon that is related.

18 August 2010

Should Congress mandate FM receivers in all handheld devices?

This item over at Ars Technica is an interesting discussion and commentary on proposed legislation that would require all handheld devices to contain FM receivers. This has been done before with UHF television, but in that case it was to encourage the development of the then-new TV band. In this case, it is to support an existing technology that is either near the end of its life (if you follow TechDirt) or at least finding a new niche. As Ars points out, FM still has a wide listenership, but it is suffering death by a thousand cuts, from MP3s to HD Radio to the economic downturn. It is also true that adding FM may not be very costly: chips are cheap and, in fact, the FM functionality may be a software feature rather than a hardware one (depending on the nature of the device).

To me, the central question is: Is this an appropriate role for government? The argument for government mandates in the UHF case was fairly clear, as there was a first-mover (aka chicken-and-egg) problem that had to be solved. To me, the arguments for government mandates in this case are much less clear. Handheld devices could have included FM radios if consumers had demanded them (or if there was a clear business model for including them). Except for some MP3 players, FM radios do not appear to be an important adjunct to many handheld devices, so what is the public interest rationale for government action?

06 August 2010

Whatever happened to ... Verizon Wireless and their "open" network?

With much fanfare, Verizon Wireless announced a plan to open their network in November 2007 (see this). A podcast I was listening to today lamented that Google's Nexus One has become a GSM-only developer phone (despite the success of the Android platform on CDMA). This caused me to wonder about what happened to device portability on CDMA ... I have seen or read precious little about this project since then.

So was the Nov 2007 announcement a public relations stunt to appease regulators? Was implementing device portability on CDMA more difficult than they had imagined? Did consumer interest in portability simply not materialize?

05 August 2010

WU v. Bell -- Orton v. Hubbard

In doing some research for my last blog post, I came across this article, which was quite interesting. In it, Prof. Carlson describes the relationship between William Orton, who was President of Western Union in the 1870s, and Gardiner Hubbard, who was a backer (and father-in-law) of Alexander Graham Bell. The conventional wisdom is that Orton turned down the opportunity to purchase the telephone patent because he did not see its business potential. As this article points out, the situation was far more complicated and was rooted in a longer term relationship between these two men, who had vastly different visions of what the communications industry should be.

In a nutshell, Orton was heavily invested in the business model that was extraordinarily profitable to Western Union (and its shareholders) because he had a large role in building it. Hubbard, a Boston lawyer who was new to the industry, was deeply skeptical about the wisdom (from a public policy perspective) of allowing a critical infrastructure like telegraphy to reside in private hands under monopoly control. He was interested in rethinking the communications industry; for him, this meant placing telegraph stations in post offices in addition to railway stations. This, he reasoned, would make the telegraph more accessible, drive down prices, and thus allow the telegraph to be used for social as well as business purposes. He proposed that Congress fund this new network. The proposal did not succeed, largely due to the efforts of Orton, but Hubbard continued his pursuit of technologies that would make communications more accessible, which led him to support Bell. In the end, Orton's dislike of his rival of many years may have contributed to his turning down the telephone patent, though the decision made business sense in the short term.

It is ironic that the technology backed by Hubbard would end up being the next great private infrastructure monopoly. I wonder if that outcome would have Gardiner Hubbard spinning in his grave ...

To me, there are parallels here with today's broadband environment. Was Hubbard like the advocates of subsidized broadband?

Verizon + Google = Western Union + Associated Press

This report of talks between Verizon, which operates one of the largest Tier 1 ISPs, and Google, arguably the largest provider of Internet content, is eerily reminiscent of the deal between Western Union and the Associated Press. You can read the background here, but the basic story outline is as follows:

1) As early as 1846, newspapers saw the economic benefit of sharing the cost of gathering and sending news from different locations. Instead of having one reporter for each newspaper in each location, they needed only to have one reporter in each location.

2) As telegraph emerged as an important information transmission medium, newspapers saw the advantage of using this to distribute news more quickly

3) Telegraph became economically concentrated (as infrastructure industries tend to do) and the economics of newsgathering also led to concentration

4) By 1870, WU was a de facto monopoly and AP was the dominant news gathering agency

5) WU, with agents in every town, had the infrastructure to enter the newsgathering business, and AP generated enough traffic to sustain a private or even rival telegraph network.

6) Instead of entering each other's markets, they basically entered into an exclusivity and non-compete agreement. WU would not get into newsgathering, and AP would not build or use another telegraph system (see this for the gory details)
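The pooling economics in point 1 can be made concrete with a toy calculation. The numbers are entirely hypothetical; the point is only the shape of the savings.

```python
# Newsgathering cost with and without a wire service.
# Without pooling: every paper staffs every city.
# With pooling: the papers share one reporter per city.
def reporters_needed(papers, cities, pooled):
    return cities if pooled else papers * cities

solo = reporters_needed(papers=6, cities=10, pooled=False)   # 60 reporters
shared = reporters_needed(papers=6, cities=10, pooled=True)  # 10 reporters
print(solo, shared)
```

Each paper's cost falls from ten reporters to one-sixth of ten, and the savings grow with every paper that joins, which is exactly the concentration dynamic described in point 3.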

Now, substitute "Verizon" for "WU" and "Google" for "AP", change the dates, and do you now get something like the NYT story cited above?

20 July 2010

Mobile apps

If you have been following the standards battle between rival mobile operating systems, you will have noticed that one tie-in that the sponsors of each mobile OS tout is the number of apps available on their respective platforms. This, along with complementary hardware, is an indicator of the size of the ecosystem, which is often a key determinant of a potential user's adoption decision (there are clearly others).

It should not come as a surprise that some observers, often advocates of the mobile OS's with the quantitatively smaller ecosystems, argue that the total number is less important than the number of useful apps. To that end, the folks over at AppBrain used the data provided by their system (a subset of Android users) to plot the popularity of apps on the Android platform as measured by the number of installs. The resulting curve looks remarkably like a power law distribution ... as it should if you follow Chris Anderson's argument in The Long Tail.
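A power law rank-install curve like the one AppBrain plotted is easy to mimic. This sketch uses synthetic Zipf-style data (the exponent and install counts are made up, not AppBrain's), and checks the two signature properties: a straight line on log-log axes, and a tail that collectively rivals the head.

```python
import math

# Synthetic stand-in for the AppBrain curve, not their data:
# installs(rank) proportional to rank^(-alpha).
def zipf_installs(n_apps, alpha, top=1_000_000):
    return [top * r ** (-alpha) for r in range(1, n_apps + 1)]

installs = zipf_installs(10_000, alpha=0.8)

# On a log-log plot a power law is a straight line whose slope is -alpha.
slope = (math.log(installs[99]) - math.log(installs[9])) / (
    math.log(100) - math.log(10))
print(round(slope, 2))  # -0.8

# The long-tail property (for alpha < 1): each hit is huge individually,
# but the combined installs beyond rank 100 exceed the top 100's total.
head, tail = sum(installs[:100]), sum(installs[100:])
print(tail > head)  # True
```

Whether the tail actually outweighs the head depends on the fitted exponent, which is precisely the empirical question behind the "useful apps" debate.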

22 June 2010

Investment and demand in fiber to the home

There are a couple of threads that, juxtaposed, are interesting. First, Australia moved ahead with its ambitious fiber-based Internet project, and second, this report gives one pause as to whether it would make sense in the US.

So is the US foolish in not making larger infrastructure investments like our trading partners, or is it wise?

The answer is not simple, and certainly not obvious, because of the enormous delay involved in rolling out infrastructure. AT&T witnessed this when it signed on as the exclusive carrier of Apple's ground-breaking iPhone. Indeed, it is arguably still trying to catch up on the investments required by the runaway success of the device.

Would the same thing happen with some as-yet unknown device (Device X), except with end user bandwidth? We don't know if Device X exists, or, if it does, what the impact on existing networks would be. The iPhone has shown us that the potential exists, but how can carriers (and their shareholders) be convinced to make large investments based on revenue streams that are highly uncertain at best? Should the government take these risks?

14 June 2010

Tiered pricing and the rise of "proxynets"

This article over at Seeking Alpha is interesting. In it, Shelly Palmer argues that:

people with more money than time pay and people with more time than money steal. Piracy will run rampant, but it will be very easy to thwart because most of the consumption devices will require monthly data plans. If a device is connected to a network and the network operator has your credit card number or billing address, you will have a hard time using it to steal content.

The arms race will continue until the rigidity of the content provider pricing and network service provider greed overwhelm consumers or until a new technology evolves.

The new technology is what he calls "ProxyNets". In this scenario, the WiFi radios on smart phones will be used in the "ad hoc mode" instead of the "infrastructure mode" and will rely on mesh networking for interconnection. If the device density is sufficiently high (that is, the probability of participating radios being in range is high enough) then it will be possible to construct an informal network at low cost outside of a formal carrier relationship. You can even imagine jumping onto the Internet for some links (via an access point) if the density is not sufficiently high. The fact that such a network might exist outside of the control of a carrier, even if the quality is low, might be a "good enough" technology that could be the start of a classic "disruptive technology".

The argument is interesting and even somewhat plausible, especially since almost all smartphones now have wifi. In my international travels, I always jump onto a wifi network with my phone for data connections, since the carrier-based rates are punishingly high.
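The density condition the ProxyNet scenario hinges on can be eyeballed with a toy Monte Carlo: drop phones uniformly on a square and ask what fraction have at least one other phone within radio range. All the numbers here are illustrative assumptions (uniform placement, a fixed circular range, no obstacles), not a real propagation model.

```python
import random

def neighbor_fraction(n_phones, radio_range, area_side=1000.0, trials=20):
    """Fraction of phones with at least one other phone in range,
    averaged over random placements (toy model only)."""
    total = 0.0
    for _ in range(trials):
        pts = [(random.uniform(0, area_side), random.uniform(0, area_side))
               for _ in range(n_phones)]
        connected = 0
        for i, (xi, yi) in enumerate(pts):
            for j, (xj, yj) in enumerate(pts):
                if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 <= radio_range ** 2:
                    connected += 1
                    break
        total += connected / n_phones
    return total / trials

random.seed(0)
sparse = neighbor_fraction(n_phones=20, radio_range=50)   # suburban-ish density
dense = neighbor_fraction(n_phones=500, radio_range=50)   # city-center density
print(sparse < dense)  # True: viable mesh links need density
```

The steep jump between the sparse and dense cases is why the scenario is plausible in dense urban areas and implausible elsewhere, and why falling back to fixed access points for some links matters.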

08 June 2010

Are mobile "bar codes" a new standards battle?

On a recent canal tour in Amsterdam, I noticed a QR code emblazoned on the side of a canal. At the recent Google I/O meeting, participants were given T-shirts with these codes on them. The Pittsburgh Post Gazette also prints a QR code on the front page.

If you haven't interacted with these, the basic idea is that you can use the code to access information through your mobile phone. You need a code scanner/interpreter on your phone, which takes images from the phone's camera, decodes them, and then passes the decoded information to your phone's web browser, which calls up the web site via your wireless Internet connection.

Microsoft has developed a denser code, which it is trying to popularize. This code uses colors and shapes other than the black-and-white squares used by QR codes. It is not hard to conclude that the Microsoft code is in many ways superior, since it can encode more information in a given surface area.
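The density advantage is mostly about bits per printed cell: a black-and-white module carries one bit, while a module drawn from a palette of c colors carries log2(c) bits. A sketch of the raw capacity (it deliberately ignores error correction, finder patterns, and format overhead, which differ between the two code formats; the grid size is a made-up example):

```python
import math

def raw_bits(modules, colors):
    """Upper bound on data bits for a code with `modules` cells,
    each printed in one of `colors` colors. Ignores error correction
    and locator/format overhead."""
    return modules * math.log2(colors)

grid = 25 * 25  # a small, hypothetical module grid
print(raw_bits(grid, colors=2))  # 625.0 bits, black-and-white QR-style
print(raw_bits(grid, colors=8))  # 1875.0 bits, 3x denser with 8 colors
```

The tripling from eight colors is the upside; the downside, as noted above, is that colored codes can no longer be reproduced on a black-and-white press, which matters a great deal to newspaper publishers.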

On the canal tour, I started to wonder whether this is the latest incarnation of the standards battles, the most famous of which is BetaMax vs. VHS, and the most recent of which is HD-DVD vs. Blu-Ray. In this case, QR codes seem to have the early lead because of their popularity in Japan, because they can be printed in black and white, and because they have (implicitly) Google's backing. Microsoft, for its part, has given away the required mobile phone software but must now convince publishers and content providers to use its code.

Is it a standards battle? Is it one that is already over or will Microsoft's technology win the day in the end?

19 May 2010

Google's Nexus One and unsubsidized handsets

I found this item over at the Technology Liberation Front interesting and worth reading. The article points out that the primary cause of Google's failed experiment is Americans' unwillingness to give up subsidies. This may well be largely true, but it fails to capture other aspects of the American telecom environment and this particular experiment that are also contributing factors.

My last two handsets have been unsubsidized and unlocked (by choice), and yes, one of them is a Nexus One. My carrier (AT&T) does not give me a discount for using an unsubsidized handset, so I am paying the same monthly price as if my handset were subsidized. In a sense, then, I am paying twice. I chose this route not for lower cost (obviously) but for flexibility: I am free to use SIM cards of my choosing, and I have a month-to-month contract, which I also like.

The other aspect of the Google experiment is that consumers did not have the opportunity to "try before buy". The brick-and-mortar retail experience does allow this while the virtual one does not. For a complex product like a smart phone, user experience is key (a lesson that Steve Jobs taught us). User experience cannot be assessed via a web page, no matter how well designed it is.

13 May 2010

Verizon is considering licensing their 4G spectrum to rural carriers

This item is interesting. It seems that they would be engaging rural carriers as partners in building the LTE infrastructure. I get the distinct impression that the carriers would have to make the infrastructure investments. If so, Verizon is taking a page out of Western Union's playbook (yes, Western Union) by engaging partners to build infrastructure.

Update on wireless only households

The latest CDC report on wireless-only households in the US came out. Overall, the share of wireless-only households with children increased to 25.9% from 21.3%. Techdirt wondered whether this constitutes a tipping point. Except for social acceptance factors, I don't see the positive feedbacks that normally exist in dynamic systems with tipping points; instead, I think the value proposition of a landline is simply no longer persuasive.

The more interesting figure in the report is the one below, which shows that 25-year-olds are 50% likely to be wireless-only. This should be worrisome to telephone operating companies, since this is the demographic that defines their future.
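The distinction between tipping-point dynamics and plain value erosion can be illustrated with a toy adoption model. With a positive-feedback term (each cord-cutter makes cutting more attractive to others), the share follows an S-curve whose growth accelerates before it slows; without one, growth only decelerates. All parameters are illustrative, not fitted to the CDC data.

```python
def adoption_path(steps, rate, feedback):
    """Toy model of the wireless-only household share over time.
    feedback=True adds a network-effect term (growth proportional to
    share * (1 - share)); feedback=False is steady drift to saturation."""
    share, path = 0.05, []
    for _ in range(steps):
        if feedback:
            share += rate * share * (1 - share)   # logistic: tipping dynamics
        else:
            share += rate * 0.05 * (1 - share)    # no feedback: decelerating
        path.append(share)
    return path

with_fb = adoption_path(60, rate=0.3, feedback=True)
without = adoption_path(60, rate=0.3, feedback=False)

# A tipping-point signature: period-over-period growth speeds up
# before it slows, rather than declining from the start.
growth = [b - a for a, b in zip(with_fb, with_fb[1:])]
print(max(growth) > growth[0])  # True only in the feedback case
```

Looking for that acceleration in the year-over-year CDC numbers would be one way to test whether "tipping point" is the right description or whether this is just steady erosion.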

27 April 2010

Apple, Google, AT&T and Verizon

This article over at Seeking Alpha is interesting. It describes the "small numbers bargaining problem" in telecom. Quoting the article:

Verizon has already shown resistance to putting the Apple iPhone on its platform for fear that it will use tremendous amounts of data without sharing any of the third party application profits with the carrier. Now VZ is beginning to play games with Google saying that they won’t pick up the Nexus One, planned to be released spring of 2010. Google loses access to the carrier’s more than 90 million users, and seems to have stumbled for the time being in becoming a major player in the mobile handset market.

AT&T has not picked up the phone either. But AT&T has more problems than just Google, they are trapped in a mutually hated relationship with Apple now, where neither party can get rid of the other. It really is the marriage from hell. Apple doesn’t have another carrier, and AT&T can’t throw Apple off for fear that its massive amount of iPhone users will defect from its completely inferior network. So AT&T is trapped having to provide Apple with more and more bandwidth, towers and other infrastructure, as the public and media scream at AT&T to get their network up to Verizon’s standards, not to even mention how well Sprint (S) works. AT&T is for sure frustrated that they have to make these capital expenditures and see no increased profit from them, it’s like bailing out the water from a ship with a 20 foot hole in the bottom. You’re just spending energy trying to stay afloat.

Something has to give here, this can’t go on forever, and I think we are soon to see a resolution to the issue of the telecom giants paying for the network on which Apple and Google make tremendous amounts of money. Maybe the telecoms chose the nuclear option and just stop building their networks holding Apple’s feet to the fire. Steve Jobs can’t revolutionize the content distribution market without a network to do it on, and believe me, the stuff that he wants to do is going to take a lot more bandwidth than is available today. Do we really think the carriers are going to pay for that to happen? Maybe Apple will buy a carrier, or perhaps even build its own network with a next generation technology they have been developing.

I wonder to what extent the independent LTE network proposed by Harbinger (see this) will be the event that shakes things up? In many ways, the situation described above seems to be an analog of the drama being played out over network neutrality in that we have infrastructure investments required for applications that are not easily monetized by the infrastructure owners.

21 April 2010

Developments regarding LTE

There have been a few interesting things related to LTE that have emerged in the last month or so. I have been rather distracted by other things (IEEE DySPAN among them), so I have not blogged about them. On the other hand, it may be interesting to consider these developments on the whole rather than individually.

First up, this article in the NY Times notes the following:
But the superior efficiency of L.T.E. networks will tend to minimize the overall cost of transmitting voice calls, as conversations are converted into tiny packets of 1s and 0s in a process similar to that used by Internet voice services like Skype that give away some Internet voice service free.

Operators, which generate about 80 percent of their revenue from voice services, want to hinder a new downward price spiral. Revenue from termination fees makes up 15 percent of an operator’s sales and profit, said Jacques de Greling, an analyst at Natixis, a Paris bank.

Many smaller operators would rather adopt the billing regime used by Internet service providers and mobile operators in the United States, called “bill and keep,” which splits the costs of interconnection between the caller and person being called, eliminating additional costs for callers. The U.S. system has enabled flat-rate mobile plans and has promoted cellphone use.

Despite the push by large operators for an L.T.E. standard that would preserve their lucrative billing status quo, it is uncertain whether they will be able to extend the European system of termination rates into the L.T.E. era.

But last year, the European Commission, seeking to encourage the greater use of mobile technology, raised pressure on the operators by adopting rules that would require countries to develop a uniform method of calculating an operator’s costs for delivering voice service when determining the termination rates charged in a country. The new rules are expected to reduce E.U. termination rates from a current average of 7 cents a minute to less than 2 cents by 2012.

In Brussels, an advisory council made up of European national telecom regulators is scheduled to consider a plan in May to switch the European regime from termination rates to one akin to the U.S. system. Most operators remain opposed to the change. But given the rapid decline in mobile termination rates, such a changeover may be superfluous.

Then there is this item about the private equity firm Harbinger, which plans to build a nationwide LTE network in the US. The cited article (from GigaOm) does a bit of analysis of the announcement. Such a network, unaffiliated with any major carrier, could serve as a "wholesaler" to MVNOs, which could pose an interesting challenge to the competitive landscape of mobile in the US. It could also serve as "overflow" capacity for the existing carriers, enabling them to roll out a footprint more quickly than if they relied exclusively on their own investment and deployment resources. On the other hand, it has been pointed out that this might be a "smoke and mirrors" strategy to boost the value of the spectrum controlled by Harbinger in advance of a potential sale.

AT&T and the Internet of Things

This item over at GigaOm is interesting. To me, beyond the numbers, the most interesting observation was this:
The irony here is that M2M connectivity in many ways represents the dumb pipe future that AT&T is so worried about — it’s not providing anything to its partners but the bits. On the call, AT&T executives explained that the number of bits sent via the network are high-margin bits and the machine-to-machine clients have very low churn.

Customer acquisition costs in wireless are quite high, so low churn is a rational cost-management strategy. And the observation is right that M2M clients would not turn over very fast, if at all.

26 March 2010

Android and standards sponsorship

The story of Google's "sponsorship" of Android has been making the rounds on various mobile phone and gadget websites in the past days. This story appears to be the source. The essence of the story is this:

In just 18 months, the number of Google (NSDQ: GOOG) Android phones being shipped has soared to 60,000 a day, and over that period countless new devices have been released by handset makers for sale by carriers worldwide.

Nothing typically moves this fast in wireless. So how has Google done it?

Well, at least part of the answer appears to be that Google is sharing advertising revenues with carriers that use Android, according to multiple sources who are familiar with the deals. In some cases, sources said, Google is also cutting deals with the handset makers. The revenue-sharing agreements only occur when the handsets come with Google applications, like search, maps and gmail, since that is not a requirement of Android. Google declined to comment, and said terms of its agreements with partners are confidential.

When researchers began studying standards in the 1985-1995 timeframe (see, for example, papers by me, Joe Farrell, Garth Saloner, Michael Katz, Carl Shapiro, Marvin Sirbu, Michael Spring and others), we focused on understanding the market dynamics of standards wars -- why did one standard dominate the market? How did markets prone to standardization behave? Why do firms use the committee process, and what is effective in this process?

Mobile phone operating systems for smart phones arguably represent a market prone to standardization because of the application ecosystem demanded by users and because of the operating efficiencies demanded by carriers. So why is Android succeeding where WebOS is struggling? One significant answer is sponsorship. The endorsement of key handset manufacturers encourages carriers to adopt the handsets because of operating efficiencies; however, that is perhaps a weaker form of sponsorship. Google's more overt sponsorship is far more compelling because it results in direct revenues for carriers, which would account for the more enthusiastic reception of Android. In contrast, Palm has arguably a superior operating system in WebOS (also Linux-based) but does not have the resources to credibly sponsor its OS. As a result, its system and phones have received an embrace from carriers that is far more lukewarm than Android's; indeed, it is perhaps only because of its compelling OS that Palm is getting any interest at all! Its Treo Pro phone, based on Windows Mobile, was treated with much less interest than its WebOS phones.

It is nice when reality lines up with theory!

22 March 2010

Lock-in and smart phones

This item over at Ars Technica is interesting. The article itself is an analysis of Palm's WebOS and why it failed to attain economic success. While many articles have been written about this (and even though the reports of Palm's death may be premature), I found this section of the article most interesting:
In at least one way, Palm's webOS was a victim of its own success. I ultimately found that the Pre's thoroughgoing cloud orientation, which I made a big deal about in my review of the device, meant that there was nothing tying me to Palm. And unfortunately for Palm, the Pre had to compete with client platforms from companies that also provided important cloud services, with the result that those providers could privilege their own clients and thereby gain a certain amount of vendor lock-in.

Three examples serve to illustrate my point that it was too easy to leave Palm, and that Palm's decision to be solely a client platform put it at a permanent, structural disadvantage against competition that offered both clients and services.

Back in January of this year, after unboxing my new, Google-provided Nexus One, I logged into a few accounts (Google, Facebook, Twitter, Evernote, Amazon) and voila—I had all my data on this new phone, along with my existing phone number (courtesy of Google Voice). Think about that for a moment: I ported all of my data and my phone number to a new phone on a new carrier, without so much as swapping a SIM card or calling a customer service number. And every cloud service that I used on Pre was immediately available on the Nexus One.

So it was that on the day that I got the Nexus One, I dropped my Pre in a drawer and didn't turn it on again for two months, all without missing a single message.

Leaving the iPhone was a lot harder, because—and this is my second example—I was in the habit of syncing my music with iTunes. iTunes syncing was the main way that Apple got its hooks into me, and everyone who followed the Apple vs. Palm battle over iTunes and Pre sync support knows that Apple guarded that key source of lock-in jealously.


In the end, Palm's fundamental problem wasn't the lack of reliable, first-party Google Voice or iTunes support, but the fact that Palm itself never offered a similarly essential service that it could use to lock users into the Pre. Google had Voice, Apple had iTunes, and Palm had nothing. The Pre didn't have first-party support for anything that I depended on or enjoyed, so it had no hold on me at all.

By not having any way at all to lock the user into webOS, Palm ended up betting the farm on the proposition that the Pre could and would deliver an overwhelmingly superior mobile client experience for a common slate of cross-platform cloud services. In other words, because Palm doesn't own any part of a user's data or identity, the user experience is the only thing that could tie a user to webOS.

There is economic research that supports the notion that lock-in is good and desirable for companies; in fact, companies work hard to achieve lock-in to avoid being commoditized. So, as we migrate to cloud-based services, how do firms and carriers create compelling lock-in for consumers? Is this a good thing for consumers?

One of the interesting things for me is that Google, with its Nexus One and Android experience, is trying to drive the value to the cloud through a device that works hard NOT to provide lock-in (whether contractual or via proprietary standards), while Apple is trying to create as much lock-in as possible. This is no surprise, given their rather substantially different business models.

15 March 2010

iPhone usage and AT&T's network

This item over at GigaOm is interesting:
iPhone users tend to use their devices in the evening and on the weekends, reports Localytics, a Cambridge, MA-based start-up offering mobile analytics services. According to a study conducted by the company, mobile app usage in the US peaks at around 9 pm EST on weekdays. Over the weekend, usage is at its peak during afternoons and nights.

Since these times correspond (historically) with lower voice traffic, why are there so many complaints about AT&T's network? Could it be that their network is only part of the problem (see this earlier post)?

12 March 2010

Verizon FiOS buildout

This story is interesting:
They [Verizon] have now canceled planned FiOS deployments for all new territories such as Alexandria, Virginia. According to Bryant Ruiz Switzky in the Washington Business Journal, Verizon is "suspending Fios franchise expansion nationwide." They are "indefinitely postponing" building Alexandria after telling the city they would begin construction several months ago. Alexandria is one of the richest suburbs in the world and a natural part of the network with a lower than average likely construction cost. Verizon "will now focus on installing its network and gaining market share within the areas where it already has agreements." Bostonians and 10M other Verizon customers are apparently screwed.

Verizon has buildout commitments to New York and other cities that will keep some crews working, but had already suggested they might cut FiOS builds by 2/3rds in 2011. This is now a further cutback, canceling areas that for years they had been promising to serve. Verizon's Harry Mitchell sends their perspective. "The bottom line is that Verizon said in 2004 we’d build to pass about 18 million homes by year-end 2010, and we’re on track to do that with the franchises we currently have. Of course, we will also meet any buildout commitments we made in individual jurisdictions beyond 2010."

The article goes on to speculate that Verizon is hoping to get Federal support for this buildout under the broadband plan. If this is the case, then it is a classic illustration of the "moral hazard" of government interventions in markets. Why should a company take private risks when public funding is available?

But this may not be the only explanation. Others have speculated that the business case for FiOS (and similar systems) is weak to begin with. If this is the case, then Verizon's actions are rational.

04 March 2010

Google, Microsoft and the smart grid

I have been wondering why Google and Microsoft have been working hard to enter the "smart grid" market. This article over at Seeking Alpha provides the best explanation I have seen so far:

Google, Microsoft (MSFT), Intel (INTC) and others have all launched efforts to control how consumers and businesses monitor and analyze their energy consumption. Why the rush? Neither Google nor Microsoft will charge consumers for PowerMeter or Hohm. However, advertisers will likely pay both companies for the opportunity to hawk energy efficiency services and other energy-related products to consumers who use their respective consoles. Both companies will also mine the data (after it has been scrubbed to protect privacy) and sell it to utilities. Don't worry about a loss of dignity or privacy: you'll get a coupon for ten percent off on a new set of storm windows.

18 February 2010

Demand for spectrum

In the end, what may drive the development of dynamic spectrum access (DSA) technologies (such as cognitive radio) is the demand for spectrum driven by mobile uses. GigaOm has this item that lays out the case as succinctly as I have seen (though he doesn't mention DSA). DSA can allow carriers to tap into idle spectrum to relieve temporary capacity shortfalls.
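To make the intuition concrete, here is a toy sketch of the DSA idea: a secondary user senses a licensed channel each time slot and transmits only when the licensee is idle. The duty cycle and the assumption of perfect sensing are purely illustrative, not drawn from any real deployment or standard.

```python
import random

# Toy slotted model of dynamic spectrum access (DSA).
# Assumption: the primary licensee occupies each slot independently
# with probability PRIMARY_DUTY_CYCLE, and the secondary user has
# perfect spectrum sensing (no missed detections, no false alarms).

random.seed(42)  # fixed seed so the run is reproducible

SLOTS = 10_000
PRIMARY_DUTY_CYCLE = 0.6  # fraction of slots the licensee uses

idle_slots = 0
secondary_slots = 0
for _ in range(SLOTS):
    primary_busy = random.random() < PRIMARY_DUTY_CYCLE
    if not primary_busy:
        idle_slots += 1
        secondary_slots += 1  # opportunistically reuse the idle slot

print(f"idle fraction of spectrum: {idle_slots / SLOTS:.2f}")
print(f"capacity recovered by DSA: {secondary_slots / SLOTS:.2%}")
```

Even this crude model shows the point: whenever the licensee's duty cycle is below 100%, there is idle capacity that opportunistic access can recover without displacing the primary user. Real systems, of course, must contend with imperfect sensing and interference constraints.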

10 February 2010

LTE vs. WiMAX: The next standards rivalry?

Standards rivalries have always fascinated me. The dynamics are quite surprising sometimes. Until I saw this item over at GigaOm today, I thought that LTE had won the battle for the next generation mobile standard. What this article points out is that this may well be true for the industrialized world, but it is far from the case globally. So are we not going to converge on a single mobile air interface (to facilitate roaming) this generation?

08 February 2010

Smartphone statistics

This article from the financial blog "Seeking Alpha" does a careful job of analyzing smartphone market data. What Sidahmed shows in this article is that the interpretation of results is very much in the eye of the beholder. While this is not news to people who have studied statistics, I think this article is a great example of how to do a more careful job in analysis.

02 February 2010

YouTube and IPv6

This article over at Network World is interesting. Quoting the article:
"IPv6 traffic came into ISPs from all over the world when Google turned up its IPv6 traffic on YouTube," Levy says. "IPv6 is being supported at many different Google data centers. We're talking about a traffic spike that is 30-to-1 type ratios. In other words, 30 times more IPv6 traffic is coming out of Google's data centers than before."

That is some spike! It will be interesting to see if this application (along with the other IPv6-enabled Google services) will be enough to build momentum for a large-scale conversion to IPv6. This is a classic case of standards transition, and the dynamics will be worth watching.

12 January 2010

The cost of customer service

This report on customer service problems Google is facing with its Nexus One phone is a case study on how success in one arena (software services) does not always translate well into another arena. Customer support is expensive. I remember reading cost studies of ISPs and telephone carriers in the 1990s that indicated that customer service accounts for 35-50% of total costs. What was memorable to me in those studies was that providing customer service was more expensive than operating the networks that the customers used! It seems that Google is learning this lesson now.

05 January 2010

Standards rivalries

I have been interested in standards rivalries for a long time, so this article over at Ars Technica caught my attention (see this search on HD-DVD vs. Blu-ray, for example). There are more nuanced analyses of both the Betamax vs. VHS and HD-DVD vs. Blu-ray rivalries, but I think the most interesting contribution of the Ars Technica article is the prediction of a shift from hardware rivalries to content distribution rivalries as value moves from hardware to software. I think they have an interesting point, one that the economic literature on standards has not addressed explicitly (to my knowledge).