
October 29, 2001

When It Comes to Pricing Software, the Greener Grass Is Hard to Find

“Round and round
What comes around goes around.”
     - Ratt, “Round and Round”

Despite the fact that the software industry is careening towards its fiftieth birthday, in many ways it looks like an industry that has not quite matured – one that is still finding its way in terms of business model and pricing. While Microsoft seeks to move corporate users to a subscription model for Microsoft Office, many ASPs (Application Service Providers) are working frantically to return to a pricing model where more cash is collected up-front – i.e., “away” from the subscription model. How can an industry so old have such schizophrenia about something as simple as a pricing model?

From the beginning, the software industry has had one key characteristic distinguishing it from all other businesses – variable costs are at or near zero. Economic theory suggests that in order to maximize profits, you want to pick a price whereby marginal revenue equals marginal cost. However, with marginal cost always equal to zero, this formula obviously breaks down.

Different types of software companies have used different approaches and theories. At the high end, enterprise-software strategists say to price as “high as you can” to reap the maximum profit and to help offset direct sales costs. On the other hand, companies like Microsoft favor entering the market at a low price with the objective of taking a huge portion of market share. If your R&D dollars are spread across the most customers, no one else can afford to keep up.

Prior to 1995, most enterprise-software companies followed a pretty consistent pricing strategy. Charge as much up-front as you can for the software – typically a number north of $250K, needed to justify direct sales costs if this is the chosen sales model. On top of this, the customer is asked to pay about 18% of the original purchase price in “maintenance,” which basically covers customer support and access to minor upgrades of the product. Then, every three or four years, the vendor will release a “major upgrade” which requires all customers to revisit the big-ticket investment again.

In the mid-1990’s, this model started to show signs of wear, and most enterprise-software companies found themselves in an awkward position. Each quarter, the company’s sales force would work as hard as they could to close as many customers as possible in these mega-software sales that were fast approaching $1 million per deal. However, when the quarter ended, the company had to start again from ground zero, and the entire game began anew. As customers caught wind of the game, many began delaying purchasing until the very end of the quarter, when the vendor was most eager to close a deal. As such, the monthly allocation of revenue in a typical enterprise-software company across a quarter could be as lopsided as 10%, 10%, 80%, with the majority of revenue being closed in the last two weeks of the quarter.

This model is not for the faint of heart, and as such, many stressed-out CEOs began to search for a new model that might alleviate the end-of-quarter rush and the ridiculous amount of uncertainty inherent in such a model. About this same time, the rise of the Internet gave birth to the idea of an ASP – a model where software would be delivered as a service over the web, and customers would “subscribe” to the software. Analysts raved about the genius of the idea. With this model, the customer would pay an incremental fee each month, thereby eliminating the “start from zero” sales game inherent in the license model. Assuming no loss of customers, the revenue from last quarter is already booked for this quarter – all new sales theoretically represent incremental growth.

Alas, the grass is indeed greener on the other side. For all the theoretical advantages of the subscription model, one key challenge makes it extremely difficult to execute. Let’s assume I have a small software company that sells enterprise software the old-fashioned way for a $1MM base license and 18% maintenance. With this model, the company will book and collect cash flow of $1MM in year one. Now let’s take the amount this customer would spend over 3 years ($1.36MM) and spread it over 36 months in a subscription model. If the company closes 10 accounts in year one, spread evenly across the year, the recorded revenue and collected cash flow for year one will only be $2.26MM, compared with $10MM in the old model. This is why many ASP players backed off their original pitch and are attempting to sell traditional licenses.

The problem, you see, is capital availability. If you ever make it to break-even, then the subscription model is clearly preferred. However, the capital needed to grow such a model is tremendous, as the customer payments have been pushed out – i.e., the startup is providing vendor financing. When the ASP model began to buzz, many of the enterprise-software vendors did this math, and criticized the model as “unobtainable.”

Ironically, the difficult economy has created a situation where the customer seems to prefer the subscription model. Capital budgets have been cut, and everyone would prefer to buy by the drink instead of in one up-front lump payment. This has caused even the licensed software vendors to enter into financing agreements whereby the customer pays out over a period of time instead of up-front. Once again, schizophrenia is the only consistent theme.

So what’s the best model? Perhaps it’s a blend of the two. Recognize revenue on a subscription basis, but try to collect as much of the cash flow up-front as possible. This will give you a conservative buffer against the trials and tribulations of the license model, but at the same time will not leave you starved for capital to run the business along the way. Of course, this model will require enormous patience to reach accounting profitability, but in the long run (forgive me, Mr. Keynes), you will be much better off.
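The blend described above is essentially deferred-revenue accounting: cash arrives at signing, but the income statement recognizes it ratably over the contract term. A minimal sketch, with illustrative numbers (not from the post):

```python
# Blended model: collect the full contract up-front, recognize ratably.

CONTRACT_VALUE = 1_360_000   # hypothetical 3-year deal, paid at signing
TERM_MONTHS = 36

monthly_revenue = CONTRACT_VALUE / TERM_MONTHS

# At signing: all cash is in hand, but it sits as deferred revenue
# (a balance-sheet liability), not as recognized revenue.
cash = CONTRACT_VALUE
deferred = float(CONTRACT_VALUE)
recognized = 0.0

# Each month, one slice of the deferral converts to recognized revenue.
for _ in range(12):
    deferred -= monthly_revenue
    recognized += monthly_revenue

# After year one: the company has had the full cash since day one,
# yet the income statement shows only 12/36 of the contract.
print(f"Cash collected:      ${cash:,.0f}")
print(f"Revenue recognized:  ${recognized:,.0f}")
print(f"Deferred (liability): ${deferred:,.0f}")
```

This is why the text warns that accounting profitability takes patience: the cash position is strong long before the recognized revenue catches up.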

Posted by Bill Gurley on October 29, 2001 at 08:00 AM

October 01, 2001

Tapping the Internet

“Don’t believe what I saw.
A hundred million bottles washed up on the shore.”
     - The Police

In the weeks following the World Trade Center tragedy, many government officials actively lobbied for increased Internet surveillance as a method of restricting terrorist activity. This is likely the direct result of numerous reports that Osama Bin Laden and his many supporters are heavy users of the Internet for organizational and informational purposes. From the floor of the Senate, Senator Judd Gregg of New Hampshire called for “a global prohibition on encryption products without backdoors for government surveillance”. Also, many large ISPs, including AOL, Earthlink, and @Home, have reported that the FBI approached them after the tragedy and served them with Foreign Intelligence Surveillance Act (FISA) orders to search for possible communications that may have aided the attacks in New York and Washington.

Protection of Freedom? This type of activity sends shivers down the spines of many pro-privacy technology activists. It should be noted, however, that these outspoken and knowledgeable people are not pro-terrorist. In fact, many are terribly disturbed by the terrorist action. That said, they do not believe that you can protect freedom through the process of restricting or destroying it. As ammunition, they are quick to quote Constitutional contributor Benjamin Franklin: “They that give up essential liberty to obtain temporary safety, deserve neither liberty nor safety.”

Setting aside these strong-minded, civil-liberties-based perspectives, a closer look at Internet surveillance uncovers many problems in both implementation and potential effectiveness. For starters, there is a huge predicament in just how much of the genie is already out of the bottle. So-called “strong” encryption techniques (those that are nearly impossible to decipher) are broadly available on the Internet. Moreover, these “programs” are cataloged and archived in many forms – software executables, source code listings, and simple algorithms that describe the general concepts. Just as importantly, many of these algorithms have been developed outside the United States.

Another perhaps disturbing but real development is the increased use and availability of steganography – the act of embedding or hiding a message in another transport. Several programs on the Internet, many of them shareware and free to download, make it easy to embed one file in another. Typically the transport file (that which hides) is a large, dense file type such as a JPEG photo or an MP3 file. Interestingly, these encoding techniques are so slick that the resulting file is indistinguishable from the original to the human eye (JPEG) or ear (MP3). As a result of this “conversion,” a covert communication may appear as innocent as two parties sharing a Britney Spears song over the Internet. USA Today has reported that Osama Bin Laden and his followers are heavy users of steganography.
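As a concrete toy illustration of the embedding idea, here is a minimal least-significant-bit (LSB) scheme over raw bytes. Real tools use encodings suited to compressed formats like JPEG and MP3, but the principle is the same: each carrier byte changes by at most one, an imperceptible tweak.

```python
# Toy LSB steganography: hide a message in the lowest bit of each byte.

def embed(carrier: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least significant bit of each carrier byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def extract(carrier: bytearray, length: int) -> bytes:
    """Recover `length` bytes hidden by embed()."""
    bits = [b & 1 for b in carrier[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8]))
        for n in range(0, len(bits), 8)
    )

# A bytearray stands in for raw pixel or audio sample data.
pixels = bytearray(range(256)) * 4
stego = embed(pixels, b"meet at dawn")
assert extract(stego, 12) == b"meet at dawn"
```

Because no byte moves by more than one step, the carrier looks statistically almost identical to the original, which is exactly what makes this class of technique so hard to detect at scale.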

As mentioned earlier, Senator Gregg has suggested that we implement a “global prohibition on encryption products without backdoors for government surveillance”. This type of proposition has many difficulties once you look under the covers:

Whom do we trust? We can’t get a majority of leading countries to join a coalition against terrorism, and we think we can line everyone up in an organized assault on encryption? Many countries have much stronger perspectives on personal privacy and are therefore unlikely to participate. Other less industrialized countries are going to have a hard time considering this a relevant priority. More importantly, how will we implement the dissemination of government keys? Do we trust all governments that join the effort? Who gets to see cross border communication?

Outlaw a t-shirt? Many in the scientific community have pointed out the silliness of outlawing an algorithm (basically a flow chart of how the code works). First, any good programmer can convert a detailed algorithm into software code, and as such the algorithm (or formula) is the tersest representation of the offending material. Second, these algorithms are everywhere. They’re on the Internet, they’re on hard drives all over the world, they’re in books, and they have even been printed on t-shirts to highlight the free-speech implications of such an attempted prohibition. There is absolutely no way to rein in all the copies of these ideas, or to restrict their trade amongst those determined to continue it. It’s like trying to outlaw the story of “The Three Bears” – too many people already know it at this point.

A sauna in the desert. Once again, Senator Gregg wants encryption software makers to implement government backdoors in their products. The only people I know that actually use encryption products are those that hate, loathe, or at the very least mistrust the government. Government issued encryption programs will see about as much use as a sauna in the desert. They might as well put a sticker on the box that says “don’t buy me”. This would be a colossal waste of time.

Not so intelligent. Many have suggested that the terrorists are “more intelligent than you think” due to their clever use of these technologies. Another Senator, Jon Kyl of Arizona, has commented frequently on the “sophistication” of the terrorists for this very reason. This presumed intelligence may be more a product of the accusers’ own ignorance than of the terrorists’ aptitude. This stuff is ridiculously easy to obtain. Go to www.google.com, type “steganography program,” and start downloading. You will be able to put an email message into a family photograph within five minutes. You must know the magnitude of the problem you are trying to solve.

“Your hands can’t hit what your eyes can’t see.” Muhammad Ali used this quote to refer to his lightning-fast hands, but the same statement is true for messages embedded using steganography. How will the government identify potentially hazardous communications if every photo, music, and video file on the Internet is an unidentifiable transport? And even if you found the transport and decoded it, the message could still be encrypted using “strong encryption.” Seems impossible. It probably is.

One “big” haystack. There are an increasing number of ways to move files on the Internet – to name a few: email, FTP, instant messenger, chat, file lockers, Napster, and Gnutella. In the next few years, the number of emails and instant messages sent each year will be measured in the trillions (for each). Peer-to-peer file transfers will easily number in the billions. How do you monitor all of this? Where could you even store the log data? The needle is small, the haystack is large, and astute cryptographers can use steganography to increase the size of the haystack.

The government should not give up on computer surveillance. In fact, as a tool used to track down a particular offender after isolation and identification, these technologies can be extremely effective. However, we should not be unrealistic about what type of “magic” spy technologies are at our disposal. Otherwise, we are only going to spend a lot of money, waste a lot of time, and create a false sense of security.

Posted by Bill Gurley on October 1, 2001 at 08:00 AM

DISCLOSURE: The information contained in Above the Crowd has been obtained from sources believed to be reliable but is not necessarily complete, and its accuracy cannot be guaranteed. Any opinions expressed herein are subject to change without notice. The author is a general partner of Benchmark Capital, a venture capital firm in Menlo Park, Calif. Benchmark Capital and its affiliated companies and/or individuals may have economic interests in the companies discussed herein. © J. William Gurley 2005-2006. All rights reserved.