Apple Inc. is the world’s most valuable corporation, with a market capitalization of $566 billion as of October 2012—greater than that of Microsoft and Google combined. Apple’s ability to delight millions of consumers with well-crafted, fun-to-use, life-enhancing devices accounts, in large part, for why the company has done so well.

But Apple doesn’t merely create superlatively innovative devices; it also allows just about anyone with an Apple Macintosh computer to design, write, and sell all sorts of applications that dramatically extend the functionality of those devices.

In the summer of 2008, Apple introduced its iTunes App Store online, where owners of iPhones and iPod Touches could view, purchase, and download small applications (“apps”) by means of the devices themselves. The “App Store” is now a household phrase, and 700,000 apps are available for the 400 million Apple devices purchased so far. Apple didn’t invent the software market, but with its new, highly profitable way to sell software, it built a better one.

Ever since Apple introduced this new marvel, the company’s competitors—Microsoft, Google, and Amazon—have been scrambling to keep up. This high-stakes scramble provides a glimpse into the remarkable, life-serving power of capitalism.

Apple’s App Store innovation not only rained wealth on the company and values on its customers; it also encouraged Apple’s competitors to exploit a plethora of new opportunities made possible by the innovation, and thus increase their own innovation, their own product offerings, their own profits.

Likewise for smartphones. Although Apple did not invent these, either, the company’s iPhone and its sister, the iPod Touch, were breakthrough handheld devices. People on the go could now carry what were essentially miniature computers capable of performing an unprecedented number of tasks, and these devices were sold at prices that millions of people could afford. These devices also expanded the possibilities for apps for mobile devices—possibilities that Apple proceeded to turn into actualities.

Initially Apple permitted only its own apps on the two devices. But the company soon realized that the potential for products and revenue would expand exponentially if it permitted independent developers to create software for them as well. Apple’s experience with the desktop computer—both positive (Apple II) and negative (Macintosh)—had demonstrated that the more apps there were for its hardware, the more hardware Apple would sell. But how could Apple ensure that the apps were of sufficiently high quality and value?

To understand how Apple answered this question, it’s helpful to recall how software applications were delivered over the preceding three decades—namely, by means of the proprietary, distributorship, and shareware/freeware approaches.

Most business and entertainment software involved the proprietary approach: Major corporations published software, with list prices ranging from about $40 for a game to hundreds of dollars for productivity software (such as Microsoft Office). Such software was written to disk, packaged with documentation, shrink-wrapped, shipped, and sold through independent storefronts such as Best Buy. Thus, the creation and distribution process required armies of developers, designers, architects, artists, shippers, and retailers.

To enter the market and successfully compete in the context of this proprietary approach, independent developers needed a stellar product concept or prototype and a lot of capital. And, once a company had proven successful, it was often purchased by a larger corporation, such as Microsoft or Apple, and its product line incorporated into the acquirer’s.

Under the distributorship approach, the developer created the software, packaging, and documentation all on his own, then tried to find a distributor to sell the product. The distributor, in turn, sent catalogs to lists of potential buyers, handled the financial details, shipped the product, and took a cut of the sale price (typically between 20 and 50 percent). With the introduction of the World Wide Web, distributors had a powerful new marketing channel, which aided both them and the developers.

In the third, relatively fringe approach, involving “freeware” or “shareware,” independent developers wrote applications and provided an online location where users—often other developers—could download the software. Freeware was available at no charge, although developers often encouraged users to contribute a small amount if they found the software useful. Such optional contributions compensated the developer somewhat and gave him an incentive to improve the product. Shareware was likewise distributed for free, but the developer requested that the user purchase a license to use the software beyond a trial period. Because users often neglected to pay for the license, many developers distributed shareware (alternatively called “trialware” or “demoware”) that operated with limited functionality until the user paid. With the World Wide Web, the freeware/shareware market expanded to some extent. However, because (among other deficiencies) it was difficult to monetize and offered no standard user interface across the sites distributing such products, this approach (although still widely used) captured only a small fraction of the software market.

Then, at the beginning of the 21st century, came the smartphone, and, with it, great opportunities for further innovation. The smartphone—essentially a small computer—could do far more than manage contact lists or calendars; it could run software applications. Independent developers quickly realized the opportunity to create applications that would enhance the smartphone’s features and capabilities.

Research In Motion’s BlackBerry became the first smartphone capable of sending and receiving e-mail, accessing the Web and internal corporate networks, and managing contacts in new, more integrated ways. Developers created programs for the BlackBerry that markedly enhanced its users’ productivity. But these applications were still sold and delivered through the old channels, which severely limited the value of this new technology. The power of the desktop computer was moving onto handheld platforms, but the means of selling software for handheld devices remained relatively archaic.

Apple changed that. In July 2008, less than a year after the first iPhone was sold, Apple added a new component to its iTunes Store: the App Store. No other smartphone manufacturer provided its customers with an online store—accessible from the smartphone itself—that enabled users to browse, search for, purchase, and download applications.

Apple further changed the way developers market and sell their software. For an annual fee of merely $99, Apple licensed a developer to write an app on a Macintosh computer (available for as little as $600) using free Apple development software, test it, and upload it to Apple for acceptance testing. Once the app passed Apple’s acceptance tests, it was placed in the online App Store and made available for purchase and download by any of the millions of iPhone and iPod Touch users.

Developers were responsible for writing their own marketing pitch, providing screen shots, and, if their app came with a price tag, providing Apple with their bank account information. (In some cases, developers charged nothing for an app in order to garner a following; then, once established, they could monetize the app with upgrades or in other ways, or create and sell new apps to their already happy users.) But, apart from that work, Apple took it from there. Apple handled the financial transactions (all purchases were made using a credit card) as well as currency conversions. Apple hosted the apps on its own servers, provided up-to-date sales information, notified users of updates, and paid developers monthly. And Apple took responsibility for ensuring that its virtual store was safe and secure for users to do their shopping. In return for all of this, Apple took a 30 percent cut of the purchase price. The same cut applied to any purchases a user made from within an app, such as buying extra “lives” in a game. In addition, developers could display ads in their apps and thus generate even more revenue.

Apple also provided libraries of code to support the creation of apps (including in-app purchases and advertising) and plenty of example code—all for free. The burden of finding a publisher or distributor to sell an app, or of handling all the marketing minutiae oneself, was now gone.

Apple instituted tiered pricing for apps, starting at free, then $0.99, and increasing in $1 increments from there. Most apps available at the App Store were free, with the next-largest number priced at $0.99—a nearly insignificant price to the millions of people who owned smartphones. Apple also allowed consumers to post reviews of the apps they purchased, enabling prospective buyers to hear from existing users before making a purchase. Developers could peruse these reviews to discover what users liked, disliked, and wanted to see in future versions and new products.

So, for about $1,000 plus the time needed to write the code, an independent developer could create an app, test it, and sell it through Apple. No knocking on doors of publishers or distributors. No setting up a website, or purchasing software for shopping carts, or processing credit card transactions. No need to maintain servers, or to constantly guard against malicious attacks. Imagination, ambition, and persistence were the only limits to a developer’s success.

Consequently, Apple inspired its competitors to create their own online app stores. Google now has Google Play (formerly the Android Market), and Amazon has its Appstore for Android (and there are many other Android app marketplaces as well). Other app stores include Microsoft’s Windows Phone Store, Research In Motion’s BlackBerry App World, and Nokia’s Ovi Store.

As of this writing, Apple and Google each offer roughly 750,000 apps in their stores, and Microsoft offers more than 100,000 apps to users of Windows Phones. In January 2011, Apple opened its Mac App Store, enabling developers to create and sell desktop computer apps the same way. Microsoft followed suit with the Windows Store.

Each online marketplace has its own rules for determining which apps it offers. Apple has the most stringent guidelines, intended to preserve its brand and reputation for providing unmatched quality and value. Google has few restrictions; Microsoft and Amazon are closer to Apple than Google in their policies. And an app rejected by Apple could well be accepted by Google, Microsoft, or Amazon.

All told, Apple has dramatically changed the software world. Not only has it created spectacular profit for itself and enormous value for its customers; it has enabled tens of thousands of developers to compete on multiple playing fields with relative ease and to profit handsomely from their efforts; it has inspired its own competitors to rise to higher levels of productivity, create better products, and make more money.

In sum, Apple has delivered an abundance of fabulous products, inspired many more, and created massive wealth that enhances millions—if not billions—of human lives every day.

This is capitalism in action.
