

A16Z Podcast: Holy Non Sequiturs, Batman! — What Disruption Theory Is … and Isn’t

Listen to this episode on SoundCloud.

This episode centers on Clayton Christensen's theory of disruptive innovation—one of the most discussed, analyzed, challenged, and misunderstood concepts in business. Christensen coined the term in a 1995 Harvard Business Review article and expanded upon it in his 1997 book, The Innovator's Dilemma.

A16Z's Sonal Chokshi interviews Michael Raynor, who co-authored Christensen's follow-up book, The Innovator's Solution, and wrote his own book, The Innovator's Manifesto, which attempts to test the predictive power of the theory. The key question at hand: what is disruptive innovation? And what isn't?

The Use and Overuse of the Term "Disruption"

Raynor notes that the term "disruption" has come to be used far too frequently with far too little precision. When people use it, especially in a business context, they forget what the word actually means, and often confuse it with the typical meaning of "disrupt": to hold up something, to slow it down, to interrupt an otherwise smooth and even flow. They also overuse the word to describe all manner of phenomena, many of which have little to do with the original meaning (Raynor calls this "verbal inflation").

The Innovator's Dilemma focuses on a particular class of phenomena whereby companies (often small under-resourced startups) are able to successfully enter markets that are dominated by well-managed incumbents. Disruption, as laid out in the book, describes a very particular pathway by which a scrappy little upstart is able to overturn a successful incumbent in an established market.

Chokshi gives her summary of the disruptive path: the startup comes in, usually from the lower end of the market, with lesser features or reaching a niche customer set that is not being served. The startup utilizes an accelerator technology that then allows it to move up-market. Raynor notes that the accelerator (or "enabling technology" or "extensible core") is often ignored, but is a critically important, necessary condition for disruption.

Raynor lays out the three necessary conditions for disruption:

1. Disruptors start in segments of the market that incumbents aren't motivated to fight for, or fundamentally don't see (i.e., the "low end," or an entirely new market for that product, where the only competitor is non-consumption).

2. Disruptors have a fundamentally different business model that allows them to profitably serve the niche that the incumbents don't want. While the incumbents can't make a profit there, the disruptor must—losing money in a niche market is not a disruption; it's just losing money.

3. Disruptors utilize an enabling technology that allows them to take the same business model and (later) serve the mainstream markets that the incumbents do care about. By that point it's too late: the incumbents can't respond, because the disruptor has broken the trade-offs they were depending on.

Pace of Disruption

Chokshi notes that disruption seems to be happening a lot faster, and Raynor agrees. Disruption in the steel industry, for example, took over 40 years—Nucor, the archetypal disruptor, started with rebar, a low-volume, low-margin segment of the steel business that incumbent steel makers were not motivated to defend. The company built a fundamentally different business around the mini-mill, but it took 43 years for it to become the size of the largest integrated mills. Why? The enabling technology was electric-arc furnaces and continuous casting, which improve relatively slowly.

In contrast, personal computers, which started as toys sold to hobbyists, disrupted mainframes and minicomputers in just a couple decades, because the enabling technology was the microprocessor, which gets better quickly.

Disruption Theory's Predictive Power

In his book The Innovator's Manifesto, Raynor investigated the predictive power of disruption theory, and whether it could be taught to MBA students in a way that improved their ability to predict company outcomes (he has since replicated the experiment with executives). The participants were presented with a portfolio of case studies on businesses that had been launched by Intel over the years. First, they were asked to pick winners and losers; then they were asked to do so again after being taught disruption theory. Using the theory improved their accuracy by up to fifty percent in relative terms, but only modestly in absolute terms: their success rate in picking winners rose from around ten percent to about fifteen percent. (Because, Raynor notes, "it's a big noisy world.")
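
To see why a fifty percent relative gain can still be modest, run the recap's round numbers:

```latex
\[
\text{absolute gain} = 0.15 - 0.10 = 0.05 \;(\text{5 percentage points}),
\qquad
\text{relative gain} = \frac{0.15 - 0.10}{0.10} = 50\%.
\]
```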

What Disruption Theory Is Not

Raynor, Christensen, and another professor at HBS named Rory McDonald have a piece coming out in the December issue of the Harvard Business Review tackling this question.

First case: Uber (Chokshi, like the audiences Raynor surveys, believes Uber is disruptive). Raynor walks through the characteristics of the theory as it applies to Uber. First and foremost, it's a theory of customer dependence—whom are you selling to? Whom did Uber sell to? Was it the low end of the taxi market, which the established taxi companies simply couldn't be bothered to serve? Was Uber selling to people who found hailing a cab and paying for it so inconvenient and so expensive that they had simply never used cabs before? No.

Uber was and is going after folks who want a cheaper, more convenient, cleaner, nicer cab ride. A data point: Uber has gone from roughly 350k rides a month in Manhattan to 3 million. Over that same period, yellow cab rides dropped by a roughly equal amount: about 3 million rides a month.

Raynor reiterates: disruption describes a pathway, a particular way in which a small, under-resourced entrant can succeed against well-managed, dominant incumbents. It's not a description of impact on an established market, which is how people have tended to use it. They'll say, "Oh, Uber's disruptive because it's turned the industry upside down..." While Uber has revolutionized the industry, it's not disruption, and that matters. Because if we call it disruption, then other folks who want to pursue a disruptive strategy will think, "Well, I need to do what Uber did." And what did Uber do? Something that, to Raynor's mind, is a fairly long-odds proposition: they built a better mousetrap.

Second case: Tesla. Was Tesla targeting a small, unprofitable, unattractive segment of the car market that was of no interest to incumbent car companies? No: they're targeting people willing to spend $100k on a car, a segment that is very interesting and important to companies like Mercedes, BMW, and Lexus. Those customers were underserved by existing solutions, which makes Tesla's electric drivetrain and software-oriented experience examples of "sustaining innovation." (Disruptive innovations target overserved customers, for whom established solutions are too good and too expensive.) Tesla is better explained by Geoffrey Moore's Crossing the Chasm: to cross the chasm, a company finds very demanding customers, creates a highly effective solution that solves their problems really well, and then rides a cost-reduction curve into the mainstream. That's a different route to the mainstream than the fringe-inward path disruption describes.

Third case: Theranos (which Raynor proposes is disruptive). As Raynor understands the company, it has created a whole series of blood tests that deliver a high level of accuracy with very low expense and very low inconvenience. That's an innovation, because it breaks trade-offs—an innovation, for Raynor, is "anything that breaks a constraint," or "more for less." Theranos has had difficulty finding adoption in mainstream hospital applications, so it is finding its first commercial applications in clinics and drug stores—on the fringes of the core mainstream blood-testing market. Theranos has broken certain constraints and is following a path from the fringe to the mainstream. What Raynor doesn't know enough about is whether there's an enabling technology that will allow the solution to improve to the point that it can penetrate mainstream markets.

To reiterate: disruptors need to start at the fringe and move to the middle. While some people think disruption is "it started small and got big," that's meaningless. Almost nothing big starts big.

What Did Christensen Get Wrong?

Chokshi raises the argument that Christensen missed the disruptive potential of the iPhone. Raynor argues that companies face two distinct problems: the cross-sectional problem and the longitudinal problem. Apple showed up in the mobile phone market with a better mousetrap: it did not enter the smartphone market disruptively. (Whom were they trying to sell the iPhone to? People who wanted a better phone.) So, "cross-sectionally," Apple's entry was not disruptive but sustaining. On the other hand, the "longitudinal problem" is how the iPhone then raced up a disruptive trajectory, displacing the personal computer. Every company is playing both games at the same time: it has to be winning the cross-sectional battle while getting equipped for the long game.

Similarly, Xerox, in personal copiers, had a cross-sectional battle to win. Competing with carbon paper and Gestetner machines, it had to win the cross-sectional strategic battle for the niche market it wanted. Then it followed a disruptive path into commercial applications for photocopying technology. Winning the cross-sectional battle is a strategy problem: strategy is about the constraints you embrace. The innovation problem is about the constraints you break, and it requires a different toolkit. Raynor considers disruption theory a very powerful tool in that toolkit, alongside others such as diffusion theory and crossing the chasm.

Are all disruptive products successful?

Raynor goes back to the core research that led Christensen to the theory: the disk drive market. With each subsequent generation of disk drives, from Winchester drives to eight-inch drives to five-and-a-quarter to three-and-a-half, there was a ravenous horde of companies seeking to deliver that new generation of technology, all eager and in fact quite ably following the disruptive path. Not all of them succeeded: some did, some didn't. Companies have to win the cross-sectional battle as well as the longitudinal one.

Disruption theory doesn't say anything about the longitudinal battle. That's not a shortcoming of the theory. Theories are powerful when they have boundaries. If you start applying the theory when it doesn't apply, you're more likely to make the wrong decisions than if you just didn't use it at all.

How to win the "cross-sectional battle"

Raynor points out there's a long stream of scholarship, both theoretical and applied, that seeks to tackle that problem. His contribution to that body, the 2013 book "The Three Rules," was an attempt to unpack: what does it take to win in the here and now? When you face trade-offs, which should you embrace? And how do you remain committed to those choices over time?

The three rules: Better before cheaper, revenue before cost, and "there are no other rules."

The rules are intended to address the three core questions that define any business. First: how do you create value for your customers? There are basically two ways to do that: superior value, or lower price. Raynor concluded that companies that deliver exceptional profitability over time focus systematically on better before cheaper.

The second question: how do you capture value, in the form of profits? The arithmetic of profitability is pretty straightforward: revenue minus cost. But companies that deliver superior profitability focus on generating revenue before minimizing cost.

Finally, what do you change when everything around you changes? The answer: anything except the first two rules, which is why the third rule is that there are no others.

The rules pass the test of being falsifiable. The first rule, for example, could have turned out to be "cheaper before better": there are people who believe that price-based competition is extraordinarily powerful, such as the big discounters in any industry. But the data pointed in the other direction. Similarly, it sounds sensible to argue that cost leadership is the key to superior profitability; it happens not to be true. Systematically, over the long term, companies that focus on superior revenue, either through higher unit price or higher total unit volume, are more likely to deliver superior profitability than companies that focus on cost leadership.

But there are exceptions, like Amazon, which is why it's called "The Three Rules," not "The Three Laws." Still, if you as a strategist can't be bias-free (and you can't), perhaps the best you can hope for is to have the right bias. Play the house odds, if you will: better before cheaper, revenue before cost. If the data convince you otherwise, go in the other direction.

OTHER PODCAST RECAPS YOU MAY ENJOY:

A16Z Podcast — What Comes After the Smartphone

Listen to this episode on SoundCloud.

This post features A16Z's Benedict Evans and Steven Sinofsky, interviewed by A16Z host Michael Copeland, discussing one of the most interesting questions in the future history of computing — what comes after the smartphone? When a technology gets to everyone on Earth, what comes along that is 10x the size?

They (kind of) answer the question. Read on.

What do the iPhone 6S and Microsoft Surface Book say about the present state of technology?

Evans notes that "the feeling in mobile is" that we're at the end of one wave, without another wave coming yet. We've had the smartphone wars, and Apple and Google both won, and we've had the messaging wars and Facebook/Whatsapp/Instagram/etc. won.

At first blush, it looks like phones aren't changing much—they look like last year's phones, which makes it seem like we're on the flattening part of the S-curve. In contrast, if you look at the iPad Pro, Chromebook, and Surface, it feels like a lot is changing in PCs.

But things tend to look best, be most refined, and have the coolest stuff just before they're about to be completely obsolete. The best sailing ships came at the end of the 19th century, the best battleships in 1945, and the best spy planes just before satellites made them pointless.

You might look at the Surface Pro and think: the PC has been perfected, it's got everything you could possibly want. And in contrast, you might look at the smartphone and think it's kind of boring, and that nothing much new is happening here...but you could look at it the other way around. What's happening in the PC world is it's being perfected because it's over.

"The PC world is over"

In contrast, the smartphone world is kind of just going—we've built the platform, and now we're getting an explosion of innovation atop the platform.

Sinofsky emphasizes the importance of understanding how platforms innovate and diffuse; how they go from one stage to another. What's happening in mobile is that the underpinnings have started to solidify and become predictable (more capabilities, more sensors, better battery life, thinner). We will someday reach a point where you can't improve on the device in a 6-inch form factor, but what's really happening now is the innovation has moved up the stack, into constellations of innovation.

In messaging there were dozens of companies, and now there's a center of gravity that's very substantial. Playing it forward, many areas are unsettled: banking, entertainment, and certainly productivity. The activity in each area is massive because it's building on the stability of the smartphone. This buildout is analogous to the web on top of the maturing PC.

Evans has argued that "the smartphone is not a neutral platform," meaning that while the web browser allowed developers to be agnostic to the underlying operating system, on the smartphone, there's not only the browser to build on, and apps to deploy, but, additionally, Apple and Google are integrating services deeply into the device itself. Thus innovation is occurring on three fronts, but the device is kind of set—it's roughly this size, it's got these capabilities, and performance has grown to the point that we can do things that we wanted to do 3 or 4 years ago. The smartphone becoming a solid platform has enabled a Cambrian explosion of innovation.

THE SIZE OF THE SMARTPHONE REVOLUTION

Sinofsky takes us through the size of each computing revolution.

The whole world of mainframes was fewer than 100,000 computers. (Actually, mainframes weren't measured in "actual boxes"—they were measured in the MIPS, millions of instructions per second, they delivered.) At the height of MIPS utilization, there were about 11.1 million MIPS active, which roughly equals 200 MacBooks.
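
As a back-of-the-envelope check on that equivalence (the per-laptop figure below is implied by the recap's own numbers, not quoted from the episode):

```latex
\[
\frac{11{,}100{,}000 \text{ MIPS active}}{200 \text{ MacBooks}}
\approx 55{,}500 \text{ MIPS per MacBook}.
\]
```

That per-laptop figure is at least the right order of magnitude for a mid-2010s laptop, which is the point: the entire mainframe era's peak compute fits in a couple hundred of them.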

At the height of word processors (not software word processors, but dedicated machines for typists, from companies like Wang or Sperry), there were maybe 500,000 to a million of those in total, and most of them were in the government.

There were just over a million Digital Equipment Corporation (DEC) minicomputers. To Benedict's point, the DEC VAX minicomputer and its VMS software were at their height in 1988-89: VMS supported whatever programming language you wanted, it had distributed systems, the best shell, the best tape drives, disk drives, and peripherals, and it all fit together. And then nobody bought it. DEC evaporated seemingly overnight, because it was over. The PC had already shown up.

About 5-6 million Apple IIs were ever sold; the Apple II is arguably the first PC. There were about 17 million Commodore 64s (the most popular single computer model of all time). Why? Because it broke into the den and the living room, not just the garage and the office. Since the dawn of the PC, about 4.5 billion PCs have been sold.

The interesting thing is that for each of those, the first people to buy minis were not the people who bought mainframes, but the people who didn't buy the mainframe. The people who bought the PCs were not the people who bought everything that came before it. Evans points out that the people who had word processors pooh-poohed PCs as "just a toy".

Evans takes us through the PC-to-smartphone numbers: there are something like 300-325 million PCs sold each year, with 1.5 billion in active use. There are about 2 billion mobile phones sold every year, and almost all of those will become smartphones in the next couple of years (over half are now). Adding tablets, it's about 2.5 billion devices each year, with 5 to 6 billion in use at any given time. It's an ecosystem that's an order of magnitude bigger.
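
The "order of magnitude" claim follows from the recap's own figures (the ratios below are derived, not quoted):

```latex
\[
\frac{2.5 \text{ billion mobile devices/year}}{\approx 0.3 \text{ billion PCs/year}} \approx 8\times,
\qquad
\frac{5\text{-}6 \text{ billion active devices}}{1.5 \text{ billion active PCs}} \approx 4\times.
\]
```

Annual unit sales are close to an order of magnitude apart; the installed base is about 4x and growing.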

What comes after the smartphone?

Evans ponders: where's the market that's bigger than the phone? The PC industry wasn't created by converting mainframe customers; it was created by building a new industry that was 100x bigger, which left everything else marooned. And that's really what's happening with mobile—first, mobile becomes much bigger than the PC while the PC continues in parallel; then mobile leaves the PC marooned, and the PC shrinks away to a much smaller base over time. But what does the same thing to mobile? What comes along that's 10x bigger than mobile, the next generational change, when mobile already has everyone on Earth? There isn't another generational change of that kind; it has to work in some other way.

Sinofsky points out the answer might be hiding in plain sight: in software, written atop the smartphone, which already is 10x bigger. Additionally, "internet of things" (IoT) devices will definitely be bigger, but the key question is how they will be built, and what ecosystem they will be built on top of.

Hardware-wise, Evans says they'll mostly be made of smartphone components. Not the 10-cent devices or the 5-cent sensors, but everything else—a thermostat is a smartphone on a wall, and a drone is basically a smartphone with wings. He believes connecting to and controlling these devices will happen through the smartphone ecosystem. It's not obvious that IoT creates this whole separate ecosystem that can crush Apple, Google, Samsung, ARM, Qualcomm, etc. It feels like IoT is an extension of this business rather than the next generation that's much bigger.

The IoT future

Sinofsky says there will be hundreds of devices per person—we won't be able to count. But the stitching of them together will be via the cloud, and there will be a notion of identity across all of them, with a shared user experience and a shared system for developers to program on top of (there are only so many OSs people are going to do things for).

Evans agrees, saying that you won't have 15 software ecosystems in your home, you'll have maybe 3, and a bunch of overlapping Venn diagrams of communication. The light bulb will talk to the phone or the thermostat, via some shared system.

Sinofsky notes that right now most IoT devices come with their own screen or USB router/hub (like his door lock and garage opener), which won't be the way forward. He believes that people are underestimating the opportunity to build whole new ecosystems on top of the "stack" provided by the smartphone.

He provides a litany of historical analogues. When the PC was taking hold, the debate was over operating systems and graphical interfaces (PC vs. Mac), and everyone underestimated the impact of Microsoft Office, which created a whole separate ecosystem layer equal in size to the OS. The same thing happened on mainframes: the mainframes were interesting, but Oracle built a database ecosystem on top of them, and once that ecosystem was established, SAP et al. built on top of it. The infrastructure of the web has likewise allowed for whole new companies, which often dwarf the infrastructure.

The IoT space will be similar, he believes. It will be built on top of the existing frameworks, which will be very empowering for companies in the smartphone stack, from the people who make ARM chips, through the supply chain, through the phones, to the companies that provide the software and services on top. The companies that make garage door openers or light bulbs will lack competency at many of the elements of creating good customer experiences, and other companies will spring up to provide those services.

The reinforcement of the smartphone ecosystem

Evans notes that the smartphone ecosystem is following a well-worn path that the PC blazed in the '80s and '90s. In the past, if you wanted to put a computer into something, you used a PC (e.g., an ATM, electronic kiosk, or machine tool would use a PC as its computer). Today you wouldn't do that: you'd use mobile components instead of Intel, and you'd use Android or something connected into iOS. Thus the smartphone ecosystem becomes bigger and bigger, and swamps the old ecosystem (PCs) in terms of innovation and scale effects. The Wintel ecosystem is now about 15% of computing unit sales, and is going to go down to maybe 10%. As with PowerPC and Mac OS in the '90s, people will shy away from investing in innovation on such a small platform.

To demonstrate this, you might put a Surface Pro next to an iPad Pro, next to a MacBook, next to an iPhone 6S, next to a Lumia (which you can plug a screen, keyboard, and mouse into, since it runs Windows 10; in practice it's a PC, a kind of Mac Mini with a screen). While industry analysts might draw distinctions like "that's a tablet, that's not a tablet, that's a smartphone with a removable keyboard...", these distinctions aren't meaningful. The only meaningful distinction is: which of these devices are based on the future, and which on the past? Or, which ecosystem are they on? Are they on the ecosystem that has all of the scale, all of the growth, and all of the coming innovation, or are they on the ecosystem that doesn't have that and is going to be left behind?

To Evans's earlier point about the best-ever sailing ships being built at the end of the 19th century—that's what a Surface Pro is, compared to the iPad Pro. The first steamships had masts because they kept breaking down or sinking (just as the iPad Pro has a keyboard). The sailing ships were much better and faster for a little while, but they were on the flat part of the curve. That's where the whole x86 Windows architecture is—it's been perfected, but it's reached its logical conclusion.

Sinofsky presents another thought experiment: imagine you're an electrical or mechanical engineer, and you develop a new kind of sensor. You need the rest of the computing platform to use your sensor. Which chip manufacturer do you want it tightly integrated with? It's not just about unit sales; you want the manufacturer with the health to absorb it at the software level, the firmware level, the integration level. As Benedict mentioned, this is basically what happened to Apple in the '90s: it was just too small to support innovation at the scale needed to compete with what was going on at Intel.

WHERE DO STARTUPS FIT INTO THE PLATFORM OF THE FUTURE?

Evans says it depends on which part of the hardware-software stack you want to think about. Working up from the bottom, you start with ARM, and all the licensees and companies who make chips for ARM. Then you have Qualcomm, Spreadtrum, and MediaTek, who package the chips up so that people who don't know anything about cellular technology or semiconductor design can still create smartphones. And then you have the whole Shenzhen ecosystem of phone makers, and it's unclear how that will play out, particularly whether you'll have smartphone equivalents of Dell, HP, and Compaq, who become the global-scale players. (The future of Android is unclear in general.)

Next up is the software ecosystem, which is Android and iOS. The Apple ecosystem has 700-750 million active devices, two-thirds of all app store revenue, and roughly half of web traffic, so it has sufficient scale to attract developers. (Which means this operating system war is the first time we've had two winners.) Further up the stack is Google, creating discovery, and Facebook, creating social and discovery, and many other companies trying to create other waves of value on top of those.

Sinofsky highlights the opportunity for startups in enterprise or business computing, which is very understandable (the customers actually buy things). There are numerous opportunities to solve business process and business innovation challenges, on top of commoditized infrastructure and scale, like AWS, Azure, or Google Cloud.

The two stable points of iOS and Android allow a whole bunch of innovation to happen. In the PC era, once Windows and Intel stabilized, it enabled the web to exist, because it gave targets for the browser, targets for graphics drivers, etc. So we're in a golden age where if you start a company, the level of uncertainty about where to begin is much lower than it was even a year ago.

OTHER PODCAST RECAPS YOU MAY ENJOY:

A16Z Podcast — The Power Of University Open Source and the Future of Systems Software, Part I

This episode is about AMPLab at UC Berkeley, a five-year collaborative effort between students, researchers, and faculty that is focused on addressing the "Big Data" analytics problem. That problem? While massive amounts of computing power and huge amounts of data (from a broad variety of sources) are available, the software and algorithms necessary to take advantage of these opportunities are not as good as they need to be. "Algorithms" and "Machines" are the "AM" of "AMP," and the last piece of the puzzle, "People," refers to the fact that humans are often necessary to solve the problems that machine-learning algorithms cannot.

AMPLab has birthed a number of major leaps forward over the last few years, including Apache Spark, Apache Mesos, and Tachyon (whose company, Tachyon Nexus, is discussed in Part II of this episode). For those interested in how the various AMPLab projects fit together, this chart is instructive.

In Part I, below, a16z's Michael Copeland speaks with co-founder and director of the AMPLab, Michael Franklin, and a16z’s Peter Levine to discuss the AMPLab model, and their relationship.

In Part II, Haoyuan Li, founder of Tachyon Nexus, speaks with Copeland. 

The story of AMPLab

Franklin and Ion Stoica (co-director of AMPLab) took two years off to start a company, and when they got back, academia seemed comparatively slow and quiet. A project at Berkeley called RADLab, which had organized a number of systems/machine learning experts to work on autonomic computing, was getting ready to wind down, and Franklin thought these people could be oriented towards "the big data revolution"—every company was getting more and more data, and would somehow need to manage it. 

The ingredients to AMPLab's success

Levine argues the key is a macro trend in systems software. While, traditionally, systems software experts left Cisco/Oracle to create and join startups, their progress was more incremental. The new generation is working on the most interesting CS problems applied to systems software—big data, databases, OS software—and brings new thinking and new ways of doing computing.

Franklin points out that open source has made a lot of AMPLab's success possible. When he was a researcher, you would present new software ideas to Oracle/Microsoft, and they'd hire you, steal the idea, or ignore you. Now you can make a piece of software, blog about it, put it on GitHub, and if it's useful, people start trying it. The friction is much lower.

Corporate involvement in AMPLab

Funding comes about one-half from the NSF, DARPA, and the like, and half from companies (30 sponsors). The corporate support is valuable because the Lab can share plans and receive instant feedback, along with information on problems the companies are having. The engagement also makes the sponsors more inclined to use AMPLab's output. That said, the companies don't get any IP rights—everything is open source.

While open source may appear to be less profitable for Berkeley than patents and licensing, Franklin notes that the Lab has brought in eight figures of industrial donations, and that at most one university patent, ever, can match that (he speculates it might be an early web browser patent).

a16z's relationship with AMPLab (they've invested in three companies)

Levine is particularly attracted to the centralized mechanism for new project generation at AMPLab. In terms of how to monetize open source, for a16z, IP ownership is less important than who wrote the code. The ideal project-company has the inventors as founders (forks tend to be inferior), and AMPLab companies usually have this.  

How does AMPLab know what's a good idea?

A lot of it is someone in the lab being passionate (clichéd, but true)—often projects posted on GitHub get traction. Commercial potential is not high on the list of criteria. Franklin argues that those who believe there's a dichotomy between good research and useful projects are wrong, pointing to AMPLab's success both in research prizes (such as the ACM dissertation awards) and commercially.

The most important near-term advances in Computer Science

According to Franklin, machine learning and deep learning are the next big thing, as we can now collect data and do real-time big-data work (à la Apache Spark). We can be much more predictive with the data we have. (New databases are also an interesting area.)

Franklin would also like to reach out from databases to affect the world—cloud robotics, drones, and Internet of Things. The old version of IoT was putting sensors out into the world, while the new version involves interacting with the world (including machines co-existing with people).

Levine notes that both universities and companies are pursuing the idea of moving "compute" (the process of computation) out to the endpoints (smartphones, PCs). Currently the world is centralized, with most heavy computation happening in the cloud. Now the supercomputers in our hands are actually being used as computers to do real-time analytics, not merely acting as displays.

People as the "P" in AMPLab

The role of people in the lab has evolved. At inception, the idea was that Algorithms, Machines (cloud computing, clusters), and People are the three types of resources available to make sense of data. 

People's role, specifically, is in human computation and crowdsourcing. For example, Tim Kraska, in the early days of AMPLab, worked on a project called CrowdDB. If you asked the database a question it couldn't answer and there was no network connection, it would ask the user: "I don't know—what do you think?" With a network connection, it would use Mechanical Turk and let the crowd answer. It was a "dumb" database that leveraged people to answer questions machines could not.
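
To make that fallback logic concrete, here is a minimal sketch of the idea in Python. It is illustrative only: the names and interface below are invented for this recap, not CrowdDB's actual API.

```python
# Toy illustration of a CrowdDB-style query path: answer from stored data
# when possible; otherwise fall back to a human (crowd or local user).

known_facts = {"capital of France": "Paris"}  # stand-in for the database

def ask_crowd(question: str) -> str:
    # Placeholder for posting a task to a crowdsourcing service such as
    # Mechanical Turk and collecting a worker's answer.
    return input(f"[crowd] {question} ")

def query(question: str, online: bool = True) -> str:
    if question in known_facts:
        return known_facts[question]            # the machine knows the answer
    if online:
        return ask_crowd(question)              # defer to the crowd
    return input(f"I don't know. {question} ")  # ask the local user

print(query("capital of France"))                 # answered by the "database"
print(query("Is this storefront photo blurry?"))  # a machine-hard question
```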

These days, AMPLab is still doing a lot around getting people (individual experts/analysts or crowds) to do data cleaning, and to solve machine learning problems that the machines aren't up to snuff on. The Lab is also concerned with the fact that the ultimate results of most analyses will be in front of a person, and how to best present the output. 

Read Part II of the Podcast Recap, about Tachyon Nexus and their technology, birthed at AMPLab, here.

OTHER PODCAST RECAPS YOU MAY ENJOY:

A16Z Podcast — The Power of University Open Source and the Future of Systems Software, Part II

This is Part II of an episode about UC Berkeley's AMPLab. You can read Part I, about the founding of AMPLab, its operating principles, vision for the future, and more, here.

This part focuses on Haoyuan Li, founder of Tachyon Nexus, a company birthed at AMPLab. Tachyon Nexus is a memory-centric distributed storage company.

What does that mean? Essentially, one of the fundamental pieces of software for any type of computing is the "file system," which tracks and manages the data being stored (it's analogous to a real-world filing cabinet). File system software has been oriented around the way the world has worked to date, with most data stored on hard disks or solid-state drives. Memory-centric storage is built around the idea that memory (RAM), which can be accessed far faster than drives, is coming down in price to the point where much data will be stored in memory rather than in slower storage.

Tachyon is a piece of software built around the idea that data (particularly "big data," distributed across a number of servers) will live in memory. It's a file system built from the ground up around that principle. As you might expect, it is way, way better than older file systems at managing distributed, memory-based systems.
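
To make "memory-centric" concrete, here is a toy Python sketch of the core idea: serve reads from RAM and treat slower persistent storage as the fallback tier. It is illustrative only (the class and methods are invented, not Tachyon's API), and a real distributed file system adds replication, lineage, and fault tolerance on top.

```python
# Toy two-tier store: RAM first, persistent storage as the fallback tier.
import os
import tempfile

class MemoryCentricStore:
    def __init__(self, disk_dir: str):
        self.cache: dict[str, bytes] = {}   # memory tier: ~100 ns access
        self.disk_dir = disk_dir            # disk/SSD tier: micro- to milliseconds
        os.makedirs(disk_dir, exist_ok=True)

    def write(self, name: str, data: bytes) -> None:
        self.cache[name] = data             # land the write in memory first
        with open(os.path.join(self.disk_dir, name), "wb") as f:
            f.write(data)                   # a real system would persist
                                            # asynchronously, with replication

    def read(self, name: str) -> bytes:
        if name in self.cache:              # fast path: served from RAM
            return self.cache[name]
        with open(os.path.join(self.disk_dir, name), "rb") as f:
            data = f.read()                 # slow path: fall back to disk
        self.cache[name] = data             # promote into the memory tier
        return data

store = MemoryCentricStore(tempfile.mkdtemp())
store.write("block-0", b"hello")
assert store.read("block-0") == b"hello"    # hits the memory tier
```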

Tachyon Nexus, the company, was founded by Haoyuan Li ("HY"), the co-inventor and lead developer of Tachyon (the open source software), which he helped build at AMPLab. a16z's Michael Copeland talks with HY, along with a16z General Partner Peter Levine, about the company's past, present, and future.

Tachyon's start at AMPLab + A16Z's storage investment thesis

HY was personally very interested in storage, and AMPLab had two other successful projects—Apache Mesos and Apache Spark—at different layers in the systems stack. At the time, the storage piece was still missing.

According to Levine, the idea was very attractive as an investment because of some of his long-held beliefs about storage, starting with the idea that memory will replace spinning disks, driven by cheaper and cheaper memory (thanks to the mobile phone supply chain). Given this assumption, we'll need a file system and architecture that support this new type of computing, since computers and OS code to date have been written around the assumption of a memory hierarchy that runs from fast/expensive to cheap/slow.

Tachyon maps into that future perfectly. The grand view is that memory flattens and Tachyon becomes the in-memory filesystem for all computing, but even if that doesn't happen, it's still a great filesystem for big data and other applications. 

The idea that memory will "flatten" is also somewhat heretical, which Levine believes makes it an even more attractive early-stage investment. One of his mental models is "what would happen if this very popular thing doesn't exist anymore," and Tachyon fits into the version of the future where disks are no longer used for storage.

How AMPLab creates the ecosystem for big data infrastructure ideas

Levine notes that if you take Tachyon, Spark, and Mesos (in architectural order), AMPLab has created a full stack for the next generation of big data infrastructure. 

HY says that the lab removes the pressure to publish that exists in many graduate positions, which allows researchers to focus on the long-term. The diversity of interests in the Lab provides a lot of different people to bounce ideas off of, including professors (who don't have offices, and sit in the cubicles with the students). The Lab also allows for ready communication with industry sponsors, which helps guide development. 

What makes successful entrepreneurs

Levine says that a deep understanding of their space and the passion to go after it are critical, but equally important is a willingness to learn all the things the entrepreneur doesn't know. HY is a great example—he's very passionate and understands his area better than anyone on the planet, but needs to learn a lot of little things to run a company. a16z looks for someone who will be coachable, because there is a blueprint for how to build a company—from building out a sales organization, to hiring a product manager, to hiring a CFO, to marketing. A lot of folks don't want to be coached, saying, "I'm just going to go do my shit, leave me alone." And you probably can't build a company if the only thing you can do is technology.

HY's development as entrepreneur

HY believes most of the necessary skills to run a company can be picked up as you go along (except for, notably, building a "production-level system"). Hiring is a representative example of how to improve—in the beginning the company didn't have a process, and then created a process, and now is modifying and improving the process incrementally. Prioritization is the key to not being a "bottleneck" at the company, in addition to hiring a great team that can move forward quickly without HY's input. That said, in some sense, he admits the CEO is always a bottleneck, since he has infinite things to do. 

Levine notes how incredibly difficult it is to transition from individual contributor to manager/leader. As an individual, you can do everything and hack your code. When you become a manager, you have to do that through other people, and that transition is non-intuitive and very difficult. As a leader, most of your time ought to be spent hiring and coaching new people, and that's completely counterintuitive—if you don't hire people you'll continue to be the bottleneck. It's particularly difficult if you've written the code and know every line. 

Tachyon Nexus's future challenges

For HY, the next hurdle is continuing to hire the best people, which is hard and takes time. 

For Levine, the company's next hurdle is figuring out how to commercialize the open source project—deciding what's paid and what's free (if anything). Without a business model early on, potential partners and customers don't know what the company plans to charge for. And the longer that goes on, the harder it is to start charging (things that have been free for a year can't suddenly be not-free). It's important to lead with crumbs, pre-conditioning the community to understand what the business model is going to be.

Knowing what he knows today, would HY have started the company?

HY: Absolutely.

Levine: He'll let you know in five years.

OTHER PODCAST RECAPS YOU MAY ENJOY:

A16Z Podcast: Dell + EMC — Why the Python Just Ate the Cow

A16Z Partner Peter Levine, Cumulus Networks co-founder/CEO JR Rivers, and Actifio founder/CEO Ash Ashutosh discuss Dell's $67 billion merger/acquisition of EMC, the largest tech M&A deal ever, with A16Z host Michael Copeland.

What does Dell need and what does EMC get them?

According to Levine, Dell is looking to shift their business from the declining PC market into enterprise IT, and this cements that move. Dell's core competency is developing servers for enterprise, and the missing part of the plan was the storage piece, which EMC solves. Dell is moving from server vendor to systems vendor—merging servers, storage, and networking. 

Rivers notes that Dell was dancing around moving into storage and EMC was dancing around servers—the two have been dancing around each other for a long time. Additionally, Dell gets EMC's well-vetted enterprise sales force, one of the very few of its kind.

Are these companies "dancing at the right party"? How does the datacenter landscape shift going forward?

Levine argues that the proverbial "puck" in the data center world is moving toward architectures like Facebook's, Google's, and Amazon's, and away from "Wall Street datacenters" (Cisco, EMC, Oracle databases, Sun Microsystems servers—the blueprint of 10-15 years ago). In the new environment, the incumbent hardware (Cisco, EMC, Dell, HP) is not really there. While companies are still spending a lot in this area, many are moving to "hyper-scale deployments," where the hardware is a commodity and the software is what matters. Dell and EMC need to reckon with this future.

Rivers notes that while the highest-achieving companies (Google, Amazon) want to build their own infrastructure, companies "one step down" want it to feel like they built their own, with the same architecture. Dell aspires to satisfy that need, and EMC can provide infrastructure hardware in "Vblocks": data center racks with integrated storage and provisioning from EMC, switches and servers from Cisco, and VMware virtualization software running on the servers.

Ashutosh points out that there's a divergence in the IT world, exemplified by two conferences that happened last week. In Las Vegas, Amazon AWS re:Invent hosted "a bunch of anarchists asking for freedom from operations," meaning those who prefer not to deal with infrastructure, but prefer to buy it on-demand from Amazon.

That same week, in Orlando, a Gartner event hosted 8,500 CIOs "wringing their hands." The CIOs need to respond to the fact that their customers need AWS-level capabilities (speed, APIs, platforms), which pushes infrastructure even further toward commodity status. Given the shift to commoditized hardware with rapidly evolving software, in order to succeed, Dell-EMC "has to be the biggest Wal-Mart around"—it cannot be a boutique selling commodity product, but must be the retail king.

How fast is the datacenter "puck" moving? The CIOs spend a lot, and it's not going to shift overnight.

Rivers argues that convergence is coming, but not soon, and Ashutosh agrees that hybridization is the name of the game today. Vendors are putting infrastructure on sale, and CIOs are taking it (from AWS, Azure in particular, and IBM). It's a buyer's market—everyone's giving free infrastructure for 2 years, and CIOs are taking that time to plan. 

Levine predicts that while the data center of the near-term is a hybrid between private data centers and "public" cloud storage, the architecture will still look like Facebook/Amazon/public cloud. The data center may be on-premise or hosted somewhere, but architecturally, the system is new. Commoditized hardware and sophisticated software are the essence.

Is the merger a good idea?

Rivers points out some of the underlying financial realities—hardware is a commodity, so Dell-EMC plan to excel on distribution (with their enterprise salesforce and the beachhead of VMware). And given that EMC owns VMware, which would be worth at least $40 billion on its own, Dell only "really" paid about $20 billion for the rest of EMC.

Ashutosh says it's a win-win-win. Customers win, as they can get the best prices from a big player. EMC wins, because it was cresting (it had nowhere else to grow). Dell wins, because it can come back as a megastore, not a boutique.

Levine brings us to our next topic: did EMC have to do this deal because of activist shareholders? You could imagine them buying other companies in their ecosystem and becoming "Dell-EMC", without the Dell. In this case the python (Dell) is eating the cow (EMC). It's an upside-down acquisition.

How do activist shareholders play into this? Public markets?

Levine argues activists limited EMC's optionality. While activists claim to represent shareholder interests, they are short-term focused, and are generally in favor of selling companies and returning cash to shareholders. (Good) management has a longer-term view, and generally wants to invest in R&D and M&A.

EMC did one of the best enterprise acquisitions ever when it bought VMware, a deal that would not have happened with activist board members, given their limited appetite for risk. Innovation is risky, and activists limit that risk-taking. Dell was able to do this deal because it is private, while EMC had to do it because of activists. Large public companies have very little optionality, so what's the point of being public?

Rivers piles on: public markets are great when you're growing, but if a shareholder owns 7%, they can't sell all of that at once, so the only way to make money is through sale or privatization. With as much cash as EMC was generating, Dell can "normalize operations," increase cash flow, and make out (in a profit sense).

Ashutosh raises the point that CIOs might be spooked by this acquisition—if EMC can be acquired, who can't? Can they depend on anyone's products not to be cancelled or changed drastically? 

So who is vulnerable to acquisition or failure?

All the interviewees fail to address this head-on. Rivers predicts a more "casual" relationship between buyers and suppliers. While Google-style infrastructure (the new normal) is reinvented every two years, "Wall Street"-style infrastructure (the old normal) was added to gradually. Enterprises will now have to work more incrementally and flexibly.

Ashutosh points out that platforms, APIs, open standards, and open source are becoming critical parts of enterprise strategy, as their ownerless nature makes them more reliable: Git and NoSQL, for example, cannot be acquired. They are the new IT, and infrastructure is just a supporter.

Mergers have a long history of not working. What can go wrong?

Many things! Levine brings up the analogy of "the collision of two garbage trucks, with crap everywhere." When mergers fail, it's usually not about the technology, but because the go-to-market and sales organizations get muddled.

Dell is good at selling commodity components, and EMC is good at selling high-margin components. An EMC sales rep might not enjoy selling a Dell server for a low commission. A Dell telesales person selling a complex EMC storage device may not understand its complexity, or be able to provide adequate customer support. 

On the positive side, Rivers notes that EMC sells highly engineered systems into which it can now integrate Dell commodity components, making those systems more cost-effective. Dell's salesforce can sell the same components into a different market, so the total market for the components will be vastly expanded. Additionally, EMC is tremendously profitable ($6 billion last year), and merging into a private company gives them time to figure it out.

Levine notes that even with infinite time, the cultures, engineering organizations, and sales organizations might never combine effectively (Symantec and Veritas are a salient example)—it doesn't always work. Given that it's the largest tech merger ever, everyone will look back on it as either the smartest or the dumbest M&A deal of all time. We'll see!
