SMART Letter #4:
Stupid Computer Networks?
March 14, 1998



            SMART Letter #4 -- March 14, 1998

        For Friends and Enemies of the Stupid Network

            Copyright 1998 by David S. Isenberg

      This document may be redistributed provided that

      the 11 lines containing this notice accompany it. -- 1-888-isen-com

      It takes SMART people to design a Stupid Network



SMART People,

Since "the story" appeared in The Wall Street Journal

(did you miss it? see the citation in Phil Agre's piece below),

the SMART List has grown to 465 people.  There has been a

commensurate increase in offers of pro-sulting work, and it

is fair to say that this venture is now, unambiguously, LAUNCHED!

There has been a downside to all the publicity, though.  

I have had to stop responding personally to each new list member.  

The big thrill of starting has been meeting

so many amazing people doing so many incredible things.

But I am now up against what George Gilder calls "The Span-of-Life

Paradigm."  He means that we are allotted but 24 hours in each day.

I still love the stories that flood my inbox; I only regret that

I can't get inside each of them, to the people who are living them.


New SMART List members:  You got in touch with me because you are 

interested in the future of communications networks. You probably 

have some unique perspectives or singular observations to share with 

the rest of us.  I strongly encourage you to do this!

One way to do it is to write to me, and while I will *try* to 

reflect the remarks that I consider important in the "moderated list"

format you see here, the "moderator" is really a dictator, so

there is no guarantee that your contribution will be 

seen by anybody else.  So a second way to participate is to post

your remarks publicly -- then everybody can see your contribution.

And you CAN do both!


Are you going to VON in San Jose? (It's March 30 - April 2.)

I will be hosting a STUPID NETWORK SESSION on

April 1 (April Fools' Day) at 9:30.  If you are a SMART person,

and if you will be there, YOUR PARTICIPATION would be welcome!

Instructions for participants are available online.

To get your two minutes at the podium, send me email!


Other events are on the horizon:

I will be the luncheon speaker at an NSF-sponsored Emerging

Wireless Communications Workshop in Washington DC, May 4 & 5.

Wireless is the Stupid Network in action, an end-run around

Control-Central.

And on May 18 and 19, I will be back in Washington DC as

lead-off speaker for a conference called "The Bandwidth Explosion:

Understanding New Technologies That Are Driving Business Opportunities."

No website yet; for more info call 781-545-5230.


Advertisement:  Need 155 Mbit/s wireless access?  I am working with

a just-out-of-the-garage company that can do wireless, unlicensed,

point-to-point OC-3 at up to 750 meters, using transceivers the size

of an Office95 manual.  It is very exciting, and very Stupid.

A recent urban technology trial was a big success.  They're looking 

for a few friendly first customers.  Email me for more.




One morning this week, the following note came across my screen:

>While thinking through the "stupid network" argument, I got to wondering

>whether much the same concepts could be applied to one of the current

>controversies over computer networks. In some respects, the 1980s move away

>from the mainframe / dumb-terminal model to the PC / dumbish-network model

>fits your argument pretty well. Sure, servers supply some computing power,

>but most of it resides on the desktop, be it on a PowerMac or a

>SparcStation. But now we have Ellison and McNealy telling us that what the

>world really needs is super-intelligent server-based networks linked up to

>thin clients (i.e., net computers). In other words, intelligent network, dumb

>terminal. (Oh yeah, and Microsoft dies...)


>This seems counter-intuitive in several respects. For one thing, it returns

>the world to what is essentially the mainframe model, MkII -- with all the

>flaws that entails. For another, it assumes that computer users will

>willingly give up the power they've accumulated on their desktop. And it

>goes squarely against your telecoms model, as well.


And in an amazing burst of synchronicity, 8 minutes later this arrived,

completely independently (I now believe it is from Red Herring):

>"This war, of course, is over whether or not, sometime in the

>not-so-distant future, PC users will be willing and able to

>hand over much of their computer system management tasks to

>some faraway Internet service provider. According to the

>McNealy/Ellison worldview, the new PC metaphor will be much

>like that of the telephone. That is, all the heavy lifting

>-- such as application management and distribution, data

>storage, and communication services -- will be handled by

>professional computer geeks behind the scenes, invisible to

>the end user. This view is in clear opposition to that of

>Bill Gates, who is still preaching the increasingly

>complicated proposition of Windows NT, Windows 98, and

>Windows CE, as the "growth engines" behind Microsoft's . . .



I replied:

I guess the argument hinges on two things:

1) centralized vs. user control, and

2) where the "edge" of the network is.

Re: 1) I do not think these "NCs" are selling

into creative or entrepreneurial environments -- the natural market

seems to be telemarketers, reservation agents, and other functionaries.

Re: 2) Whereas in the past, the telcos controlled the whole 7-layer

cake, now there can be a clean separation between switching and higher

functionality, so if the centralization is friendly and in-house,

it is not as big a barrier to (institutional, organizational) innovation

as in the old Phone Co Knows Best days.

Furthermore, the server can be sized to the clients and each (relatively

inexpensive, relatively autonomous) server can be adapted to 

the particular clients' functional needs.

My bottom line is that there will be both models, plus a third --

completely dumb, single-function appliances -- and a fourth --

everyday items, like cars and house heaters and refrigerators and

door locks, that gain a 'voice' on the network.  None of these will

"win"; all will share the marketplace, each will find a customer base

in which it is useful, and each of these four models

will make the others easier to use.


But UCSD Professor Phil Agre, purveyor of the esteemed e-letter

Red Rock Eater News Service,

spoke to the issue with even more depth in his March 8 letter:

. . . . 

More about David Isenberg, whose Stupid Networks newsletter I sent out

to the list a few weeks ago.  David was one of the small furry animals

at AT&T -- an old-time telephone guy who nonetheless saw and mapped

the evolutionary path forward during the reign of the dreaded-yet-doomed

Allenosaurus.  He got less than no thanks for this, and he is now out

on his own.  His "stupid networks" paper, long circulating in samizdat

form and now seeing the light of day, is an instant classic.  It is

the most straightforward explanation I've seen of the reasons why the

phone companies ought to fear the Internet.  And I'm glad it's out,

because I got a lot of absolute heck when I mentioned the issue here

a while back.  One person, for example, told me that I must be unaware

that the phone system has long been digital.  Now I can direct the

doubters to David's paper.

The bottom line is that the Internet distinguishes different service

layers in a parsimonious way, so that each layer can be applied in the

widest possible variety of contexts.  Clean functional differentiation

among service layers, however, means that simple data transport now

threatens to become a commodity business -- a disaster for the phone

companies, which are addicted to the extra revenue that they obtain

from the voice-specific "intelligence" that they wire into the phone

system.  In any case, the Wall Street Journal wrote up David's article

recently.  The best reference I have is Gautam Naik, "Internet Threat:

Will Technology Trip Up the Telecom Titans?", The Wall Street Journal

Europe's Convergence Supplement 3(4), Winter 1997 (11/17/1997), but

it may also be available on the Journal's website, to which I don't subscribe.

Perhaps this is a commonplace, but I see a rough analogy between

the phone companies and Apple Computer.  Telecommunications networks

and operating systems both exhibit two important economic phenomena:

network effects -- the benefit of using a given system increases

as other people use it -- and economies of scale -- the price of the

software can decrease rapidly as more people use it.  This combination

dramatically favors whichever system has the most users, and the best

strategy somehow combines a steady flow of capital to finance software

upgrades with a steady increase in the size of the network -- that is,

in the number of people who are using the system.
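Agre's two phenomena combine in a way a toy calculation can make
concrete.  The sketch below is my own illustration with made-up
constants (nothing here comes from his letter): per-user value grows
with the size of the network, while per-user software cost shrinks as
the fixed development cost is spread over more users.

```python
# Toy model of Agre's two phenomena (my illustration; all numbers
# are made up).  Value per user rises with network size -- each user
# can reach n-1 others -- while the one-time software cost, divided
# among n users, falls.

FIXED_DEV_COST = 1_000_000.0   # hypothetical one-time software cost
VALUE_PER_LINK = 0.10          # hypothetical value of reaching one other user

def per_user_value(n):
    """Network effect: each of n users can reach n-1 others."""
    return VALUE_PER_LINK * (n - 1)

def per_user_cost(n):
    """Economy of scale: fixed cost divided among n users."""
    return FIXED_DEV_COST / n

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10} users: value/user {per_user_value(n):>12.2f}, "
          f"cost/user {per_user_cost(n):>10.4f}")
```

Whichever system has the most users wins on both terms at once, which
is the combination Agre says "dramatically favors" the largest network.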

Now, in each case, the product in question consists of an underlying

commodity that can easily be standardized in its outward functionality

-- i.e., the personal computer hardware or the transport of digital

data -- with software built on top -- the operating system or the

various value-added phone system functionalities.  Apple and the

phone companies have both depended upon what is increasingly called

a "stovepipe" strategy: they sell the underlying commodity and the

added software functionality as an integrated unit, and they get

their cash flow from the high profit margin they obtain by selling the

whole package.  By conscious design or not, Microsoft and the Internet

depend upon a different strategy: they break apart the underlying

commodity from the added software.

Why does this strategy work better?  The strategy pursued by Apple

and the phone companies is stable and even highly profitable in the

absence of competition.  But in the long run, as I suggested above, it

resembles an addiction.  In particular, Apple and the phone companies

are getting their cash flow from the wrong source.  The strategy

pursued by Microsoft and the Internet produces value through network

effects and economies of scale, and so once their system gains a

critical mass of users, it can combine constantly lowered prices

with high profit margins.  Why can't Apple achieve the same goal --

lowering prices to expand the market, then capturing back the investment

through economies of scale -- by pushing for production efficiencies

on its hardware?  It can try, but it is competing against a lot of

companies that are totally focused on reducing overhead and gaining

incremental production efficiencies in a commodity marketplace -- the

market for hardware that conforms to the IBM-PC standard.  This is the

accidental genius of the IBM-PC regime: one company (Intel) captures

big profit margins by controlling the standard for the processor,

another company (Microsoft) captures big profit margins by controlling

the standard for the operating system, and all of the other companies

work like dogs to obtain economic margins on all of the other components,

none of them encumbered by strong intellectual property protections.

What is the analogy, then, to the Internet?  Isenberg's analysis is

that the phone companies are shafted because the phone system is too

specialized to a certain range of functionalities.  The Internet, by

defining its service layers more parsimoniously, has the potential of

being applied to a much wider range of applications, including all of

the applications of the current phone system.  As a result, the fixed

costs of developing the Internet can be distributed across a larger

number of customers.  Once the Internet achieves a subscriber base

that's anywhere in the ballpark of the phone system, therefore, we

are likely to see the same vicious spiral that consumed Apple: those

who control standards (if, in fact, *anybody* ends up controlling

any major Internet standards) will be able to lower prices while

also enjoying high profit margins through the economies of scale in

software, while everyone else will have to make their money the old-

fashioned way, as companies like Cisco (in the case of the Internet)

and Compaq (in the case of PC's) do now.

Beyond this, of course, it's hard to argue in a principled way about

whether the phone companies will wake up in time.  Yes, the phone

companies handle most of the long-distance Internet traffic -- but

then, on one level an Internet backbone is basically a permanent

telephone connection.  And yes, even AT&T is announcing something that

is said to resemble Internet telephony.  The issue is not whether the

phone companies will handle a lot of Internet traffic -- that seems

certain.  The issue is whether they will make any money doing it.

This is what the Internet people are looking forward to: once TCP/IP

has finally cleaved the transport layer from the applications layer

(or layers), the stovepipe model no longer applies.  Because the

layers are no longer integrated, it is no longer certain that the

same company will operate them.  Transport will become a commodity

business, and a hundred applications flowers will bloom.  
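That cleaving can be seen in miniature in code.  The sketch below is
my own illustration, not anything from the letter: the "application"
talks only to a generic byte-stream interface, so whether those bytes
ride fiber, copper, or wireless underneath is invisible to it --
transport reduced to an interchangeable commodity.

```python
# Sketch of transport/application separation (my own illustration).
# The application function sees only a byte-stream connection; it
# has no idea what transport carries the bytes underneath.

import socket

def application_echo(conn):
    """'Application layer': read bytes, echo them back upper-cased.
    Nothing here depends on the transport below the socket interface."""
    data = conn.recv(1024)
    conn.sendall(data.upper())

# Any byte-stream transport will do; a local socketpair stands in
# for a real TCP connection over any physical network.
a, b = socket.socketpair()
a.sendall(b"hello, stupid network")
application_echo(b)
print(a.recv(1024).decode())  # the echoed, upper-cased message
a.close()
b.close()
```

Swap the socketpair for a real TCP connection and the application code
does not change by a single line -- which is exactly why transport
becomes interchangeable.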

That's the theory.  And it's an appealing theory.  So let me try to

generalize it.  What doesn't work, according to this theory, is the

stovepipe strategy.  This strategy is based on what we might call "cross-

layer bundling" -- in other words, bundling of functionalities across

service layers.  The problem with vertical bundling is that you're

vulnerable to the emergence of a new de facto standard that replaces

the functionality on the lowest layer.  When this happens, your users

risk becoming stranded and it becomes difficult to recruit new users

who wish to comply with the standard.  Faced with such a dilemma,

one approach is to rapidly port your system to the new standard --

for example, when Lotus Notes was ported to the Internet.  Another

approach is to wither and die -- for example, General Magic's

Telescript, or the whole lot of "push" technologies for the Internet.

Looking at the big picture, every software vendor that specializes

in a particular vertical market -- medical software, CAD, etc. -- is

probably engaged in cross-layer bundling.  This is obvious in the case

of systems that depend on any networking protocol except TCP/IP, but

it is also true on other layers as well.  A vendor who establishes a

de facto standard that cuts across vertical markets -- for example,

Microsoft's Windows, Office, and Back Office -- can spread development

costs across all of those markets.  Applications vendors in those

markets may then be compelled, often at great expense, to port their

systems to the new platform.

When this dynamic works right, the result resembles the Internet:

each layer is defined in a clean, parsimonious way, and the de facto

standards are all open.  When this dynamic works wrong, the layers

are defined badly or somebody owns them.  However it works, I call

this dynamic the "platform cycle", and I think it is one of the really

important patterns in the development of information technology.


Thank you for your time and attention.  

What do *you* think?

David I



David S. Isenberg     


18 South Wickom Drive   888-isen-com (anytime)

Westfield NJ 07090 USA  908-875-0772 (direct line)

                                908-654-0772 (home)


     -- Technology Analysis and Strategy --

        Rethinking the value of networks 

      in an era of abundant infrastructure.

