The Existing IP and Non-IP Address Structure: Preparing for Remodeling
Designing Addressing Architectures for Routing and Switching
Author: Howard Berkowitz
Idealism is fine, but as it approaches reality, the costs become prohibitive.
William F. Buckley, Jr.
The metaphor of the melting pot is unfortunate and misleading.
A more accurate analogy would be the salad bowl, for, though the salad is
an entity, the lettuce can still be distinguished from the chicory; the tomatoes
from the cabbage.
Carl M. Degler, Out of Our Past: The Forces
That Shaped Modern America
Stand ye in the ways, and see and ask for the old paths, where
is the good way, and walk therein.
Jeremiah 6:16
Before the backbone can be designed, you need to know what is at the
edges. You then need to consider how you will carry non-IP protocols, or poorly
numbered IP traffic, across the backbone.
You might have an existing backbone, or perhaps you are first building
one to link growing edge sites. Possibly, you might have multiple backbones
for different protocol families, especially if you run IBM networking as well.
Our emphasis in this chapter is on the legacy physical networks, as
well as the current IP addressing structure.
Support costs are a major part of overall network cost, and legacy mechanisms
tend to be among the most expensive to support. Principal among these support
costs is that of network design and configuration control. Chapter
6, “Internet Failing: Details at 11,” introduces the need
for conscious addressing design and management, which will be detailed further
in chapters on network management and configuration control.
A basic assumption in this book is that backbones principally use
IP routing, possibly supplemented with some direct Layer 2 connectivity with
Frame Relay or ATM. Nevertheless, there are perfectly valid reasons to have
other routed protocols in an enterprise backbone.
Most of the sections of this chapter provide guidance on addressing
considerations in different protocol families. They will prepare you to take
an inventory of your present environment. You'll find worksheets for that
inventory at the end of the chapter. Some protocol families, such as AppleTalk,
DEC's Local Area Transport (LAT), default NetBIOS networking on Microsoft
systems, and older Novell systems, simply do not scale well to the wide area.
IBM networking expects to control its own network resources, but it can be
far more economical to have only one set of network resources over which all
protocols run. These can be made to work on an IP backbone with various forms
of tunneling. Application gateways might be a better choice.
When NetBIOS, Novell Internetwork Packet Exchange (IPX), or AppleTalk
networking is part of an enterprise network, the key problem is ensuring
that endpoints can find one another. In these protocol families, hosts and
clients search for one another by name, and the result of this search is an
address. In most cases, these are not IP addresses, so you need to implement
ways to map these addresses to IP addresses and provide mechanisms
for the non-IP protocols to reach the IP tunneling or other transport mechanism.
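One common way to carry non-IP traffic across an IP backbone is a GRE tunnel between edge routers. The following Cisco IOS-style sketch is illustrative only; the interface names, addresses, and IPX network number are assumptions, not taken from the text:

```
! Edge router A: carry IPX across the IP backbone inside a GRE tunnel
interface Tunnel0
 tunnel source Serial0
 tunnel destination 192.0.2.2   ! far-side edge router (illustrative address)
 ipx network AB10               ! IPX runs over the tunnel; the core sees only IP
```

The backbone routers forward only the encapsulating IP packets, so IPX routing and addressing remain confined to the edges.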
You need to decide whether you will use
ships in the night (SIN) routing for your non-IP protocols or tunnel
these in IP and rely on IP routing for path determination. In SIN routing,
more than one protocol family's native routing protocols coexist in the same
network, but do not interact with one another.
Good examples of pure routing protocols that might run separately are
the protocols for IP and for DECnet Phase IV. There also are integrated routing
protocols, such as IS-IS, which handles IP, OSI, and DECnet Phase V, and Cisco's
EIGRP, which handles IP, AppleTalk, and Novell IPX.
Another range of alternatives is to use Layer 2 connectivity for non-IP
protocols, such as Frame Relay with RFC 1490 encapsulation or ATM with MPOA
encapsulation, discussed in Chapter 4, “Transmission
System Identifiers and Logical Address Mapping: A View from the Bottom.”
The major protocol families, called desktop, workgroup, or LAN protocols,
historically have had far less notion of specialized name resolution
services, such as DNS, than the IP protocol family. Their basic mechanisms
have involved some type of broadcast or multicast among client and server
endpoints. In some cases, routers take part in this process of name resolution,
but they mimic endpoints when doing so.
As workgroups grow and, more importantly, interconnect with other
workgroups in a large enterprise, there has been a distinct trend to add directory
services to these protocol families. Directory functions can be provided either
by native directory services or by migrating application protocols to an IP
world where they can use DNS.
These resource discovery protocols assume that the client and server interact directly.
Routers can pass along information, but they are viewed more as passive conduits.
In routing, a black hole is a destination that appears in routing
tables, but, when traffic is sent to that destination, the traffic is
discarded. Black holes can be configured deliberately. Although a detailed
discussion of the use of black holes in routing is beyond the scope of
this book, intelligent use of them can increase network stability.
The principle here is that certain packets cannot be delivered, and sending
them causes a destination unreachable message to be generated
for every one of these packets. Given that this message is generated by
some router (a distant one if routing information about unreachability
is not propagated, a local router if such information is propagated), there
is a design tradeoff between the overhead of tracking detailed reachability
information and the overhead of passing some traffic that could be discarded
earlier if some unreachability information were known.
In large networks, it is usually better to avoid the detailed tracking
overhead, which avoids the need for constant recomputation of routing tables.
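A deliberately configured black hole is often just a static route to a null interface. The following Cisco IOS-style sketch illustrates the idea; the prefix is an assumption for illustration, not from the text:

```
! Discard traffic for an aggregate we announce but do not track in detail.
! Longer, more specific routes still win; only otherwise-unroutable
! packets within the aggregate fall into the black hole.
ip route 10.1.0.0 255.255.0.0 Null0
```

Because the discard happens locally, the router need not carry or recompute detailed reachability information for every subnet inside the aggregate.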
Because the broadcast-based mechanisms tend to produce a lot of overhead,
various strategies for reducing overhead have been developed. These tend to
be alternatives to lower-overhead directory services, but have the advantage
that they can be implemented in routers without disturbing a large number
of hosts. The following are some of these strategies:
Filtering, to prevent queries or announcements from going
where there will be no response or other interest.
Static definition, where a local router has a fixed definition
of the location of the endpoint and will respond as if it were the endpoint.
The router does not know if the endpoint is actually reachable and becomes
a black hole if the endpoint is not reachable. Such definition can require
significant configuration setup and maintenance.
Proxy resource resolution, where the router pays attention to the content of
broadcast or multicast requests and saves the response. When subsequent queries
are made, the proxy server answers them from a local cache. Again, the proxy
might not have the most recent information, so black holing is a possibility.
A compromise is to put a lifetime on each cache entry and periodically resolve
the cached resource entry when this timer expires.
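The compromise described above, a proxy cache whose entries expire and are re-resolved, can be sketched in a few lines. This is a minimal illustration, not any particular vendor's implementation; the class and method names are invented for the example:

```python
import time

class ProxyCache:
    """Answer repeated resource queries from a local cache.

    Each entry carries a lifetime; expired entries are dropped so the
    proxy eventually re-resolves a resource that may have moved,
    limiting (but not eliminating) black hole behavior.
    """

    def __init__(self, ttl_seconds=300.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock, useful for testing
        self._entries = {}          # name -> (address, expiry time)

    def learn(self, name, address):
        # Called when the proxy overhears a broadcast or multicast reply.
        self._entries[name] = (address, self.clock() + self.ttl)

    def resolve(self, name):
        # Returns a cached address, or None if unknown or expired.
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if self.clock() >= expiry:
            del self._entries[name]  # stale: force a fresh resolution
            return None
        return address
```

A missed lookup (None) is the signal to fall back to the protocol's native broadcast resolution, after which the proxy learns the fresh answer.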
How helpful filtering can be to you depends on the structure of your
communities of interest. If there are many communities of interest, and services
that are only used within the community, filtering can be extremely helpful.
If you have large numbers of servers that need to be seen everywhere, you
cannot filter out their existence. In such cases, you want to look at more
efficient resource location protocols or static/proxy mechanisms.
Black hole behavior by static or proxy responders might not be as important
an operational problem as it first seems. Think about the operational realities.
If a user cannot reach a resource, he calls the help desk to complain. The
user behaves in the same way if the resource is actually down, if the service
mechanism is pointing at it incorrectly (for example, it has moved and static
definitions have not been updated), or if a proxy points at a correct location
but connectivity to that location has been lost.
To diagnose a user report of this sort, the normal procedure is to test
connectivity to the user. If the user is unreachable, then there is a network
problem that needs to be fixed. If the server is unreachable, again there
is a network problem. If the server is reachable but the service is down,
a host problem needs to be solved. Independent network management tools might
already have been monitoring the server, and the help desk might already know
about the problem.
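The triage order described above can be expressed as a simple decision function. This is only a sketch of the logic; in practice the three inputs would come from connectivity probes (ping, SNMP polling, and so on), and the function name and messages are invented for the example:

```python
def diagnose(user_reachable, server_reachable, service_up):
    """Classify a 'cannot reach resource' report per the triage order:
    check the user first, then the server, then the service itself."""
    if not user_reachable:
        return "network problem: user segment unreachable"
    if not server_reachable:
        return "network problem: server unreachable"
    if not service_up:
        return "host problem: server reachable but service down"
    return "no fault found: re-test or escalate"
```

Note that the same sequence applies whether or not a proxy or static definition black-holed the original traffic, which is why black holing may not slow troubleshooting in practice.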
So, if a proxy or static definition causes
user traffic to be black holed rather than producing an immediate unreachable
message on the workstation, does that really slow the process of troubleshooting?
If it does not, then proxies and static definitions are valuable operational tools.