After turning that remark over in my mind for several days, I had what I think is a mildly important recognition about the limits of *any* interaction model.
I should start by saying I've been thinking about this whole problem of "what topological model of the underlying network do we present to the higher layer" in the context of a more advanced routing architecture. (In my case, I chose something which looks vaguely like Nimrod, of course! :-).
I know this may sound sort of useless, thinking about the problem using
tools we don't currently have, but I would argue to the contrary. My
reasoning is that I now think this problem has some *fundamental*
limitations, and it's easier to see those limitations if you take away the
artificial constraints imposed by our current less-than-ideal routing tools.
Anyway, my thoughts were prompted in part by a later comment of Curtis Villamizar's:
  Note: networks built using the overlay model need not provide a full
  mesh, contrary to misinformation provided on this mailing list. Today
  ATM PVC based IP backbones are often not built as full mesh networks,
  to reduce the number of routing adjacencies.

This comment reminded me that there's a general problem whenever you try and "abstract" a piece of the network topology. What I mean by "abstract" is the following process:

- First, draw a closed line around the group of nodes (and the arcs among them) that you want to abstract; the arcs which cross that line are the boundary arcs.
- Second, replace everything inside the line with some other graph - any collection of nodes and arcs you choose - with each boundary arc reattached to one of the new nodes.
There's a reason I defined my second step the way I did, instead of simply
saying "replace any group of nodes bounded by closed line with a single
node" (which I'll call "non-virtual abstraction"). The process I defined
allows one to create arbitrary "virtual" representations of a piece of
topology one is trying to abstract.
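As a concrete sketch of this process, here's roughly what the abstraction step might look like in Python, with a topology stored as a simple adjacency-set dictionary. The function name, the representation, and the little ATM example are all mine, purely for illustration:

```python
# Sketch of the two-step "abstraction" process described above.
# A topology is an undirected graph stored as {node: set(neighbors)}.

def abstract(graph, inside, virtual):
    """Replace the subgraph induced by `inside` with an arbitrary
    `virtual` graph, keeping every boundary arc (an arc crossing the
    closed line) attached to some node of the virtual graph.

    `virtual` maps each virtual node to a set of neighbors, which may
    be other virtual nodes or outside nodes (the boundary arcs).
    """
    result = {}
    for node, nbrs in graph.items():
        if node in inside:
            continue                          # drop interior nodes
        # drop arcs leading into the abstracted region
        result[node] = {n for n in nbrs if n not in inside}
    # splice in the virtual representation
    for vnode, vnbrs in virtual.items():
        result.setdefault(vnode, set()).update(vnbrs)
        for n in vnbrs:
            result.setdefault(n, set()).add(vnode)
    return result

# A toy ATM fabric {a1, a2} between routers r1..r3, replaced by one
# virtual hub H (the result a "non-virtual abstraction" would give):
net = {'r1': {'a1'}, 'r2': {'a1', 'a2'}, 'r3': {'a2'},
       'a1': {'r1', 'r2', 'a2'}, 'a2': {'r2', 'r3', 'a1'}}
print(abstract(net, {'a1', 'a2'}, {'H': {'r1', 'r2', 'r3'}}))
```

Note that the second step places no constraint at all on what goes inside the line - which is exactly what makes the representations "virtual".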
Here are some examples of this 'virtualization' process. Let's assume I'm trying to tackle the situation Curtis alluded to: I have an ATM network, and I'm trying to model the connectivity of the routers attached to that network (represented in the graph by nodes which have arcs crossing the circle drawn around the set of nodes which represent the ATM network elements). I have a vast number of options for doing so.
For one, I could model the routers as all being connected to a single giant node in the center, a node which replaces the entire ATM mesh. The problem with this is that if you're trying to put some reasonably correct metrics (or, more generally, attributes) on the arcs that connect each router node to the center node, there's no way to get completely accurate results; the topology is too simplified.
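To see why the metrics can't be made fully accurate, consider what the single-center-node model forces: the advertised cost between any two routers is the sum of their two spoke weights. A small numerical example (the delay values are made up) shows the resulting equations can have no sensible solution:

```python
# Why the single-hub model can't carry accurate metrics: with one
# center node, the advertised router-to-router cost is forced to be
# w(X) + w(Y), the sum of the two spoke weights.  For three routers
# with true pairwise delays d12, d13, d23, the spoke weights are then
# uniquely determined -- and may come out negative, i.e. impossible.

def spoke_weights(d12, d13, d23):
    """Solve w1+w2=d12, w1+w3=d13, w2+w3=d23 for the spoke weights."""
    w1 = (d12 + d13 - d23) / 2
    w2 = d12 - w1
    w3 = d13 - w1
    return w1, w2, w3

# Routers 1 and 2, and 1 and 3, are "close"; 2 and 3 are "far":
print(spoke_weights(1, 1, 10))   # -> (-4.0, 5.0, 5.0): w1 < 0
```

So whenever the true pairwise costs don't happen to decompose into per-router terms, *some* advertised cost has to be wrong - the star topology simply doesn't have enough degrees of freedom.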
For another, you could do a full N^2 model; in this you can accurately model the attributes of each individual router-router "connection" (it's actually a "current path"). However, this has two drawbacks, one obvious, one subtle. The obvious one is that it's N^2. The subtle one has to do with using alternate paths - you get an even bigger combinatorial explosion if you try and start representing alternative paths. (Note also that there's no way to produce this representation using non-virtual abstraction.)
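The obvious drawback is easy to quantify (the factor k for alternate paths is a made-up parameter, included just to show the multiplicative blow-up):

```python
# A full mesh over N attached routers needs N*(N-1)/2 virtual arcs;
# advertise even k alternate paths per router pair and it's k times
# that again.

def mesh_arcs(n, k=1):
    """Virtual arcs needed for a full mesh of n routers, k paths/pair."""
    return k * n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, mesh_arcs(n), mesh_arcs(n, k=3))
```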
Then you could do what Curtis alluded to - something less than full N^2.
This has the limitation that a path through the fabric between routers X
and Y may actually exist - but if that path is not represented in our
"virtualization" of the ATM network, the only way to get from X to Y is
to find some router Q which *does* show a pair of virtual links, one to X
and one to Y, and take an "extra hop" through it.
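A toy illustration of the extra-hop effect, using an invented three-router advertisement in which the X-Y virtual link is simply absent:

```python
# The partial-mesh case: physical connectivity between X and Y exists
# through the fabric, but the advertised virtual links omit the X-Y
# pair, so traffic has to take an extra router hop through Q.
from collections import deque

def shortest_path(graph, src, dst):
    """Plain BFS over the advertised virtual links."""
    prev, seen = {}, {src}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = [node]
            while node != src:
                node = prev[node]
                path.append(node)
            return path[::-1]
        for n in graph[node]:
            if n not in seen:
                seen.add(n)
                prev[n] = node
                q.append(n)
    return None

# Advertised virtual links: X-Q and Q-Y, but no X-Y.
advertised = {'X': {'Q'}, 'Y': {'Q'}, 'Q': {'X', 'Y'}}
print(shortest_path(advertised, 'X', 'Y'))   # -> ['X', 'Q', 'Y']
```

Router Q gets to carry transit traffic it would never see if the virtualization were complete - which is precisely the cost of advertising less than the full mesh.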
You can go on for a while, trying different things, and you soon discover a simple truth: in basically all real networks, there's *no* virtual representation of a part of the topology graph which is both i) simpler than the full representation, and ii) accurately and fully describes the connectivity through that part of the graph.
In other words, the real graph is generally the most compact representation of the connectivity, for path selection purposes.
Now, here's where the going gets tough. Obviously, in a real network,
everyone can't have a full graph of the network. So you *have* to produce
simplified representations - and accept that some paths are going to be
non-optimal. In other words, you have to trade off two kinds of overhead:
routing overhead (i.e. the cost of handing around routing data), and
non-optimal routing (i.e. the cost of traffic taking longer paths than it
otherwise would).
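One crude way to put a number on the second kind of overhead is "stretch" - the ratio of the path cost over the virtual representation to the true path cost over the real graph. A sketch, with an invented three-router example and all-pairs costs computed by Floyd-Warshall:

```python
# Measuring non-optimal routing as "stretch": how much longer the path
# chosen over the virtual representation is than the best real path.
INF = float('inf')

def all_pairs(nodes, arcs):
    """arcs: {(u, v): cost}, undirected.  Returns dist[u][v]."""
    d = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
    for (u, v), c in arcs.items():
        d[u][v] = min(d[u][v], c)
        d[v][u] = min(d[v][u], c)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

routers = ['X', 'Y', 'Q']
# The real fabric offers direct cost-1 paths between every router pair:
real = all_pairs(routers, {('X','Y'): 1, ('X','Q'): 1, ('Y','Q'): 1})
# A partial-mesh advertisement omits the X-Y virtual link:
virt = all_pairs(routers, {('X','Q'): 1, ('Y','Q'): 1})
stretch = max(virt[a][b] / real[a][b]
              for a in routers for b in routers if a != b)
print(stretch)   # -> 2.0: X-Y traffic pays double by detouring via Q
```

The trade-off is then between how many virtual arcs you advertise (routing overhead) and how large this ratio is allowed to grow (non-optimal routing).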
In the optical case, we have an *additional* constraint, which is that even if people *outside* the optical part *did* have a full map, they wouldn't necessarily be able to use it - because finding optical paths is a process that has unusual constraints.
So, again, you're forced to produce a model of the connectivity across
that part of the graph; i.e. select a set of paths, and advertise that
set of paths as a virtual representation of the connectivity of that part
of the network. And, as we've seen, there are fundamental limitations on
how well any such virtual representation can work.
So, where does this leave us, here in the real world, with the real tools at our disposal - tools which unfortunately don't allow us to create arbitrary virtual representations, but only a few selected ones?
Well, the main conclusion is that if you're producing non-optimal results, it's not necessarily because your tools are bad - it's an impossible problem even with the best possible tools.
I will also stick to my earlier conclusion: the problem will be easier to
tackle in a system which uses a uniform routing system at both the local
and higher layers. Whatever information flow you *do* decide to do (i.e.
whatever virtual representation of the local topology you do decide to
propagate out to the higher layer), you will have more flexibility in doing
so if the two are using the same routing system.