UNIX networking in the 90s -- DOS/UNIX Connectivity
 
Bruce H. Hunter 
From about 1970 to 1980, UNIX evolved and grew at Bell
Labs and at 
universities until around 1980 when it finally became
robust enough 
to function in commercial computer environments.  
From 1980 to the present, UNIX continued to grow, responding
to the 
complex, demanding needs of the engineering, scientific
and business 
worlds. Many of these UNIX technologies are interesting and worthy of mention, but perhaps the past 10 years of network developments have been the most significant, because they are so monumental that they have changed not only UNIX but all computing everywhere.
AT&T got the ball rolling early by putting UUCP into UNIX. While a start, UUCP hardly qualifies as a really useful network by today's standards. cu and uucp operated at 300 baud on phone lines, or at whatever speed the I/O card's UART could handle -- 9600 baud at most on a hardwired line. Today, moving data at 300 baud on a switched network would seem about as exciting as watching paint dry, and it wouldn't be much faster.
The real networking revolution came from outside AT&T. The U.S. government developed ARPANET, which gave birth to the Internet. Meanwhile, Xerox, Intel, and DEC developed Ethernet technology. The Internet uses the TCP/IP protocols, and Ethernet-TCP/IP eventually proved to be a robust and commercially successful protocol stack. Ethernet-TCP/IP made its commercial debut on the UNIX scene when the University of California at Berkeley implemented it in BSD Version 4.
Compared to UUCP's 300 baud, Ethernet did its work at a blazing 10,000,000 bits per second, and line speed was only part of the formula; the protocols were equally important. The Berkeley R protocols (rlogin, rcp, rsh) did their work without the network ever being apparent to the user.
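
The R protocols are ordinary library calls away from any C program. Here is a minimal sketch of remote execution the way rsh does it, using the BSD rcmd(3) routine; the host name remhost and the user name bruce are hypothetical, and the program must run as root because rcmd() binds a reserved port:

/* Remote execution via the BSD rcmd(3) call that rsh is built on.
 * Hypothetical host "remhost" and user "bruce"; must run as root. */
#include <stdio.h>
#include <netdb.h>
#include <unistd.h>

int main(void)
{
    char *host = "remhost";              /* hypothetical remote host */
    struct servent *sp = getservbyname("shell", "tcp");
    char buf[1024];
    int n, s;

    if (sp == NULL)
        return 1;
    /* run "uname -a" on the remote machine, as rsh would */
    s = rcmd(&host, sp->s_port, "bruce", "bruce", "uname -a", NULL);
    if (s < 0)
        return 1;
    while ((n = read(s, buf, sizeof buf)) > 0)
        write(1, buf, n);                /* copy remote output */
    close(s);
    return 0;
}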
NFS (the Network File System, developed by Sun Microsystems) was the next major step in the UNIX network revolution. With NFS, any system in the domain could access files as if they were on the local system. YP (now known as NIS) was a natural companion.
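
What "as if they were on the local system" means in practice is that applications need no special code at all. A short sketch, assuming a hypothetical NFS mount point /home/server1:

/* NFS transparency: once /home/server1 is NFS-mounted, this
 * program cannot tell the remote file from a local one.
 * The path is a hypothetical example. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    int n, fd = open("/home/server1/motd", O_RDONLY);

    if (fd < 0)
        return 1;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(1, buf, n);
    close(fd);
    return 0;
}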
Meanwhile, universities continued to contribute important
networking 
research. Columbia University developed Kermit, which
allowed any 
computer with a modem to get files from any other computer.
Although 
at first glance this may seem relatively innocuous from
today's perspective, 
this simple program opened up a new paradigm for the
period: heterogeneous 
network access. Meanwhile, Stanford was giving rise
to a windowing 
system called W, which, under Project Athena at MIT,
was further developed 
as X. As a result, today any UNIX system can have a graphical interface and execute programs on any other UNIX system as if those programs were local.
It's important to recognize that Kermit was an innovative
idea. It 
was, in effect, the first successful heterogeneous networking
tool, 
because the OS made no difference. It set the stage
for what followed 
and for what people would eventually come to expect from networking: remote file access as if it were local, remote execution as if it were local, and full graphics from any machine running any OS to any other machine running any OS.
These revolutionary developments came about in two ways: 1) universities developed them to answer a need; and 2) companies commercialized those that already existed, or created new ones, to profit from that need. The next phase, full heterogeneous interoperability, is coming from private industry this time -- not from giants like IBM, DEC, or Sun, but from companies that were little more than garage operations only a few years back.
A Brief History of UNIX Networking 
Early system administrators didn't get involved with
the network or 
its hardware. Networking was taken care of by datacomm
people, and 
the system staff stuck strictly to system administration.
How times 
have changed. Thanks to explosive network growth, in both number and size, network administration now comprises most of our system administration
work. Not only that, administrators are also expected
to know the 
network hardware intimately. 
Increasing size and the resulting problems with out-of-band
traffic 
led to routers and subnetworking, which in turn created
the market 
for sophisticated, intelligent multi-processor routers, multi-port multi-processor file servers, and highly specialized devices like the Kalpana EtherSwitch. To give you an idea of how fast this networking revolution has been taking place, three years ago Auspex, Cisco, Wellfleet, Proteon, and Kalpana were small or nonexistent companies, but today they produce leading-edge networking technologies.
Paying the Price for Increasing Network Sophistication 
Today subnetworking makes our domains manageable by
increasing the 
number of systems we can have while reducing the traffic
on any one 
piece of the overall domain network. Unfortunately, we pay a high price: increasingly complex network design and maintenance. A file server with six Ethernet ports on six subnets has six system names and six Internet and Ethernet addresses. The network supporting this machine needs at least a 6-port router to deal with the subnets, and every system must have that router added to its route table or it won't even come up. In addition, you must map the wall plates in every office to know which subnetwork each serves, and separate, customized /etc/fstab or /etc/filesystems files must be maintained for each subnetwork. In fact, you can't even casually move a workstation from one office to another unless you know for sure that both offices are on the same subnet. With all this complexity, it's not surprising that network-related administration currently consumes most of a system administrator's time.
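
The rule behind that last caution is simple arithmetic: two hosts can exchange packets directly only if their addresses agree after the subnet mask is applied. A small sketch, with hypothetical addresses and a class-C-style mask:

/* Subnet membership test: (address & netmask) must match.
 * The two addresses and the mask are hypothetical examples. */
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    in_addr_t a    = inet_addr("128.123.4.17");   /* old office  */
    in_addr_t b    = inet_addr("128.123.5.9");    /* new office  */
    in_addr_t mask = inet_addr("255.255.255.0");  /* subnet mask */

    if ((a & mask) == (b & mask))
        printf("same subnet -- the move is safe\n");
    else
        printf("different subnets -- fix routing and fstabs first\n");
    return 0;
}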
As workstations redouble their power yearly, the load on the network increases proportionally, and administrators have no choice but to respond. Today we find ourselves looking at Etherswitches to route traffic through bottlenecks and to surpass single-wire capacity by transmitting data in parallel on a large scale. Even watching the network has brought about special hardware, such as the network probes used by monitors like Concord's Trakker.
As for future trends, a few short years ago, FDDI's
promise of 100,000,000 
bits per second seemed staggering, but today that is
being challenged 
by CDDI, 100-megabit transmission over copper twisted-pair wire. However,
since both FDDI and CDDI are token ring technologies,
they test out 
at only 2.5 times faster than traditional Ethernet in
actual use, 
and no one seems all that impressed. In fact, many technical
facilities 
have been running at 1 billion bits per second for a
long time, and 
I've heard of specs for 10 billion bits-per-second networks.
Where 
networking will be in 10 years no one knows for sure,
but, if nothing 
else, I can guarantee that its speed will be dazzlingly
fast.  
UNIX/DOS Interoperability 
The influence of networking is so pervasive that it is introducing problems that would never have fallen within the domain of UNIX system administration a few years ago. For example, in the past few years the word "interoperability" has had to be redefined in the UNIX context because of ongoing networking developments. At first it meant that one UNIX system could talk to another, but then the X Window System redefined the term to mean that any UNIX system could be either server or client for any other UNIX system. Lately it has been redefined to cross operating system lines, and UNIX/DOS connectivity is the hottest network-related topic at many UNIX sites today.
Many UNIX administrators looked down on DOS and managed
to ignore 
it for as long as possible. This standoffish approach
to DOS wasn't 
surprising, considering that a few years ago administrators
were scrambling 
to separate noisy DOS traffic from their heavily taxed
UNIX networks. 
NetBIOS is fine in its own world, but when it is turned
loose on a 
UNIX network, the net slows to a crawl because it is
flooded with 
error messages and illegal broadcasts. In order to maximize
bandwidth 
and to minimize errors in both worlds, most UNIX sites
deliberately 
separated their Ethernet TCP/IP traffic from noisy DOS
traffic by 
bridging with adaptive bridges or by subnetworking and
using routers 
to span the gap across the networks.  
Indeed, there was a time when people speculated about
who would win 
the OS war, UNIX or DOS. But today it's becoming all
too apparent 
that DOS will never displace UNIX, and vice versa. The
war is over, 
and interoperability is the current goal.  
Thus, many administrators are currently in the curious
position of 
having to do an abrupt about-face and confront DOS on
friendly terms 
by trying to achieve UNIX/DOS connectivity. One of the
first things 
we must do is learn how to administer the interface
to the DOS world, 
which so far is proving to be rather difficult from
the UNIX side. 
Most of the advances so far have come from the DOS side, but since UNIX/DOS connectivity has become a priority in the computer industry, it's only a matter of time before interoperability becomes standard.
Network Development on DOS and UNIX 
UNIX and DOS took different commercial paths. UNIX developed and spread slowly on machines that stressed technical quality, power, and speed, and networking was deliberately developed and built into them. Independent DOS PCs, on the other hand, propagated explosively on cheap, tiny machines for which user-friendliness was always the top priority. Speed, technical quality, and power would come much later, and networking was never a selling point until recently. However, the market potential of having millions of PCs working in concert was too large to ignore, and because no networking facilities were built into PCs, Novell and Banyan made their names by essentially doing for DOS what NFS did for UNIX: they created servers to do the networking for them. All PC users needed to do was load up the appropriate cards, drivers, and software, and the server would do the rest. The hardware manufacturers who made the cards (3Com Corporation, Ungermann-Bass, Racal Interlan Inc., and others) also made a fortune, and all of these developments resulted in increased demand and competition, which, ironically, ultimately benefited the UNIX world: Ethernet cards went from $800 to as low as $85.
The Technical Nitty-Gritty 
How did UNIX networking succeed so well? UNIX and the
Internet have 
a no-lose protocol set. In fact, Ethernet-TCP/IP has
proven to be 
the most robust and troublefree protocol set in existence
to date. 
Another major reason for UNIX's success (and Ethernet's) is that every UNIX system is self-contained in terms of networking. In fact, UNIX networking has been so carefully developed over the years that today it is no exaggeration to say that every major version of UNIX and all its variants are fully equipped with networking -- not just TCP/IP (which frequently means the R protocols, telnet, and FTP) but NFS and NIS as well (ironic at a time when some of these same UNIX versions no longer come with what used to be considered standard UNIX equipment: a C compiler). It just goes to show how commercially important networking has become.
The technical network developments on UNIX systems are
marvelous and 
fascinatingly thorough. Not only are most UNIX systems ready to do networking as an ordinary node, they can do routing as well. That is, any network-ready UNIX system can be an ordinary node, an NIS (YP) master or slave, a print server, or an NFS server -- it's a function of how many Ethernet cards the machine has and what software is enabled. Add another card or two and the machine becomes a router -- all of this for the price of a UNIX binary license.
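
In fact, a UNIX system discovers its own networking potential with one classic system call. The sketch below uses the old SIOCGIFCONF ioctl to list every configured network interface; a machine that reports two or more is a candidate router:

/* List configured network interfaces with SIOCGIFCONF.
 * Two or more interfaces plus routing software make a router. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <net/if.h>

int main(void)
{
    struct ifreq  ifr[16];               /* room for 16 interfaces */
    struct ifconf ifc;
    int i, n, s = socket(AF_INET, SOCK_DGRAM, 0);

    if (s < 0)
        return 1;
    ifc.ifc_len = sizeof ifr;
    ifc.ifc_req = ifr;
    if (ioctl(s, SIOCGIFCONF, &ifc) < 0)
        return 1;
    n = ifc.ifc_len / sizeof(struct ifreq);
    for (i = 0; i < n; i++)
        printf("interface: %s\n", ifr[i].ifr_name);
    close(s);
    return 0;
}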
The key to the satisfying thoroughness of these developments
is that 
UNIX is an open system. 
What a contrast to DOS. DOS does not automatically network as sold, but since it is ubiquitous, DOS networking developed because of its tremendous market potential. The networking products don't act like UNIX built-ins; instead, separate systems act as file and network servers, and the software is installed separately on the DOS system. For example, when you boot a DOS system with Novell installed, you are asked whether you want the Novell product active. If not, you have DOS; if so, you have Novell.
The protocol stacks are very different indeed. UNIX
uses the classic 
Ethernet-TCP/IP stack in the lowest four levels of the
OSI model. 
 
TCP UDP ICMP ...             transport
IP                           internet
Ethernet                     link
Ethernet (twp, coax, fiber)  physical
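
A UNIX program reaches this stack through the sockets interface: asking for SOCK_STREAM selects TCP at the transport layer, and IP and the Ethernet driver beneath it stay invisible to the application. A minimal sketch; the address 128.123.4.17 and the echo port are just examples:

/* Minimal TCP client over the Ethernet-TCP/IP stack.
 * The target address 128.123.4.17 is a hypothetical example. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct sockaddr_in sin;
    int s = socket(AF_INET, SOCK_STREAM, 0);          /* TCP */

    if (s < 0)
        return 1;
    memset(&sin, 0, sizeof sin);
    sin.sin_family      = AF_INET;
    sin.sin_port        = htons(7);                   /* echo service */
    sin.sin_addr.s_addr = inet_addr("128.123.4.17");  /* example host */

    if (connect(s, (struct sockaddr *)&sin, sizeof sin) < 0)
        return 1;
    write(s, "hello\n", 6);
    close(s);
    return 0;
}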
 
Novell and Banyan are substantially different, both from UNIX and from each other. Here is the more familiar Novell protocol stack:
 
NetBIOS                      session
XNS:
  SPX RIP PEP                transport
  IPX                        network
Ethernet, token ring, ...    link
twp, coax, fiber             physical
 
Banyan's virtual network system (Vines for short) is disconcertingly different, showing the troublesome lack of consistency among DOS network products today:
 
NetBIOS                      session
TCP XNS                      transport
IP                           network
Ethernet, Token Ring, ...    link
twp, coax, fiber             physical
 
It resembles Ethernet-TCP/IP, doesn't it? Only the Xerox protocol XNS sharing the transport layer and IBM's NetBIOS just above it reveal how different it is. It is similar enough at the physical and link layers to share the wire with UNIX, but not without its share of problems -- problems severe enough that the traffic must be isolated by a bridge or router to prevent chaos on the UNIX side.
 
Of course, achieving true UNIX/DOS connectivity is not
exactly going 
to be a walk in the park. Here are some of the catches.
NetBIOS is 
a software specification and is implemented differently
by individual 
vendors. XNS is implemented at different levels by Banyan
as opposed 
to Novell, and it is not used by UNIX at all. IPX (Internetwork
Packet 
Exchange) is very different from IP, and on it goes. At this level in UNIX (the IP layer), networks are known by unique addresses such as 128.123.0.0. If the address was obtained legitimately, it is truly unique, the only one in existence in the world. But Novell network numbers are nowhere near as comprehensive as the 4-byte/32-bit Internet address, they are hardly unique (with numbers like 1, 2, 3 ...), and they are not compatible with the IP part of TCP/IP. You can see the problems.
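
To make the contrast concrete, here is the 4-byte Internet address from the text taken apart in C; a Novell network number, by comparison, is just a locally chosen integer:

/* The 32-bit Internet address 128.123.0.0 (the example network
 * from the text), decomposed into its four bytes. */
#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    unsigned long net = ntohl(inet_addr("128.123.0.0"));

    printf("32-bit value: 0x%08lx\n", net);
    printf("bytes: %lu.%lu.%lu.%lu\n",
           (net >> 24) & 0xff, (net >> 16) & 0xff,
           (net >>  8) & 0xff,  net        & 0xff);
    /* A Novell network number is just a small integer (1, 2, 3 ...)
     * chosen locally -- unique only by accident. */
    return 0;
}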
Joining DOS Networks to UNIX 
With all of these protocol stack differences, how are
UNIX system 
and network administrators going to join DOS networks
to UNIX networks? 
Standalone DOS is no problem. SCO, for one, has DOS running as a process on UNIX. There are even non-x86 emulations like IBM's 86sim for AIX and SoftPC for Sun SPARC systems. So a simple solution would seem to be adding another Ethernet card to your 486 and running one of these packages.
Unfortunately, there is a catch: DOS over UNIX is virtual
and can't 
see a hardware device like an Ethernet card. In other
words, DOS cannot 
reach UNIX's hardware. The emulation software will rely
on UNIX to 
get to the hardware through its own drivers, and UNIX
will go for 
the card on the UNIX network every time.  
The bulk of the solution will have to come from the PC side of the line. A multi-protocol interface will be required, and fortunately such interfaces exist. ODI (Novell's Open Data-Link Interface) is the popular choice at the moment. A commercial product that provides this multi-protocol connectivity is FTP Software's PC/TCP. Novell seems better equipped (at the moment) to handle multiple protocols and UNIX connectivity, although Banyan shows promise. If you are married to Banyan, you may have to look at Novell as an intermediate solution, going from Banyan to Novell to UNIX with Novell's Portable NetWare.
Even if these solutions get you to DOS, how do the users
get any of 
their applications to run? By using X Windows. X Windows
on DOS? Why 
not? The OS is not a part of the X Window specification. The workstation hosts the display, and the application can run anywhere on the network, even on a DOS machine. That's what X is all about.
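
The point is easy to demonstrate: an X application simply opens a named display, and that display can live on any machine running an X server -- a UNIX workstation or a DOS box alike. A sketch (link with -lX11); the display name dosbox:0 is hypothetical:

/* X network transparency: the application draws on whatever
 * machine the named display lives on.  "dosbox:0" is a
 * hypothetical DOS machine running an X server. */
#include <stdio.h>
#include <unistd.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay("dosbox:0");
    Window   win;

    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                              10, 10, 200, 100, 1,
                              BlackPixel(dpy, DefaultScreen(dpy)),
                              WhitePixel(dpy, DefaultScreen(dpy)));
    XMapWindow(dpy, win);
    XFlush(dpy);
    sleep(5);                 /* leave the window up briefly */
    XCloseDisplay(dpy);
    return 0;
}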
One Possible Software Solution 
Here is one possible UNIX/DOS connectivity solution.
Let's say that 
you have a mid-sized UNIX LAN that needs to reach a Banyan server's disk from UNIX workstations. The PC/TCP software
can handle the 
multi-protocol problem at the lower layers, while the
Quarterdeck 
DESQview/X package can handle the upper-level protocols.
 
Users will do an rlogin to the DOS system and if necessary
do an xhost to clear the security barriers. From there
Microsoft 
Windows on X can do whatever you want to do, including
getting to 
the Banyan server. The code was originally written to
get to Novell, 
and it does so quite well. As an additional plus, the DESQview/X software is relatively easy to install and run, and it will give you good results once a few access permissions and files are taken care of.
DOS Gateways 
There are also gateways that can do the job, for a gateway is a device that does protocol translation at and above the IP layer. As it turns out, Logicraft has been providing a DOS/UNIX gateway solution for a few years now. Its Omni-Ware product gives you DOS on a PC, accessible by UNIX, on the UNIX side of the network. Omni-Ware software is loaded on the workstations, and an Omni-Ware card and appropriate software are added to a PC in a simple, straightforward operation.
Now the workstation is a keyboard and monitor server
to the DOS system 
whenever DOS needs to be run. It runs, as you would
suspect, in its 
own window on X, freely allowing the workstation to
go about all of 
its other UNIX work.  
To get to a DOS network, an Ethernet card is added to
the DOS PC. 
Now it has two: the Omni-Ware card and the add-on Ethernet
card. When 
working in the DOS window (under X of course), you can
now access 
the Banyan or Novell network.  
More Solutions to Come 
Computer trade magazines and papers are full of articles
on UNIX/DOS 
connectivity these days, and there will be many other
solutions to 
read about in future issues. But if you need DOS connectivity
on your 
UNIX site right now, you will probably have to opt for
something similar 
to the hybrid described above. The only current alternative
is a UNIX 
workstation and a PC DOS system on every desk, not exactly
the most 
economical way to go. 
As we move further into the 90s, networking problems
and solutions 
will almost certainly become more complex, not less.
Promises of easy solutions, simple administration through object-oriented code, and easy-to-use graphical interfaces only mask ever more sophisticated
underlying technologies. Now, more than ever, the only
real answer 
to large-scale systems management in the 90s is educated
administrators.  
 
 About the Author
 
Bruce H. Hunter is the co-author, with Karen Hunter,
of UNIX Systems
Advanced Administration and Management Handbook (Macmillan:
1991). 
 
 