Why a Home Network
Getting to the Internet/intranet
Accessing Mail, News, and the Web
Where do you get it?
This is still a learning exercise for me though. These functions have been set up over a period of time. The choices I made may not have been the best at the time, and certainly may not be the best now, so I am very happy to get suggestions about other programs to do these tasks. Also, everyone, please feel free to throw in other advice.
There were some annoyances with this setup. I had to keep rebooting to run different OSs (linux for remote-X, DOS/windows for games, OS/2 for reinstalling). Also, my computer was starting to lack grunt for DOS/Windows games, and I knew at some time I would need to get win95 for compatibility with work and new applications.
Instead of replacing my system which would solve some, but not all of my problems, I decided that I should add a new machine, and network them together. Work kindly supplied a laptop, so I could then devote my home machine to linux, which would become a server. At some time I will also add another machine of my own to this network.
The server would have the modem connected, and every day before I arrived home, it would have dialled my ISP and uploaded and downloaded mail and news for me.
Since the laptop belonged to work, I didn't want to store my own software on it. However, it is the only machine that is really feasible for running the latest games. Therefore, I wanted the linux machine acting as a file server for the laptop. This was the basic reason for starting my home network. There are more functions that I have wanted to add since then.
Now how should I do surfing, if I want to do it from the laptop? In this configuration, I have a modem connected to the linux server so that it can automatically connect to my ISP daily for mail and news. I could reconnect the modem to the laptop, or use a modem in the laptop. Either way, it means connecting the modem manually each time I want to use it. Another option is to use the network to go from the laptop to the server, and to dial out from the server. This is by far the best solution.
This solution has some obstacles though. My home network is set up with its own IP addresses from the private ranges. These are defined in the NET-2-HOWTO (data from RFC 1597) as:
    10.0.0.0    - 10.255.255.255     mask 255.0.0.0       one class A network
    172.16.0.0  - 172.31.255.255     mask 255.255.0.0     16 class B networks
    192.168.0.0 - 192.168.255.255    mask 255.255.255.0   256 class C networks
Typically, when you dial in to an ISP (or an intranet, like at my work), you are allocated a single IP address for that session from the address range of that ISP. All traffic from you must originate from that IP address, and all traffic towards you is sent to that one IP address. But now, the server is making the call and being allocated an IP address, while the host I am actually using is on one of my private IP addresses. How can I communicate with the internet given these conflicting IP addresses? The solution is IP masquerading.
As you may already know, the layout of an IPv4 datagram is shown below. It consists of a header which contains (among other things) a protocol identifier, a source IP address, and a destination IP address.
An IPv4 datagram:

    +--------------------------------+
    | Some header fields             |
    | Source IP address              |
    | Destination IP address         |
    | Possibly more header fields    |
    +--------------------------------+

A TCP segment carried in an IPv4 datagram:

    +--------------------------------+
    | Some header fields             |
    | Source IP address              |
    | Destination IP address         |
    | Possibly more header fields    |
    +----------------+---------------+
    | Source port    | Dest. port    |
    +----------------+---------------+
    | Additional TCP header fields   |
    +--------------------------------+
Let's take a quick example of the use of ports. When you want to initiate a telnet session from your host to a server, your host sends IP datagrams to port 23 of the server's IP address. The datagrams going forward thus have your IP address as the source, the server's IP address as the destination, and 23 as the destination port.
The replies from the server have the source and destination IP addresses reversed, but what about the port numbers? The server can't use port 23 towards your host, because port 23 on your host is for other hosts that want to initiate telnet sessions towards you. Instead, at the start of your telnet session, your host selected a free port number to be used for the session. The datagrams from your host to the server carried this port number as the source, which told the server which port to use for communication back to your host. As far as the remote server is concerned, any source port you send it is acceptable, and it will respond to that port.
Now, we go back to the case of our local network configuration. For convenience, I will refer to the various machines as the server (the destination), the laptop (the source host), and the firewall (for reasons that will become clearer later).
When we initiate the telnet session from the laptop, the telnet application in the laptop will obtain a free port number for this session. A TCP datagram is sent from the laptop towards the server with the source and destination address, the destination port (port 23 on the remote server) and the source port (as selected).
A funny thing happened on the way to the server...
When the datagram reaches the firewall, which is set up for masquerading, the firewall selects a free port of its own and adds it to a table it keeps internally of ports in use. It then replaces the source port number in the datagram with the port number it has selected, changes the source IP address to its own address, and forwards the datagram to the server.
When the server sends replies, it sends them to the IP address and port that it received as the source. Due to the changes made by the firewall, these replies are actually directed to the firewall rather than the original host. This is necessary in any case, because the firewall is the only host on your network that is visible to the internet. When the firewall receives the datagram, it looks up the port number in its table, reads off the proper IP address and port number for the session, modifies the datagram with this information, and forwards it on to the real destination.
You need to be aware of some consequences of this solution. Firstly, the crux of the solution is that the server end uses a specific port number, while the port number at the host end is freely selected; the firewall makes use of this by replacing the port number and IP address. ICMP datagrams (which include the ubiquitous PING) do not have the same format as TCP and UDP datagrams. ICMP datagrams do not contain a port number, so there is no way to map them at the firewall, and these functions are therefore not supported.
Secondly, the firewall must allocate ports for transactions that occur for applications on other machines. The application on the other machine knows when the transaction starts and when it is completed, but how does the firewall know? The start of the transaction is straightforward: it is when the first datagram for the new port is received. The end of the transaction is more difficult. If the firewall understood the protocol (eg ftp), it could determine the end from the data within the transaction, but that would make the masquerading function dependent on every protocol being masqueraded, which is not feasible. The other alternative is to use a timer: after a period without any data transfer on a port, assume the transaction has ended and the port can be released.
One possible problem with the timer mechanism is that some applications may actually use two ports. If the application initially opens both a control port and a data port, transfers a lot of data but does not use the control port again until the end of the application, the firewall may already have released the port it assigned for the control. When the firewall receives further data for the control port at the end of the application, the firewall would treat it as a new transaction and select a new port number. When this is received by the server, the server cannot relate it to the application it has been handling from the laptop. Therefore, the application is not successfully closed.
Thirdly, how do you initiate services in the opposite direction; ie. from the outside world, how do I telnet in through the firewall to the laptop? The firewall looks up its table to find how to map port numbers, so what happens when there is no matching entry? In this case, the firewall cannot determine where the datagram is actually destined, so it is thrown away.
If you need to provide services within your network which are accessible from outside, then you have to set up the table in the firewall to specifically direct the port you want (eg 23, the telnet port) to the specific host you want to reach. If there are multiple hosts you want to telnet in to, you would need to reserve a different port number for each host, and use that specific port number to reach it. Set the firewall as the destination with the selected port, and the firewall will pass the traffic on to your real destination.
Following is some explanation of compiling a kernel for masquerading. I am using kernel 2.0.27, but the procedure is similar for other kernels. For kernels from 2.0.30 on, IP masquerading is no longer a development function, so you no longer need to select the code maturity options.
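As a sketch, these are the networking options I needed to answer yes to under make config for a 2.0.x kernel; option names may vary slightly between kernel versions, so check the help text for your own kernel:

```
CONFIG_EXPERIMENTAL=y      # code maturity option (needed before 2.0.30 only)
CONFIG_FIREWALL=y
CONFIG_IP_FORWARD=y
CONFIG_IP_FIREWALL=y
CONFIG_IP_MASQUERADE=y
```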
The default rules for the firewall are to accept all incoming, outgoing, and forwarding datagrams. The ipfwadm command is used to change the policy, and to configure masquerading.
In my specific case, I use the configuration below:
/sbin/ipfwadm -F -a m -S 192.168.0.0/24 -D 0.0.0.0/0 -W ppp0
This sets up all addresses on my local network 192.168.0.x to be masqueraded through the firewall when forwarding for applications (eg. telnetting to an external server, WWW, ftp).
Thanks to Keith Owens for pointing out that I should include the -W ppp0 parameter also. This ensures that masquerading is only done towards that interface. If I add another ethernet network (or some other network) to my server, there will be no masquerading applied within my own network. It is only when I leave my network to reach the internet that masquerading will be applied.
Here are a couple of other examples of the ipfwadm command:
/sbin/ipfwadm -F -a m -S 192.168.0.3/32 -D 0.0.0.0/0 -W ppp0 -P tcp
This is the same as the earlier command, except that the forwarding/masquerading is to be done only for the single host 192.168.0.3, rather than for all hosts on the network, and it is only done for TCP datagrams.
/sbin/ipfwadm -F -d m -S 192.168.0.3/32 -D 0.0.0.0/0 -W ppp0 -P tcp
This deletes the rule just added.
/sbin/ipfwadm -F -l -e
This prints out the forwarding rules for the firewall. The -e gets it to print out the extended information about the interface. All the example commands I have given specify the rules for forwarding. The ipfwadm command can also be used to set/modify the rules for input and output of the firewall, as well as the rules for forwarding without masquerading.
ipfwadm -O -a deny -S 192.168.0.2/32
The command adds a rule for the firewall output to deny everything from host mindy (192.168.0.2).
The rules are appended to the existing rules for the firewall. The firewall checks are performed in order, and the first check that matches is used. There is a default policy that is applied if there is no match in the specific rules.
There is quite reasonable information available about the ipfwadm command. ipfwadm -h gives a summary of the commands, while man ipfwadm and man ipfw give quite detailed information about the configuration and use of the firewall. There is also good information (even if a little outdated) in the NET-2-HOWTO.
Note that I have not covered this technique as a means of proper firewalling. I have never really set that up, so I have no experience; I am using the firewalling code only as a means to provide masquerading. If you are after proper security in your network, the firewall should not also host the other servers that I have installed. Also, you would need to change the default policy to deny, then specifically enable only the required cases.
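For example, the default forwarding policy could be changed with a command like the following (a sketch only; I have not run my network this way, and after this command only explicitly added rules will be forwarded):

```
/sbin/ipfwadm -F -p deny
```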
For remote X, I have been using a function called dxpc for a long time. This provides compression of the X protocol of about 30%, which means that X applications load and run faster. dxpc works by redirecting the X protocol data through a proxy on each end of the link. The proxy examines the X data to be transferred and applies compression to it. The compression should be better than generic compression techniques, because dxpc knows the X protocol and its compression routines are specific to it.
For the two proxies to communicate, a port has to be chosen for communication. All the data is then sent between the proxies using this specific port.
To set up X to run through the firewall, you simply have to configure the firewall to forward datagrams for the selected port to the selected machine (the laptop). I use the configuration below:
/sbin/ipfwadm -F -a m -S 192.168.0.3/32 8008 -D 0.0.0.0/0 8008 -W ppp0 -P tcp -b
This command again sets up masquerading at the firewall. In this case, the only source address that is accepted is 192.168.0.3 (because of the 32 bit mask). Again any destination is accepted. In this case, it is only TCP that is masqueraded, and the transfer is enabled both ways. After this, any TCP datagrams sent to port 8008 of the firewall from outside are transferred to port 8008 of the host 192.168.0.3. Any TCP datagrams sent from port 8008 of the host 192.168.0.3 are sent to port 8008 of the destination.
Then, I run a command on the remote X workstation to start up dxpc, giving it the name allocated to the firewall by the terminal server for this call, and set the display environment variable.

dxpc -f -p 8008 <firewall-name>
Finally, I run dxpc on the local machine with a small script. This script simply runs the command dxpc -f -p 8008 $1. The parameters for dxpc are:
-p port determines which port is to be used for communication
-f indicates to fork a process and run in the background
The $1 is for the name or IP address of the X client.
This initiates the dxpc communication between both ends via the selected port through the firewall. At the X-server end, you specify the host that will run the client proxy; dxpc uses the presence of this parameter to determine that this is the server end being started. dxpc is started on the client machine first, where it sits waiting for connections. When it is started on the server, dxpc goes out to the client and establishes the connection between the proxies.
Note: If you read the documentation on dxpc, you will find a parameter that I had previously arranged for the author to include which allowed the X-server end to be started first. I found this was quite useful for my dial up script where I start my local end, then telnet to the remote machine (the client). When I log in to the client machine, my login script automatically checked if I was logged in directly, over a network, or over the terminal server. If the latter, it would invoke dxpc. With the firewall and masquerading though, for some reason this has never worked, so I have been starting it from the client end first.
I need 3 components to do this. The central component is the mail handler; I have installed smail on my system. The configuration script from the package manager offered a choice of different setups, one of which suited my situation: a host periodically connected to the internet. This mail handler is a daemon that listens for connections on the SMTP port (port 25).
On an internet connected server, mail would normally be sent to the server on that port from the source. In my case, I have not asked my ISP to send mail to me, so I need some other way to get the mail to my mail daemon. For this, I have installed fetchmail. This provides a command which sends requests to a remote mail server to fetch my mail. The mail that is fetched is then sent to my mail handler on port 25, exactly as if it had been sent from another server on the internet.
Finally, I need a mail client that I use interactively to read the mail from the local mail server. Any mail client that you use towards your ISP can be used for this.
I have installed a cron job which once a day initiates a connection
to my ISP. When the connection is up, fetchmail is run to get the
mail from the server on the ISP, download it, and send it to my local server.
fetchmail -p pop3 -u bcw pop.alphalink.com.au
This command fetches mail from the remote host pop.alphalink.com.au. It uses the POP3 protocol, and logs in as user bcw on the remote system. The password is read from a .fetchmailrc file in my home directory. When I get home, I start up the mail reader on my laptop, which is directed towards my mail server, and read my mail.
Note that the file .fetchmailrc must NOT have group or world access, since it contains your password.
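As a sketch, a minimal .fetchmailrc matching the command above might look like this (the password is of course a placeholder):

```
# ~/.fetchmailrc -- must be mode 600 (chmod 600 ~/.fetchmailrc)
poll pop.alphalink.com.au proto pop3 user bcw pass "your-password-here"
```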
Of course, I would also like to be able to send outgoing mail when I
am not on line to my ISP too. When I send mail from my mail reader program,
it goes to smail, which sees that I am not on line currently, and queues
the mail in my system. The cron job that downloads my mail has another
step that flushes this queue, uploading the queued mail to the mail server of my ISP, from where it is sent on further into the internet.
If you aren't happy with the turnaround time for receiving and sending replies to mail, it is a fairly simple matter to either manually initiate an upload when you want, or alternatively, have the system check if there is queued mail at some time in the day or night, and automatically log in to the remote system to send the queued mail.
For the central news server, I am using cnews. At the time I was installing it, the combination of inn and nntpd appeared too difficult to configure, whereas cnews was more straightforward. Still, there were some problems with configuring cnews: there were no scripts to control the installation, so it was all done by reading the manuals and editing the files. I had to modify the files as below:
There are many files within cnews that you could modify, but I had it working without changing too many, and that was good enough for me. The first such file was /etc/news/nntp_access. This file defines what hosts and domains are allowed to access the newsgroups, and what access they are allowed. You can see that I had some difficulty getting this to work, since there are entries which are obviously all trying to do the same thing; the ones that I expected to work didn't, so I kept trying other combinations.
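If I recall the format correctly, each line gives a host (or network prefix), the read access allowed, and the post access allowed; the entries below are an illustration only, not my actual working file:

```
# /etc/news/nntp_access (sketch): host, read access, post access
default         no      no
localhost       read    post
192.168.0       read    post
```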
The file /etc/news/sys has a lot of example lines for another system. In this file, only the ME line does anything, and it defines the groups I am willing to receive.
The final file is /var/lib/news/active. This file defines the newsgroups that are supported within the system when you show all groups under your newsreader. You can see in this file that the news program keeps track of the current news item number of the local system within this file.
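Each line of the active file gives the group name, the highest and lowest local article numbers, and a flag (y means posting is allowed). For example, an entry looks something like this (the group and numbers are illustrative):

```
comp.os.linux.misc 0000012345 0000000001 y
```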
When I was installing cnews, there was a problem that cnews required the directory /usr/tmp to exist, and it didn't. News failed to be served until this was rectified. The solution was to create this directory as a link to /var/tmp.
cnews handles the server part, and includes its own version of the nntp daemon for receiving news from other internet hosts. Since I have not made arrangements with my ISP for a news feed, I have installed suck. Suck connects to the ISP and downloads the new news for the groups specified. It then sends this news to the nntp daemon in exactly the same way as an internet host would (port 119). My script, initiated daily by cron, that dials into my ISP also invokes the script sucklinux.
The script sucklinux runs the suck command. Suck reads the data file sucknewsrc to determine what groups are to be read, and what news has already been read in each group. sucklinux copies the data file suck.linux from the last retrieval into the file sucknewsrc. It then runs suck with a parameter indicating the news server to fetch the news from, outputting the data to a file. The script also checks whether a previous suck has completed, and keeps a backup copy of the suck data file; if a suck operation is not completed, suck will normally continue the operation from that point the next time it is run. At the end, the suck.linux file is updated from the result file suck.newrc.
Finally, the script postlinux sends the received news towards the news server using the command lpost. After the news is posted, the temporary file containing the news is deleted.
The configuration file suck.linux for suck defines the news groups to be read, and also contains information for suck about the last news item read from the server. If you set the last news item to 0, it will read all the available news in that group. If you set it to -1, that group is not read.
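A sketch of what the entries look like, one group per line followed by the last article number read (the groups and numbers here are examples):

```
comp.os.linux.announce 12345
comp.os.linux.misc 0
alt.unwanted.group -1
```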
Finally, I need to upload any news items I write to the internet. The
daily cron job has an additional line to run a script rpost.alphalink
which uses a component of the suck package to send the queued news to the
remote news server. It takes the parameters:
-b filename to identify a batch file to be uploaded
-d indicating to delete the batch file when successfully completed
-p directory indicating a partial directory for the file
-f filter indicating a filter to do some modifications of the data prior to uploading
I have installed squid to provide good performance. squid is a www cache: it saves copies of downloaded information, so if I return to the same site a little later (or someone else on my network goes there), they get good performance, and external traffic is minimised. This is not very important for a single user system, since www data is often cached by the browser, but it becomes useful with multiple users or multiple machines.
squid can be configured to go through parent and peer caches if you have access to any, before trying to access the site directly. In most cases, you will find that ISPs have proxies for web access, and do not allow direct access to the internet. This is to save on costs for data transfer. squid acts as your own local proxy for web access, and can be setup to go via these remote proxies, even multiple ones. In my case, I set up squid to go to the web proxy at my work and my ISP for anything off my local network. The configuration is fairly straightforward in file /etc/squid.conf.
There are many parameters that can be set in this configuration file,
but they are all reasonably well documented in the file. The ones I have changed are:
http_port sets the port address at which the squid proxy listens for requests
cache_host sets the parent and peer proxies to go to
inside_firewall sets the domain within which squid will not go to parents and peers.
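As a sketch, the corresponding lines in /etc/squid.conf might look like this for the squid of that era; the host names, ports, and domain here are examples rather than my real configuration:

```
http_port 3128
cache_host proxy.alphalink.com.au parent 8080 7 no-query
cache_host proxy.mywork.com.au    parent 8080 7 no-query
inside_firewall home.network
```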
One difficulty is that if you start squid when you boot the server, you probably won't have an internet connection at that time. In that case, squid will not be able to contact the parent servers, and will stop trying to use them. When I dial in to my ISP or work, I need to restart squid (or alternatively, don't start it until I dial up an ISP).
Of course, if you have a small network with multiple users, or if you
are dabbling with HTML or JAVA, you would probably like to have access
to your own web server. Then you can test out your home page before you
upload it for the world to see.
I have installed apache recently as the www server on my home network. I haven't used it very much, so I won't go into detail about it. All I will say is that it appeared to be straightforward to install and get running, with minimal changes to the configuration file. Of course, I can't comment on whether the basic configuration is a good one.
For SMB (referred to as samba), the configuration file /etc/smb.conf
is read for configuration data. I have only used a simple configuration
here. The file simply lists each share name within square brackets, followed by entries such as:
path is the location of the file system to be served
public indicates whether the access is generally open or restricted
writable indicates whether it is read-only or not
There are many more things that can be configured under SMB. I have basically set up access to my entire server via SMB. In a network with more users, access should be more restricted.
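As a sketch, a share section in /etc/smb.conf looks like this (the share name and path are examples, not my actual shares):

```
[public]
   comment = Shared area on the server
   path = /home/public
   public = yes
   writable = yes
```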
For NFS to serve files to linux, the configuration is controlled by the file /etc/exports. This file contains lines with the directory which is to be made available to the remote machines, and the clients that are to be provided access. In my case, I have provided access only to the 3 machines I will have as a standard part of my network.
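A sketch of an /etc/exports along these lines; the directories are examples, while the host names mindy and laptop are machines mentioned elsewhere in this article:

```
/home     laptop(rw) mindy(rw)
/usr/src  laptop(ro)
```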
DNS is a server that can exist on a network which hosts can use to translate
from a name to an IP address, and vice versa. If you specify that you want
to reach a server such as www.debian.org, and your host doesn't know the
IP address for that server, it can send a request to the DNS with the name
of the server you want, and the DNS will reply with the IP address.
On a small network, DNS is really unnecessary, because you can use the /etc/hosts file on each machine to resolve names to addresses. This mechanism is fine if you only have a small number of hosts, but becomes unwieldy as the number of hosts you have to reach increases: if you add a new host (or need to reach another host), the /etc/hosts file on every machine that needs to access it must be updated.
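For example, an /etc/hosts file for a network like mine might look like this; the addresses for mindy and the laptop appear earlier in this article, while the server's address is an assumption on my part:

```
127.0.0.1     localhost
192.168.0.1   server
192.168.0.2   mindy
192.168.0.3   laptop
```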
I have installed DNS only for fun, to see what was involved. The configuration files are quite involved; you can read the NET-2-HOWTO for more information. I have worked through it, along with a book I have on unix system administration, and between the two I have successfully configured DNS on my system. However, the configuration files are quite obtuse, and I feel it would be better if someone with a good understanding of them explained DNS in more detail, so I have not provided any further information here. If you would like to see the configuration files I have used, you can contact me directly.
I guess you can protect your system from this sort of file expansion using quotas too. However, there can be other advantages to using partitions instead of one large partition with quotas. If a file system gets corrupted, you may only lose the contents of one partition, which may only be part of your system. Also, it is possible to protect your system a bit more by only having some partitions mounted. For example, the server runs 24 hours a day, and has no UPS. After a power failure, the system must restart and fsck the disks. If there are several smaller partitions, and only some of them are mounted when the power failure occurs, then only those are in danger.
There are programs around that allow you to have filesystems not mounted, but as soon as you attempt to read or write to the file system, the program will intercept the request, and delay it while it mounts the drive. When the drive has been mounted, the request is performed.
I have used two programs to do this. Initially I used a program called automountd, which was very easy to install and configure, which is why I went for it. At the time, I also looked at the amd program, which has a lot of information in info files, and was totally overwhelmed; overwhelmed to the point where I gave up for the time being, and used automountd.
automountd is controlled by the configuration file /etc/automountd.conf.
This file contains lines with:
device to be automatically mounted
mountpoint to mount the device
Since then, I have gone back and looked at amd again. There is some simple information about amd in the NET-2-HOWTO which was helpful, and also some information from the amd package developer for debian which allowed me to get amd running (although not without some swearing and cursing and lockups).
Since automountd was working, why did I still want amd? There were two reasons. Firstly, I refuse to be beaten by too much documentation or complexity. Secondly, amd is better behaved. When amd mounts partitions, they are added to the normal mount tables, and are shown by the mount command. In contrast, automountd maintains its own mount tables separate from the rest of the tables in the kernel. This means that, if for some reason you kill the automountd program, you are left with some mounted filesystems that you can't unmount. If you kill amd, on the other hand, you can identify and manually unmount any filesystems that amd had mounted before being killed.
I have two configuration files. /etc/amd/amd.master
is the master file, and indicates:
-a directory is where amd does its automounts
-r indicates that amd is to take over any already mounted filesystems that amd would normally handle when it is started
directory for which accesses are to be processed by amd
file which contains the configuration data for processing directives for that directory (eg /etc/amd/amd.root)
This file indicates:
name of a directory/file system under the automounted top level to be handled
type of filesystem (ufs is just a unix file system; there are other types too)
dev is the device for the file system
options and type are further options. For linux, the type should indicate the file system variant, such as msdos, vfat, and even iso9660. Notice that I am automounting the CD-rom drive.
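A sketch of entries in /etc/amd/amd.root, based on my understanding of the map format described above; the names, devices, and options here are examples only:

```
/defaults   opts:=rw
home        type:=ufs;dev:=/dev/hda3
cdrom       type:=ufs;dev:=/dev/hdc;opts:=ro,type=iso9660
```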
There is a lot of information on amd in info files, but this was part of the problem. There appeared to be too many pages of information which you had to tie in together to understand configuring amd, and the information was written more as reference, than an example document.
Both of these capabilities are provided with a program called mgetty.
This program is a replacement for the fgetty program that manages terminals
on the system. When configured and connected to a serial line, the mgetty
program answers the incoming calls automatically, and examines the incoming
data to differentiate incoming fax, and data calls, and take appropriate
action for each. If you have a voice modem, it can also distinguish voice
calls and can do things like act as a voice mail system.
In the configuration file /etc/mgetty/mgetty.config, I did not need to make many changes.
I had to set the speed for the modem port to 38400, and to stop the program from switching back to speed 19200 automatically when receiving a fax. My modem is fixed to a constant speed towards the PC (actually 115200), and uses this speed too when receiving faxes.
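As a sketch, the relevant lines in /etc/mgetty/mgetty.config might look like this; switchbd is, as I understand it, the option controlling the speed switch for incoming faxes:

```
# fixed DTE speed towards the modem
speed 38400
# do not switch down to 19200 when a fax comes in
switchbd 38400
```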
The real issue is how to handle a fax when one is received. The mgetty program sends you an email, so you know when one has arrived, but you still need to be able to look at it. You should have a look at the documentation that comes with mgetty for a number of alternatives to choose what one suits you best.
At work, they were planning to introduce a function called DHCP (Dynamic Host Configuration Protocol). This allows a machine, when connected to a network, to be allocated an IP address, instead of having a fixed address. It can also provide information to the machine about what address to use for the local DNS and gateway. This seemed like a solution to my problem. If I run a DHCP client on my laptop, and a DHCP server on each network, then I have the same configuration for both home and work, since my laptop no longer has a fixed IP address on any network. It is only when I connect the laptop to either network that the IP address and other data are provided, and the address is released when the laptop is removed from the network.
DHCP leases out IP addresses to hosts on the network. Periodically, the host has to renew the lease from the server; otherwise, after a period of time, the server will revoke the lease. I have installed dhcpd to provide the DHCP daemon on the server. The DHCP daemon waits for hosts to send DHCP messages requesting an IP address. When one is received, the DHCP server looks in its configuration file and its existing lease file, and determines an address to allocate to the host. The host is informed of the address, and the DHCP daemon records the leased address in its file of leases. There is other information the server also reads from its configuration and supplies to the host; in fact, the server must be able to supply all the networking information for the host.
This makes it very easy to add additional PCs to my home network, since I can just plug in any PC and it will automatically be assigned an address, and configured with the DNS server and gateway addresses.
Dhcpd is configured in the file /etc/dhcpd.conf. This file contains a description of the subnet, and defines the configuration (e.g. DNS server, domain name, subnet mask) that is to be sent to any client that registers. There is also a section for a specific host, which ensures that my laptop is always given the same IP address, one that is named on my network. This ensures that I can run functions such as remote X through the firewall from that machine. This host-specific part uses the unique NIC address to identify the host and allocate it the specified data.
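A minimal /etc/dhcpd.conf along these lines might look like the sketch below. The addresses, domain name, and hardware address are examples only; substitute your own.

```
# Example dhcpd.conf sketch -- all values are illustrative
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.50 192.168.1.100;           # pool for casual clients
    option routers 192.168.1.1;                 # the linux server/gateway
    option domain-name-servers 192.168.1.1;     # local DNS server
    option domain-name "home.lan";
}

host laptop {
    hardware ethernet 00:60:97:aa:bb:cc;        # NIC (MAC) address of the laptop
    fixed-address 192.168.1.10;                 # always allocate this address
}
```

The `host` block is the host-specific part mentioned above: the NIC address selects the entry, and `fixed-address` ties the laptop to one IP address.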
Of course, something must be done for the client hosts. When a host is booted, it must contact a server to get the networking information. Under windows95, you simply remove the fixed IP address, netmask, and gateway information from the TCP/IP properties. Under linux, you need to load the dhcpcd package, which provides the client daemon. This contains a script which is executed at bootup to configure the network.
Remember from way back that I want to do remote X through the firewall. To configure this, I had to set up where connections to that port would be forwarded through the firewall. I have now added a complication, since the IP address is allocated by the server, not defined on the machine. I solve this by using options in DHCP to tie an IP address to a MAC (Media Access Control) address. This way, I can ensure that the laptop is always assigned the same IP address, as long as it is using the same network card.
I found a problem with the fact that I was booting the PC into both windows95 and linux. Initially I booted into linux, and was given the correct IP address. Then, I rebooted into windows95, and also had the same IP address. Great, everything works. However, the next time I booted into linux, I did not get the expected IP address.
The problem is that windows95 supplies a 'UID' when requesting an IP address. This UID is an additional client identifier, sent along with the NIC address, intended to ensure that addresses are not mistakenly reused. In my opinion this should be unnecessary, since the network interface cards should have unique addresses anyway. So, the dhcpd program registered the network card address and UID of the windows95 machine, and offered it the IP address I wanted it to have. When I then rebooted the laptop into linux, and it requested an IP address from dhcpd, it only included the network interface address, and not the UID. The dhcpd looked at the NIC address, but also noted that the IP address defined for that NIC was allocated to a machine with a UID. Since no UID had been supplied, this address was not allocated, and instead dhcpd selected another free address. This was not what I wanted.
To get around this problem, I downloaded the source for dhcpd, and made a small patch to the dhcpd.c program to ignore the UID data if received. On my network (and probably on most people's networks), the NIC address is sufficient to uniquely identify a host, so I only want to register the NIC address. After I compiled and installed the new dhcpd program, the laptop was always allocated the same IP address, irrespective of which OS it was running.
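The effect of the patch can be illustrated with a small sketch (again, not the actual dhcpd source): a stored lease that carries the windows95 UID will not match a later request from linux, which sends no UID, unless the UID comparison is skipped.

```python
def lease_matches(lease, mac, uid=None, ignore_uid=False):
    """Does an existing lease belong to the requesting host?"""
    if lease["mac"] != mac:
        return False                    # different network card entirely
    if ignore_uid:
        return True                     # patched behaviour: NIC address is enough
    return lease.get("uid") == uid      # stock behaviour: UIDs must also agree

# Lease recorded while the laptop was running windows95 (example values)
lease = {"mac": "00:60:97:aa:bb:cc", "uid": "win95-client-id", "ip": "192.168.1.10"}

# Later request from linux: same card, but no UID supplied
print(lease_matches(lease, "00:60:97:aa:bb:cc"))                   # False -> new address picked
print(lease_matches(lease, "00:60:97:aa:bb:cc", ignore_uid=True))  # True -> same address kept
```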
I had one other problem with DHCP. The laptop has a PCMCIA ethernet card, so I had the PCMCIA package installed. DHCP did allocate the IP address and provide the gateway and DNS addresses as it was supposed to, and the script that came in the PCMCIA package to handle DHCP stores this information in a file in the directory /etc/dhcpc. The problem was that the file /etc/resolv.conf, which holds the IP address of the local DNS server for name resolution, was not being updated automatically by the script, so names were not being resolved. I had to change the script so that it updates /etc/resolv.conf automatically with the DNS server information provided by the DHCP server, and name resolution now works as soon as the laptop is connected to the network.
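For reference, the result of the modified script is a /etc/resolv.conf along these lines (the domain name and nameserver address are example values; the real ones come from the DHCP server):

```
# /etc/resolv.conf -- written by the modified PCMCIA dhcp script
search home.lan
nameserver 192.168.1.1
```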
Programs are probably all stored on sunsite. For debian, the source for each package is indicated below, as well as a reference to any useful information sources if known (besides man pages and READMEs).
|Program||Source||Information|
|ipfwadm||Part of kernel source||NET-2-HOWTO, IP-masquerade mini HOWTO|
|nfs||standard part of kernel||NET-2-HOWTO|
|automountd||To be Confirmed||-|