Ignoring questions of bandwidth, the Internet seems to be running fairly well using Internet Protocol version 4 (IPv4). Systems can connect, and packets route from one side of the world to the other over a variety of hardware layers. So why introduce a new protocol?
IPv4 is a 20-year-old design, and nobody anticipated the growth of the Internet. The current and projected size of the Internet has exposed some design problems with IPv4. Most are minor, but two are major; if the major problems are not handled, we can expect "the death of the Internet".
Dentists want a few computers and a printer or two for internal use, with little or no connection to the outside world. They want to buy machines with network cards, run a short length of cable, plug the machines in and go. They do not want to set up Domain Name Servers, routers and so on.
IPX is good at this task; IPv4 is too complicated.
The network administrator's nightmare: a container of 1,000 computers is sitting in stores. The supplier will unpack and cable the computers, but you have to configure TCP/IP on all of them over the weekend.
IPv4 autoconfiguration is still not very stable. The Dynamic Host Configuration Protocol (DHCP) helps to some extent, but only within organisations.
The IPv4 standard allows links to have a Maximum Transmission Unit (MTU) as low as 276, and some systems go even lower. Larger TCP and UDP packets are fragmented at the IP layer if the data path passes through these small links.
Experience has shown that 276 is too small a minimum; some protocols struggle to fit their data into such a small path MTU. Doing TCP/UDP fragmentation at the IP layer also violates the protocol layering and introduces its own unique set of problems. Anybody who has said "I can connect to a site but big downloads stop halfway through" has met path MTU problems.
Some sections of IPv4 are optional, e.g. type of service, security, record route, timestamp and source routing. Not all implementations support these options, and router manufacturers can speed up their benchmarks by omitting the optional features, which further discourages their use.
Even when the features are supported, they tend to have fixed sizes which may not be large enough. For example, record route only has room for 10 routers, which is not enough in today's environment.
The IPv4 packet header is messy, with bit-aligned data. It is variable-sized, and the variable options are not necessarily word aligned. All of this makes the header slower to decode, and some RISC boxes have problems with unaligned data.
With 32 bits of addressing, a simplistic look at IPv4 says we can handle 3,720,314,628 addresses (ignoring the RFC1597 private networks). This figure is the sum of the usable host addresses in the classful A, B and C ranges.
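The class-based arithmetic behind this figure can be checked with a short script. This is just a back-of-envelope sketch summing the usable host addresses in each class:

```python
# Usable host addresses in the classful IPv4 ranges.
# Each network reserves the all-zeroes and all-ones host values, hence the -2.
class_a = 126 * (2**24 - 2)        # 126 class A networks (0 and 127 are reserved)
class_b = 2**14 * (2**16 - 2)      # 16,384 class B networks
class_c = 2**21 * (2**8 - 2)       # 2,097,152 class C networks

total = class_a + class_b + class_c
print(total)  # 3720314628
```

Class D (multicast) and class E (experimental) space is excluded, which is why the total falls well short of 2^32.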
If every person in the world had a connection to the Internet, we would not have enough IPv4 addresses. Each person might even have many IP addresses: an intelligent home could have an IP address for each device, perhaps down to the level of individual lights. This could require as many as 800-1,000 billion IP addresses. The use of RFC1597 addresses for private networks will help to some extent, but ultimately we will run out of IPv4 addresses.
The problem is worse than the simple figures would imply. Most sites do not use anything like their full host range. Depending on which fudge factor you believe, current IP wastage could bring the effective set of IP addresses down to as low as 200,000,000, which some estimates say is the number of computers in the world right now.
Large ISPs and backbone routers have to know about the worldwide Internet topology, and these routers are being overloaded with route changes. Unfortunately this is inherent in current IPv4 allocation procedures. Once a site has been assigned an IPv4 network, it considers the network its own. Initially the network is assigned based on the site's current ISP, but if the site changes ISP it tends to keep its IP addresses. This adds special-case entries to the backbone routing tables, making them bigger and slower to scan.
For example, addresses in the range 203.x.x.x were assigned to Australia, initially to AARNET. Everybody went through AARNET, so a single backbone route for 203.x.x.x pointing at the NASA/AARNET link handled all Australian addresses from overseas. Then more US/Australia links were added in other states, requiring more specific backbone routes within 203.x.x.x. Then other providers such as Access One and Connect.Com came along, needing backbone routes for each major provider. Then sites changed from one provider to another but kept their IP addresses, requiring exceptions to the already messy provider routes.
The backbone routers have to cope with 45,000-50,000 routes which are continually being updated, added, deleted and broadcast. Every packet passing through a backbone router has to be compared against this big routing table to determine the next hop.
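The lookup each router performs is a longest-prefix match: of all the table entries that cover the destination address, the most specific one wins, which is why every special-case route makes the search more expensive. A minimal sketch using Python's ipaddress module (the table entries and next-hop names are invented for illustration):

```python
import ipaddress

# A toy routing table mapping prefix -> next hop. Real backbone tables
# hold tens of thousands of entries; these three are invented.
routes = {
    ipaddress.ip_network("203.0.0.0/8"): "aarnet-link",
    ipaddress.ip_network("203.10.0.0/16"): "access-one-link",
    ipaddress.ip_network("0.0.0.0/0"): "default-link",
}

def next_hop(dest: str) -> str:
    """Longest-prefix match: the most specific covering route wins."""
    addr = ipaddress.ip_address(dest)
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("203.10.1.1"))   # access-one-link (the /16 beats the /8)
print(next_hop("203.50.1.1"))   # aarnet-link
```

This linear scan is only for clarity; production routers use radix tries or hardware lookup, but the extra special-case prefixes cost them all the same.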
IPv6 requires that hosts and routers support automatic configuration; it is not an option. Typically you program the routers with their addresses and networks, then simply plug a host into a network. The host talks to the network to obtain its IP address and routes. It is even possible to have a network with no routers or servers and still communicate.
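One piece of this automatic configuration can be sketched: a host can build its own address by appending an interface identifier derived from its network card's MAC address to a prefix advertised by the router (EUI-64 style). The prefix and MAC address below are invented for illustration:

```python
def eui64_interface_id(mac: str) -> bytes:
    """Derive a 64-bit interface identifier from a 48-bit MAC (EUI-64 style):
    insert 0xFFFE in the middle and flip the universal/local bit."""
    octets = bytes(int(b, 16) for b in mac.split(":"))
    return bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:6]

def slaac_address(prefix: bytes, mac: str) -> str:
    """Append the interface identifier to a 64-bit router-advertised prefix."""
    addr = prefix + eui64_interface_id(mac)
    return ":".join(addr[i:i + 2].hex() for i in range(0, 16, 2))

# Invented 64-bit prefix and MAC address, purely for illustration.
prefix = bytes.fromhex("3ffe0b00000c0001")
print(slaac_address(prefix, "00:a0:24:ab:cd:ef"))
# 3ffe:0b00:000c:0001:02a0:24ff:feab:cdef
```

Because the interface identifier comes from hardware the host already owns, no per-host address database has to be maintained.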
Under IPv6, working out the path MTU is the responsibility of TCP/UDP rather than the IP layer. The smallest allowable MTU for IPv6 networks is 576.
The IPv6 packet header is much simpler than the IPv4 one. It is now a fixed size with no optional fields, and the header and body have been carefully aligned so packet decoding is faster.
IPv6 addresses are 128 bits long, which should handle the Internet for several decades to come.
The high-order bits of the IPv6 address will be very carefully assigned to build a hierarchy of providers. Instead of the backbone routers having to know about individual networks around the globe, they only have to know about the other backbone systems. Once a packet reaches the correct provider it is handled by the lower routers; each provider knows its own customers and the next level up.
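The effect of hierarchical assignment can be illustrated with address aggregation: many customer networks drawn from one provider's block collapse into a single backbone route. The prefixes below are invented for illustration:

```python
import ipaddress

# Invented customer networks, all carved from one provider's block.
customers = [
    ipaddress.ip_network("3ffe:0b00:0000::/48"),
    ipaddress.ip_network("3ffe:0b00:0001::/48"),
    ipaddress.ip_network("3ffe:0b00:0002::/48"),
    ipaddress.ip_network("3ffe:0b00:0003::/48"),
]

# The backbone only needs the single covering prefix, not each customer.
aggregated = list(ipaddress.collapse_addresses(customers))
print(aggregated)  # [IPv6Network('3ffe:b00::/46')]
```

Under IPv4's allocation history the equivalent table would keep all four entries, plus an exception for every customer that changed provider.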
© Keith Owens O. C. Software P/L 1997