The ability to network computers together has existed for the past four decades, but it wasn’t until the mid-1980s that serious attempts were made to connect computers from many countries into one large network. The most successful of these attempts became what is now called the Internet. The Internet began as a United States Defense Department research project in the late 1960s, providing communication between military facilities and certain universities (such as Stanford) that had Defense Department research programs. The number of connected universities grew until the U.S. government withdrew its backbone funding in 1995, leaving the Internet a fully public network. The Internet was the first (and best) of its kind; its development produced TCP/IP, as well as SMTP, POP3, FTP, NNTP, and HTTP.
Unfortunately, finding or publishing information on the Internet required a deep understanding of computer networking. Throughout the world, scientists and computer engineers began looking for a solution. The final solution had to have three main features:
- Platform independence. Because the world’s computers ran many different operating systems, the solution had to include a universal way of transmitting data.
- Separation between content and presentation. The computer used to create the content was almost always different from the computer that accessed it. Therefore, the creator of the content needed a way to tell the accessing computer how to display that content.
- Decentralization. There is no center to the Internet; every node is equal to every other (see Figure 1). TCP/IP is a routable protocol; that is, it can move data from one machine to another through any available route. This is what the Defense Department originally worked toward: a noncentralized network that can route around network failures.
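The "route around failures" idea can be sketched as a simple graph search over a mesh of peer nodes. This is only a toy illustration under assumed names (`find_route`, the example topology), not the actual routing algorithm used by TCP/IP: when one node goes down, any other available path is used instead.

```python
# Toy illustration (not real TCP/IP routing): a decentralized network
# modeled as a graph. When a node fails, another path is found.
from collections import deque

def find_route(links, src, dst, failed=frozenset()):
    """Breadth-first search for any available route from src to dst,
    skipping nodes that are down."""
    if src in failed or dst in failed:
        return None
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr in links.get(node, ()):
            if nbr not in seen and nbr not in failed:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None  # no route at all

# A small mesh with no central hub: every node is a peer.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

print(find_route(links, "A", "E"))                # ['A', 'B', 'D', 'E']
print(find_route(links, "A", "E", failed={"B"}))  # routes around B: ['A', 'C', 'D', 'E']
```

Note that no single node is special: knocking out B (or C) still leaves a working route, which is exactly the resilience the Defense Department wanted.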