History of the Internet Part 3


Transition to Widespread Infrastructure

While Internet technology was being experimentally validated and widely used by a subset of computer science researchers, other networks and networking technologies were being pursued at the same time. The utility of computer networking – particularly electronic mail – demonstrated by DARPA and Department of Defense contractors on the ARPANET was not lost on other groups and disciplines, and by the mid-1970s computer networks were sprouting up wherever money could be found. The United States Department of Energy (DoE) developed MFENet for its Magnetic Fusion Energy researchers, and the DoE's High Energy Physicists responded by establishing HEPNet. NASA Space Physicists followed with SPAN, and with initial funding from the United States National Science Foundation (NSF), Rick Adrion, David Farber, and Larry Landweber developed CSNET for the (academic and industrial) Computer Science community. AT&T's unrestricted distribution of the UNIX computer operating system gave rise to USENET, which was based on UNIX's built-in UUCP communication protocols, and in 1981 Ira Fuchs and Greydon Freeman created BITNET, which linked academic mainframe systems in an "email as card images" paradigm.

With the exception of BITNET and USENET, these early networks (including ARPANET) were purpose-built – that is, they were intended for, and largely limited to, closed communities of scholars; as a result, there was little pressure for the individual networks to be compatible, and they largely were not. In addition, alternative technologies such as Xerox's XNS, DECNet, and IBM's SNA were being pursued in the business sector. It took the British JANET (1984) and the United States NSFNET (1985) initiatives to clearly declare their intention to serve the whole higher education community, regardless of discipline. Indeed, one of the requirements for a university in the United States to get NSF money for an Internet connection was that “... the connection be made available to ALL eligible users on campus.”

Dennis Jennings arrived from Ireland to manage the NSFNET program for a year at NSF in 1985. He collaborated with the community to assist NSF in making a crucial decision: TCP/IP would be required for the NSFNET initiative. When Steve Wolff took over the NSFNET program in 1986, he recognized the need for a wide-area networking infrastructure to support the broader academic and research community, as well as the need to develop a strategy for establishing such infrastructure on a basis that would eventually be independent of direct federal funding. To that purpose, policies and methods (described below) were implemented.

NSF also chose to sustain DARPA's existing Internet organizational infrastructure, which was organized hierarchically under the (then) Internet Activities Board (IAB). The public declaration of this choice was the co-authorship of RFC 985 (Requirements for Internet Gateways) by the IAB's Internet Engineering and Architecture Task Forces and the NSF's Network Technical Advisory Group, which formally ensured interoperability of DARPA's and NSF's pieces of the Internet. 

In addition to selecting TCP/IP for the NSFNET program, federal agencies made and executed a number of important policy decisions that formed the modern Internet.

  • The expense of common infrastructures, like trans-oceanic lines, was shared by federal agencies. They also worked together to enable “managed interconnection points” for interagency traffic; the Federal Internet Exchanges (FIX-E and FIX-W) developed for this purpose served as models for the Network Access Points and “*IX” facilities that are now common parts of Internet architecture.

  • The Federal Networking Council was created to manage this sharing. Through the Coordinating Committee on Intercontinental Research Networking (CCIRN), the FNC also collaborated with other international organizations, such as RARE in Europe, to coordinate Internet support for the global research community.

  • Such exchange and collaboration among agencies on Internet-related concerns was not new. In an unusual 1981 agreement, Farber, acting for CSNET and the NSF, and DARPA's Kahn permitted CSNET traffic to share ARPANET infrastructure on a statistical, no-metered-settlements basis.

  • Subsequently, in a similar vein, the NSF pushed its regional (at first academic) NSFNET networks to seek commercial, non-academic clients, expand their facilities to service them, and capitalize on the ensuing economies of scale to decrease subscription prices for everybody.

  • NSF implemented an “Acceptable Use Policy” (AUP) on the NSFNET Backbone – the national-scale component of the NSFNET – that barred Backbone usage for purposes “not in support of Research and Education.” The predictable (and intended) result of encouraging commercial network traffic at the local and regional levels while denying it access to national-scale transport was to stimulate the emergence and/or growth of “private,” competitive, long-haul networks like PSI, UUNET, ANS CO+RE, and (later) others. This process of privately-financed augmentation for commercial purposes began in 1988, with a series of NSF-initiated seminars at Harvard's Kennedy School of Government on "The Commercialization and Privatization of the Internet" – and on the Internet's "com-priv" mailing list.

  • In 1988, a National Research Council committee chaired by Kleinrock, with Kahn and Clark among its members, published a study commissioned by the National Science Foundation titled "Towards a National Research Network." This report influenced then-Senator Al Gore's sponsorship of high-speed networks, laying the networking groundwork for the future information superhighway.

  • In 1994, the National Research Council issued a study titled “Realizing the Information Future: The Internet and Beyond,” again headed by Kleinrock (and with Kahn and Clark as members). This paper, commissioned by NSF, was the document that defined a blueprint for the growth of the information superhighway and has had a lasting impact on how people think about it. It foresaw important concerns like intellectual property rights, ethics, price, education, architecture, and Internet regulation.

  • The National Science Foundation's privatization agenda culminated in April 1995 with the defunding of the NSFNET Backbone. The recovered money was (competitively) reallocated to regional networks in order to purchase national-scale Internet connectivity from the now numerous private long-haul networks.

The backbone had transitioned from a network built of routers from the research community (David Mills' "Fuzzball" routers) to commercial equipment. Over its eight-and-a-half-year lifespan, the Backbone had expanded from six nodes with 56 kbps links to 21 nodes with multiple 45 Mbps links. By then, the Internet had grown to over 50,000 networks on all seven continents and in outer space, with roughly 29,000 networks in the United States.

Because of the NSFNET program's ecumenism and funding ($200 million from 1986 to 1995) – and the quality of the protocols themselves – by 1990, when the ARPANET was finally decommissioned, TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide, and IP was well on its way to becoming THE bearer service for the Global Information Infrastructure.

Documentation Role

The free and open access to essential information, particularly protocol specifications, has been critical to the Internet's fast expansion.

The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the traditional academic publication cycle was too formal and too slow for the dynamic exchange of ideas essential to creating networks.

S. Crocker (then at UCLA) took an important step in 1969 by establishing the Request for Comments (or RFC) series of notes. These memos were meant to be a quick and informal method of exchanging ideas with other network researchers. Initially, RFCs were printed on paper and distributed via snail mail. When the File Transfer Protocol (FTP) came into use, the RFCs were prepared as online files and accessed via FTP. Of course, the RFCs can now be found on the World Wide Web at dozens of sites around the world. SRI, in its role as Network Information Center, maintained the online directories. Jon Postel served as RFC Editor and oversaw the centralized administration of required protocol number assignments, roles he held until his death on October 16, 1998.
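Today's web access to RFCs can be illustrated with a small sketch. The helper below is a hypothetical convenience function (not part of any official library) that builds the plain-text URL for an RFC on the RFC Editor's site, assuming the current `rfc-editor.org` path convention; verify the pattern against the RFC Editor site before relying on it.

```python
# Sketch: build the assumed canonical plain-text URL for an RFC on rfc-editor.org.
# The URL pattern is an assumption based on current hosting conventions.

def rfc_text_url(number: int) -> str:
    """Return the assumed plain-text URL for a given RFC number."""
    if number < 1:
        raise ValueError("RFC numbers start at 1")
    return f"https://www.rfc-editor.org/rfc/rfc{number}.txt"

# RFC 985, "Requirements for Internet Gateways", mentioned earlier:
print(rfc_text_url(985))
```

Fetching such a URL with any HTTP client returns the full specification text, which is what makes RFCs so easy to use as teaching and implementation references.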

The RFCs had the effect of creating a positive feedback loop, with ideas or suggestions provided in one RFC generating another RFC with further ideas, and so on. A specification paper would be created after there was some level of agreement (or, at the very least, a consistent collection of concepts). A specification like this would then be used as the foundation for implementations by the various research teams.

RFCs have become more focused on protocol standards (the "official" specifications) over time, while informative RFCs that describe other techniques or give background information on protocols and technical difficulties remain. In the Internet engineering and standards community, RFCs are now regarded as "documents of record."

The unrestricted availability of RFCs (for free if you have any type of Internet connection) encourages the expansion of the Internet by allowing the actual specifications to be used as examples in college classrooms and by entrepreneurs creating new systems.

Email has played an important role in all aspects of the Internet, including the creation of protocol specifications, technical standards, and Internet engineering. The very earliest RFCs often conveyed to the rest of the community a set of ideas developed by researchers in one place. After email came into widespread use, the authorship pattern shifted – RFCs were produced by joint authors who shared a common view regardless of their geographic location.

The use of specialized email mailing lists in the creation of protocol standards has long been utilized and continues to be an essential technique. The IETF presently contains more than 75 working groups, each of which focuses on a distinct element of Internet engineering. Each of these working groups has a mailing list where members may debate one or more draft documents that are in the works. When a draft document has gained consensus, it may be released as an RFC.

As the Internet's present fast expansion is fuelled by the awareness of its capacity to encourage information sharing, it is important to remember that the network's first function in information sharing was providing knowledge about its own design and operation via RFC papers. This one-of-a-kind approach for developing new network capabilities will be essential to the Internet's future growth.
