The Stupid Network That We Know And Love



This text is an offshoot of a period of research on an obsolete network protocol, X.25, which was taken as a technical object of comparison to study the ways in which contemporary internetworking works. Informed by particular hands-on experiences with protocols, it is written as an accompanying text for a small practical exercise.

Legacy

In the early days of internetworking, before the establishment of the Internet Protocol suite (TCP/IP) as the undisputed global standard, different protocols were developed in competition with one another. X.25 was one such competitor and was for more than two decades the hegemonic networking technology. X.25 and TCP/IP emerged around the same period in response to similar questions, and they still co-exist today, representing alternative networking paradigms. However, X.25 is now used only in a few specific cases and has been phased out for many years, hence it is often described as a legacy protocol in technical literature.

Legacy in this context is used as an adjective to refer to methods, software or hardware considered obsolete. The term is used derogatorily to point at the burden of maintaining older components of a system which cannot be removed without breaking support for or compatibility with older versions, thus requiring extra work and care. Outside of technical parlance, used as a noun, legacy is defined as an inheritance, or as the outcomes of past events. The interplay between the technical parlance and the colloquial meaning of the word suggested a way to understand the long-term relations between systems that are decades apart from each other.

The text that follows is an attempt at activating the concept of legacy as an approach to look at and speak about technology. Legacy protocols, formats and systems are reluctantly dragged along into the present together with shiny black-boxed interfaces. This offers a continuity in which structures inherited from the past appear next to those of the present. As a result, legacies implicitly question current paradigms of innovation and progress and return technological artifacts to their complex historical dimension.

The potential of these legacies comes from the way they can be experienced from within, through the practice of use, shifting our ordinary punctual relation to a technical object. Any such object always involves certain long-term elements, which can be mobilized to obtain a temporal displacement in the functioning of the object, and in our understanding of its functioning.

Legacy is therefore a site in which to ground archaeological and genealogical approaches in the devices and systems we encounter in our everyday life. In practice, it often implies anomaly, deviance or malfunction in the normal operation of something. Nonetheless, the attempted use of X.25 was in itself a revealing experience of some fundamental differences between the TCP/IP protocol and its legacy other.

Network

The assumption that one could experience the X.25 protocol by reading technical manuals and setting up a connection was dispelled upon realizing the complex interdependence between the protocol and the infrastructure it was meant to run on. Our understanding of how a networking protocol functions is shaped by our existing knowledge of TCP/IP, so we simply tried to use X.25 the way we would use TCP/IP. What we did not know is how much the design choices of TCP/IP were fundamentally taken in opposition to the paradigms of networks like X.25.

At the time of the parallel development of the two protocols, the debate on internetworking was polarized between two constellations of agents, interests, industries and technologies.

This debate can be re-visited by accessing the Oral Histories of the Internet, told by the ‘protagonists’ of this history, many of whom are still alive, and most of whom are involved in reconstructing their own personal accounts of the events. One therefore has to be wary that these individual histories are biased in many ways, ranging from the heroic memories of the pioneers to the accounts of those who did not make it into the Internet Hall of Fame but want to reinstate their participation in the process. They are nonetheless an invaluable resource for trying to track the evolution of the technical and political controversies.

In most Oral Histories, the two constellations are generally recalled in a stylized fashion, and we will summarize them here, acknowledging that we gloss over many details for the sake of simplicity. On one side, X.25 was promoted by the national telephone monopolies trying to enter the nascent market of computer networking by applying their knowledge of networks, thus protecting their interests in the existing infrastructures. On the other, TCP/IP was funded by the US Department of Defense and supported by the emerging computer industries, stressing a model in which the terms for computer networking would not be dictated by the PTTs (Post, Telegraph and Telephone companies) and where computer networking could be built up from scratch.

While presented as being in opposition, both the telephony and computer industries worked on experimental methods of interconnecting computers over long distances; they differed, however, in methods and priorities, based on the different requirements and economies of each industry. The predominant way that this contraposition is read in the histories of the Internet is as a clash between computing culture and telephony culture. As a sub-text, the telephony world represents the status quo and the old way of doing things, based on national monopolies, their hierarchical and bureaucratic proceedings, their concerns for the wellbeing of national industries and technological sovereignty, and therefore their political interference with the markets. On the other hand, the emerging world of the computer industries and internetworking is depicted as based on (smaller-scale) private enterprise, transnational collaboration, flat hierarchies and designs based on pure technical merit as opposed to any form of politics.

This account sounds suspicious today, for its linearity, its partitioning between the political and the apolitical, its account of market freedom, or the unclear role of the US Department of Defense, to name a few reasons. Still, accepting this narrative as an integral part of the legacy offers an interesting angle from which to analyze the mutually productive relation between technical design choices and ideological positions, rather than limiting ourselves to deconstructing the latter.

Bearing in mind this loose sketch of the urgencies, agents and discourse around the development of the two paradigms, we can now turn to the conceptual and technical oppositions that partake in this diagram.

Paradigm

Both IP and X.25 are protocols for so-called packet-switched networks. Packet switching is a technique that splits data into separate small chunks (packets) before transmission, so that multiple connections can simultaneously travel over the same line. On arrival the packets are reassembled and the data reconstructed. The two protocols differ most prominently in their understanding of how this packet switching should happen. The two approaches proposed were a connection-oriented paradigm called Virtual Circuit and a connectionless paradigm called Datagram.
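
To make the mechanism concrete, the following is a minimal sketch of packet switching in Python. It is only an illustration: the names (packetize, reassemble) and the packet format are hypothetical and do not correspond to any real protocol.

    # Minimal illustration of packet switching: two connections share one
    # line by splitting their data into tagged packets (all names are
    # hypothetical; no real protocol is implemented here).

    def packetize(conn_id, data, size=4):
        # Split a byte string into packets tagged with a connection id
        # and a sequence number, so different connections can interleave.
        return [{"conn": conn_id, "seq": i, "payload": data[i*size:(i+1)*size]}
                for i in range((len(data) + size - 1) // size)]

    def reassemble(packets):
        # Reconstruct each connection's data from packets that may have
        # arrived interleaved with packets of other connections.
        streams = {}
        for p in sorted(packets, key=lambda p: p["seq"]):
            streams[p["conn"]] = streams.get(p["conn"], b"") + p["payload"]
        return streams

    # Packets of two connections travel interleaved over the same line.
    line = packetize("A", b"hello world") + packetize("B", b"goodbye")
    assert reassemble(line) == {"A": b"hello world", "B": b"goodbye"}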

X.25 has been the most important attempt based on the Virtual Circuit model of packet switching. The Virtual Circuit was proposed as a commercially viable method of using the existing infrastructure, unsurprisingly owned by the same parties that were funding the development. For this reason they imagined the existing and widespread telephone system as the basis for future data networks. In their model the switching through the network (routing) was handled by management centers. Management here meant making sure that all packets in the transmission would flow over the same route and would arrive in the correct order. So it was the equipment of the telephone network, rather than that of the end-users, that would take care of the correct transmission of data and the allocation of bandwidth for the connection. The operation of dedicated leased lines would be guaranteed by the PTTs, which would charge a fee for this service and retain a role of central management of the flows in the network.
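
As a rough illustration of the principle, and not of X.25 itself, the following Python sketch shows a network that sets up one fixed route per call, so that ordering is guaranteed inside the network rather than at the endpoints. All names here are hypothetical.

    # Toy sketch of the Virtual Circuit paradigm (hypothetical names,
    # not X.25): the network stores one fixed route per call, and every
    # packet of that call follows it, so order is preserved in-network.

    class VirtualCircuitNetwork:
        def __init__(self):
            self.circuits = {}  # circuit id -> fixed route through switches

        def setup_call(self, circuit_id, route):
            # Call setup: the management center records (and in reality
            # would reserve resources along) a single route; afterwards
            # the endpoints only hold a circuit id.
            self.circuits[circuit_id] = route

        def send(self, circuit_id, packets):
            # All packets of the call traverse the same pre-established
            # path, hop by hop, arriving in the order they were sent.
            route = " -> ".join(self.circuits[circuit_id])
            for seq, payload in enumerate(packets):
                print(f"packet {seq} over {route}: {payload!r}")

    net = VirtualCircuitNetwork()
    net.setup_call(1, ["caller", "exchange-1", "exchange-2", "callee"])
    net.send(1, [b"hel", b"lo!"])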

The paradigm on which IP is modeled, instead, is the connectionless model named Datagram, which came out of experimental networking research. This model reconceptualized the network as an agglomeration of separate, interconnected computer networks. The packets, or datagrams, contain their own addressing information, which routers in the network use to forward packets to the next device, repeating this process until the destination is reached. It is then the sending and receiving machines at the ends of the connection that are responsible for tasks such as ensuring the delivery, ordering and timeout of packets. This allowed networks to be imagined without facing the status of the existing infrastructure, fictionalizing it as a series of ducts through which the data would flow without the network having to care for it, an approach dubbed network agnosticism. This approach implied that the computer industry would be the main influence on computer networking, while the telephone companies would just provide the conduits for it.
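
The complementary sketch below illustrates the Datagram principle, again with hypothetical names and not IP itself: each packet carries its own destination address and is forwarded hop by hop, with no per-connection state kept in the network.

    # Toy sketch of the Datagram paradigm: routers only hold forwarding
    # tables, and each self-contained packet is looked up independently.

    FORWARDING = {
        # router -> {destination network -> next hop}
        "R1": {"net-b": "R2"},
        "R2": {"net-b": "host-b"},
    }

    def forward(packet, node="R1"):
        # Repeat the lookup-and-forward step until the destination is
        # reached; different packets could take different routes without
        # the network having to track any connection.
        while node in FORWARDING:
            node = FORWARDING[node][packet["dst"]]
            print(f"datagram {packet['seq']} forwarded to {node}")
        return node

    # Ordering, delivery and timeouts are left to the hosts at the ends,
    # not to the routers in between.
    for seq, payload in enumerate((b"hel", b"lo!")):
        forward({"dst": "net-b", "seq": seq, "payload": payload})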

What was sketched above, however, are just the theoretical models that guided the development of the two protocols. In practice, the development of both actually entailed a hybridization of the two models. In the case of the datagram-based IP, for example, diffusion and success came from coupling it with the companion protocol TCP, which is itself a Virtual Circuit protocol. The Internet’s TCP/IP suite is thus an application of both approaches.
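
This hybridity is still visible in the standard socket interface: a TCP socket sets up a virtual-circuit-like connection with ordered, reliable delivery, while a UDP socket sends self-contained datagrams with no setup, both running over the same connectionless IP layer. The following minimal loopback sketch uses only Python’s standard library.

    # Connection-oriented (TCP) vs connectionless (UDP) sockets.

    import socket
    import threading

    def tcp_echo(server):
        conn, _ = server.accept()        # the "call" is accepted
        with conn:
            conn.sendall(conn.recv(1024))

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # SOCK_STREAM: connection-oriented
    server.listen(1)
    threading.Thread(target=tcp_echo, args=(server,)).start()

    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(server.getsockname())    # explicit setup, like a virtual call
    tcp.sendall(b"ordered, reliable bytes")
    print(tcp.recv(1024))
    tcp.close()
    server.close()

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("127.0.0.1", 0))           # SOCK_DGRAM: connectionless
    udp.sendto(b"self-contained datagram", udp.getsockname())  # no call setup
    print(udp.recvfrom(1024)[0])
    udp.close()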

In summary, the main contrast between the two conceptual models is in the way in which they each related to the network infrastructure. The Virtual Circuit model implied that the physical infrastructure was the most important element of the network. The Datagram model on the other hand, proposed abstracting the logical network away from its material base, implying that it doesn’t (or shouldn’t) matter. Even though both protocols had to compromise with the material conditions of the network, and both ended up as a hybrid of the two models, the debate on the two opposing conceptions of internetworking achieved its own agency, which still retains an influence today.

Narrative

The legacy of the contrapositions that we outlined in the previous sections is evident in a number of contemporary debates. A clear example is the controversy over the management of certain flows of data over the Internet, which has been catalyzed by the concept of net neutrality. Net neutrality is assumed to be an essential quality of the Internet, one that has accompanied its development and should be protected. The general discourse, championed by digital rights organizations such as the EFF (https://www.eff.org/issues/net-neutrality) and by companies such as Google (https://www.google.com/takeaction/action/freeandopen/index.html), is that a network should not discriminate in the way that information is sent through it on the basis of its content, if at all.

Leaving aside the extent to which certain types of management have already been in place for some time, the source of this concept as a foundational quality of the Internet can be traced back to the debate just outlined. The idea of net neutrality is closely related to the idea of ‘the dumb pipe’. This concept, coming out of the formation of TCP/IP, fictionalizes a complete separation between the physical and logical levels of communication. It is generally taken to mean the localization of intelligence, in the sense of the interpretation of content, at the ends of the network, while the in-between flow of information happens through a network of neutral ducts.

The political significance of such design principles was kept implicit in the early times of the debate, which pretended to be limited to a technical confrontation among models and topologies. It was nonetheless evident to all the people involved which economies and world-views the two paradigms represented, due to manifest elements such as who was funding the research and pushing for adoption.

It is with the prevailing of the TCP/IP protocol and the global success of the Internet that this discourse sedimented, equating some successful design choices of the protocol with certain ethical and political values.

A representative example is the 1997 article “Rise of the Stupid Network” by David Isenberg, which describes how “the Internet, because it makes the details of network operation irrelevant, is shifting control to the end user”, and how “end user devices would be free to behave flexibly because, in the Stupid Network the data is boss, bits are essentially free”. So “the Internet that we know and love is a ‘virtual network’ – a ‘network of networks’ – that is independent of wires and transport protocols.”

What the legacy of the Datagram vs. Virtual Circuit debate suggests is that these values should be read as traits that the TCP/IP protocol had at a specific point in time, as a reaction to a competitor with another set of features, rather than as intrinsic qualities of TCP/IP itself. It is therefore ironic, but not surprising, that the dissemination of these Internet narratives happened concurrently with the revision of the very design choices to which these inherent qualities were ascribed, and that the reason for such compromises was precisely the increasing popularity of TCP/IP. The success of the protocol introduced material issues of scaling, of composition with the existing infrastructure, and of negotiation with its related economies.

This text was produced in the context of the Machine Research Workshop organized between Aarhus University and Constant VZW in October 2016 in Bruxelles: https://machineresearch.wordpress.com/. The bulk of the research for it was conducted during a residency at the Media Archeology Lab at University of Colorado Boulder.

References

  • Abbate, Janet. Inventing the Internet. The MIT Press, 2000.
  • Bowker, Geoffrey C., Karen Baker, Florence Millerand, and David Ribes. “Toward Information Infrastructure Studies: Ways of Knowing in a Networked Environment.” In International Handbook of Internet Research, edited by Jeremy Hunsinger, Lisbeth Klastrup, and Matthew Allen, 97–117. Springer Netherlands, 2009.
  • cheek, cris, Braxton Soderman and Nicole Starosielski. “Network Archaeology,” Amodern Journal. Vol. 2 (Fall 2013).
  • DeNardis, Laura. Protocol Politics: The Globalization of Internet Governance. The MIT Press, 2009.
  • Després, Rémi. Oral history interview with Rémi Després by Valérie Schafer. Oral History, May 16, 2012. http://conservancy.umn.edu/handle/11299/155671.
  • Galloway, Alexander R. Protocol: How Control Exists after Decentralization. The MIT Press, 2004.
  • Isenberg, David S. “Rise of the Stupid Network.” Computer Telephony, August 1997: 16–26.
  • Pelkey, James. “CYCLADES Network and Louis Pouzin 1971–1972,” 2007. http://www.historyofcomputercommunications.info/Book/6/6.3-CYCLADESNetworkLouisPouzin1-72.html.
  • Pouzin, Louis. Oral history interview with Louis Pouzin by Andrew L. Russell. Oral History, April 2, 2012. http://conservancy.umn.edu/handle/11299/155666.
  • Pouzin, Louis. “Virtual Circuits vs. Datagrams: Technical and Political Problems,” 483–94. ACM, 1976. doi:10.1145/1499799.1499870.
  • Russell, Andrew L. Open Standards and the Digital Age: History, Ideology, and Networks. Cambridge University Press, 2014.
  • Schwartz, Mischa. “X.25 Virtual Circuits – TRANSPAC in France – Pre-Internet Data Networking.” IEEE Communications Magazine 48, no. 11 (November 2010): 40–46.