Machine Research was a publication, and one of the outcomes, of a PhD workshop co-organized by Aarhus University and Constant VzW. The workshop covered “research on machines, research with machines, and research as a machine. […] It aims to engage research and artistic practice that takes into account the new materialist conditions implied by nonhuman techno-ecologies.”
Aside from sharing research, discussing, and writing essays, participants in the workshop wrote several scripts that functioned as filters, re-interpreting, re-contextualizing or otherwise responding to the written essays in the newspaper.
As our contribution, Martino Morandi and I co-authored an expanded README for net_o_nets.py. The script filters all the essays and annotates each URL in the footnotes: the annotated URL shows which networks were traversed between the workshop location and the URL's host. The full README is reproduced below.
For more documentation of the event, see Constant Vzw’s warps & wefts.
You can grab the source code of net_o_nets.py on Constant Vzw’s gitlab. In that same code repository, one can also find the filters made by other participants, such as an Acronymizer by Dave Young and a Markov-Chain by Ann Mertens / Algolit.
Clicking on any link on the web sets in motion a request for information which travels from node to node, along a variable but predictable route, to reach the server that hosts the desired website. Once the server receives the request, its reply will flow back along roughly the same path to the browser. This exchange of information travels through just a few of the more than 50,000 different subnetworks that together constitute the Internet. The chosen route is determined by the Internet Service Providers that manage those subnetworks. The route depends on a series of conditions, including the geographical location of source and destination, the network traffic circumstances and the specific commercial deals between subnetworks — the so-called ‘peering agreements’.
Accessing any website or service is experienced as qualitatively the same by the browser user, independently of the path the information packets take. However, the geographical routes, the providers involved, and the infrastructure accessed can vary dramatically from case to case.
This text is a README for net_o_nets.py, a post-processor of sorts which searches for information about what networks have been traversed to reach an external web resource. The resulting metadata is added next to the web-based citations, a process applied to the other texts in this journal. The aim is to include a few of the aforementioned situated aspects of networks, right next to the formal ubiquity and universality of a hyperlink. As the route taken to reach a resource always changes depending on the starting location, the metadata will vary accordingly. The link-analysis for this specific journal has been calculated from the Internet connection of the 25th floor of the Bruxelles World Trade Center, during the Machine Research Workshop hosted by Constant in October 2016.
cat original_text_file.txt | python net_o_nets.py > annotated_text_file.txt
http://example.com ( Proximus NV → RIPE Network Coordination Centre → Telia Company AB → MCI Communications Services, Inc. d/b/a Verizon Business )
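The annotation step — read the text, find each URL, append the traversed networks — can be sketched as a small stream filter. This is only an illustrative sketch, not the published script: the URL regex and the `networks_for` callback (standing in for the traceroute and whois lookups discussed in the rest of this README) are assumptions made for the example.

```python
import re

# A deliberately simple URL pattern; the real script may match differently.
URL_RE = re.compile(r"https?://[^\s)]+")

def annotate(text, networks_for):
    """Append the list of traversed networks after each URL in the text.

    `networks_for` is a callable mapping a URL to a list of network
    owner names (e.g. the result of a traceroute + whois analysis).
    """
    def repl(match):
        url = match.group(0)
        nets = networks_for(url)
        if not nets:
            return url
        return "%s ( %s )" % (url, " → ".join(nets))
    return URL_RE.sub(repl, text)
```

Piped through stdin/stdout, a function like this reproduces the `cat … | python net_o_nets.py` usage shown above.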
The analysis of the route is performed using two fundamental tools which are commonly used to understand and diagnose computer networks: Traceroute and Whois.
Traceroute probes the routed path between your local network and a given destination and returns a list of points that constitute that path. This is shown by listing the Internet Protocol address of each router on the way. While this information might seem authoritative, it is also contingent on what each specific network allows to be measured and might thus be incomplete.
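One way to obtain that hop list programmatically is to shell out to the system traceroute binary and parse its numeric output. A rough sketch, assuming a Unix-like system with traceroute installed; the function names are mine, not the script's, and real traceroute output varies between platforms:

```python
import re
import subprocess

# First IPv4 address on each numbered hop line; '* * *' timeouts yield no match.
HOP_IP = re.compile(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", re.MULTILINE)

def hops(host, max_hops=30):
    """Run traceroute (-n: numeric output, -m: max hops) and return hop IPs."""
    out = subprocess.run(
        ["traceroute", "-n", "-m", str(max_hops), host],
        capture_output=True, text=True).stdout
    return parse_hops(out)

def parse_hops(output):
    """Extract the first IP address on each numbered hop line."""
    return HOP_IP.findall(output)
```

Note that, as the README says, hops where a network refuses to answer probes simply disappear from the list — the result is contingent on what each network allows to be measured.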
Whois is a tool to look up ownership information about an Internet resource, such as a domain name, an IP address or an Autonomous System. In order to register and use such a resource, a private individual, company, or organization has to provide contact details to publicly accessible databases.
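The WHOIS protocol itself is plain text over TCP port 43 (RFC 3912): send a query line, read the reply until the server closes the connection. Below is a minimal sketch of such a lookup, plus a naive parser for ownership-related fields. The server choice and the field names are assumptions for the example; real responses differ between registries:

```python
import socket

def whois(query, server="whois.ripe.net", port=43):
    """Query a WHOIS server over TCP port 43 (RFC 3912)."""
    with socket.create_connection((server, port), timeout=10) as s:
        s.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def owner_fields(response, keys=("netname", "descr", "org-name", "OrgName")):
    """Pull ownership-related fields out of a raw WHOIS response."""
    found = []
    for line in response.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            if key.strip() in keys:
                found.append(value.strip())
    return found
```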
Whereas traceroute obtains the logical address of each node that forms our abstract path through the network, whois turns this information into a story of a network of networks, with different owners, material conditions and legacies. Using the two in conjunction reminds one of the aspects of ownership, power, and control that come with participation in a network that is usually perceived as open and horizontal. At the same time, this simple move offers a ground to discuss network politics at an approachable scale, by looking at a specific moment, location, set of agents and operations.
freedom, autonomy, peerage, tiering
The entanglement of different networks that the Internet is composed of is based on the fundamental element of the IP protocol, which was designed for autonomous interoperation and dynamic restructuring of the network without a central management center. While on the first experimental inter-networks any machine on any network could directly address any other machine on any other network, the change of scale and complexity due to the global success of the Internet also meant the practical dismissal of flat hierarchies.
The different networks are currently articulated around the concept of the ‘Autonomous System’ (AS): a subnetwork of the Internet managed by a single organization or company, in which all communications follow the same routing table. This means that, to reach an Internet resource, all the nodes in an Autonomous System agree on which neighbouring network a packet has to hop to next in order to move towards its destination.
The current system admittedly keeps a degree of openness and horizontality. The routing tables are free to access, so each AS can inspect other ASes’ routes and decide which ones are convenient to hop to, ensuring efficient flows towards all possible destinations.
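Mapping the IP address of each traceroute hop to its Autonomous System is what turns a list of routers into a list of network owners. One publicly accessible way to do this is Team Cymru's IP-to-ASN WHOIS service; the sketch below assumes that service and its pipe-separated reply format, both of which may change:

```python
import socket

def asn_info(ip, server="whois.cymru.com"):
    """Look up the Autonomous System announcing an IP via a WHOIS-style
    IP-to-ASN service (server choice is an assumption for this example)."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall(("-v " + ip + "\r\n").encode())
        reply = b""
        while True:
            data = s.recv(4096)
            if not data:
                break
            reply += data
    return parse_cymru(reply.decode(errors="replace"))

def parse_cymru(text):
    """Parse a pipe-separated verbose reply into (asn, as_name) pairs,
    skipping the header line."""
    rows = []
    for line in text.splitlines()[1:]:
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 7:
            rows.append((fields[0], fields[6]))
    return rows
```

Chaining such a lookup over every hop of a traceroute is what produces annotations like the one shown above, e.g. ( Proximus NV → Telia Company AB → … ).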
This technical cornerstone of the Internet, according to certain ideological readings, should guarantee an inherent freedom and openness of the network. We can genuinely acknowledge the free aspect, as long as it is understood in the sense of capitalist market freedom: horizontal participation in the Internet is open to all parties with the economic means to acquire the necessary infrastructures and sign peering-agreements with neighbouring networks.
Peering-agreements are a good example of the way horizontality and openness are perfectly compatible with inequality and de-facto hegemonies. While the word “peer” suggests an equality of sorts, in practice some peers are more equal than others. In order to ‘peer’, smaller networks have to pay transit fees to larger networks. This produces a hierarchy, which is referred to as the system of ‘tiered’ networks. At the top of the hierarchy are the networks which do not need to pay to interconnect with any other networks because of their size and geographic spread, the so-called “Tier 1” networks.
Tier 1 networks are interesting entities through which we can understand the legacy of past networks on the ones of today. While there is no definitive list of Tier-1 networks, most listings include the same set of companies. What stands out is that most of these companies are the heirs of the old national telecom monopolies in Europe, or of the AT&T monopoly in the U.S. These firms gained this status due to their previous global activities and their historical role in interconnecting various continents: their status is a legacy of the times when these firms were part of colonial and imperial projects. Another thing that stands out is that there are no non-Western Tier 1 networks.
While probing the network, the ‘centrality’ of Tier-1 providers becomes noticeable, as one keeps returning to the same large transit networks in order to reach geographically disparate destinations.
This script is a simple example of the short diversions one can take from the uniform experience of internetworked telecommunications, to remind ourselves of the material conditions and the power relations that are implicated in each and every use of the Internet.
__ another release of ___ __ WTFPL etc
| | ___ ___ ___ _ _ ___ ___| _| | | ___ ___ ___ ___ _ _
| |__| -_| .'| . | | | -_| | . | _| | |__| -_| . | .'| _| | |
|_____|___|__,|_ |___|___| |___|_| |_____|___|_ |__,|___|_ |
|___| 2k16 |___| |___|
Clicking on any of the external links in your text sets in motion a request
for information which travels along a specific route to reach the desired website.
The route travels across various networks, owned by the different service providers
that make up the internet.
The way the route will be chosen depends on your Internet Service Provider, as a
consequence of conditions ranging from your geographical location to the specific
commercial deals existing between networks - the so-called 'peering agreements'.
This filter adds the information of what networks are accessed to reach a specific
resource, producing a metadata of sorts which is added to your citations.
It is based on two tools that are used to take measurements of computer networks:
Traceroute and Whois.
Traceroute shows the routed path across the internet between your own network
and a given destination. This path is shown by listing the Internet Protocol
address of each router on the way.
Whois is a tool to look up ownership information about the Domain Name System.
In order to get a domain name one has to provide contact details to the domain
registrar - this information is publicly accessible.
Whereas traceroute shows the abstract logical addresses of each node in the network,
whois turns this information into a story of a network of networks, with different
owners, material conditions and legacies.
The result of this script will be different depending on the location from where it is run.
If something doesn't work, blame the network not the script and try again.