The Political Economy of Computing

Florian Glatz
May 13, 2015

Taking technological change seriously in the social sciences.


This article makes no specific claim about what the social sciences are or which disciplines do or do not fall under that label. The perspective I take is situated at the “intersection between information theory, cryptography, sociology, epistemology, politics and economics”[1].

What research conducted so far at this complex intersection of multi-layered systems can tell us how to understand technology and technological change, and how they relate to us as individuals and as a society?

The Political Economy of Computing

One scholar whose exceptional work deserves highlighting is Janis Kallinikos. He calls his research perspective the political economy of computing, which

“takes into account the broader socioeconomic drivers of change in the relationship between humans and machines. Rather than focusing on the characteristics of the technology, on individual dealings with machines, or on organizational implications of technology, this perspective views technological change within the purview of sociohistorical developments, socioeconomic systems, legal and regulatory frameworks, environmental impacts, governance structures, and large-scale government policies and agendas.”

Kallinikos’ concerns stretch mainly into two of the topoi invoked above: that of sociohistorical developments, with a focus on the transformation of labor and the social changes it entails, and that of governance structures in socioeconomic systems.

Regulation Through Technology

In a paper contributing to his study of organizational theory and governance structures, Kallinikos defines regulation through technology as a “combination of strategies and principles deploying material objectification, functional simplification, closure and automation”.

1. Functional Simplification
Regulation through technological means starts with the act of functional simplification: the re-engineering of human-centric processes that were formerly based on formal role systems, in a way that allows technology to be introduced as a mediation and control structure between those parts of the re-engineered process that are left to human discretion. The guiding objective of technology-driven functional simplification is the placement of technological artifacts and control structures at the center of a process, while the former key players are pushed outwards into functionally discrete roles that leave neither technical room nor economic incentive for defection[2].
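
To make the mechanics concrete, here is a minimal sketch in TypeScript of what such a re-engineered process could look like: a purchase that formerly rested on mutual trust is rebuilt around an escrow artifact that mediates between buyer and seller. The scenario and all names (EscrowArtifact, deposit, confirmDelivery) are hypothetical illustrations, not drawn from Kallinikos.

```typescript
// A purchase process re-engineered around a technological artifact.
// Hypothetical illustration: names and scenario are mine, not Kallinikos'.

type Phase = "awaiting_payment" | "awaiting_shipment" | "complete";

class EscrowArtifact {
  private phase: Phase = "awaiting_payment";
  private funds = 0;

  // The buyer's role is reduced to a single functionally discrete act.
  deposit(amount: number): void {
    if (this.phase !== "awaiting_payment") {
      throw new Error("deposit not possible in this phase");
    }
    this.funds = amount;
    this.phase = "awaiting_shipment";
  }

  // The seller never touches the funds directly; the artifact releases
  // them only once the prescribed sequence has run its course.
  confirmDelivery(): number {
    if (this.phase !== "awaiting_shipment") {
      throw new Error("nothing to release in this phase");
    }
    this.phase = "complete";
    const payout = this.funds;
    this.funds = 0;
    return payout;
  }
}

// Neither party can defect: each is confined to the narrow move the
// artifact exposes, and any other action is simply unrepresentable.
const escrow = new EscrowArtifact();
escrow.deposit(100);
console.log(escrow.confirmDelivery()); // 100
```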

Another way to look at functional simplification is handed to us by Luhmann, who describes technology as a

“ […] functioning simplification in the medium of causality. We could also say that within the simplified area strict (functioning under normal circumstances, recurrent) couplings are established. This is, however, possible only if interference by external factors is to a large extent excluded. Technology can therefore also be understood as the extensive causal closure of an operational area”.

Whereas Kallinikos talks about “functional simplification”, Luhmann talks about “functioning simplification”. Despite this slight difference, which is owed to reasons irrelevant to the analysis at hand, both describe technology as re-engineering and simplifying pre-existing processes. From the perspective of a process designer, the challenge of re-engineering is one of approximation and compromise, at least when aiming to recreate every aspect of the status quo ante[3].

2. Material Objectification
The technological artifacts and control structures around which processes become redesigned are what Kallinikos refers to as material objectification. According to him,

“Material objectification takes predominantly the form of elaborate technical codification (such as software code and relations between data tokens)”.

The complexity of the software involved must not be mistaken for a contradiction of the simplification described before. On the contrary, the complexity of software is hidden behind a causal closure, i.e. a boundary with a purposefully crafted interface towards the environment it has been placed in. Kallinikos calls this black boxing.
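
As an illustration of this causal closure, consider the following minimal sketch, again with hypothetical names: a scoring system whose internal complexity is sealed off, leaving only a narrow, purposefully crafted interface toward its environment.

```typescript
// A causally closed system: arbitrary internal complexity behind a
// deliberately narrow interface. All names are hypothetical.

interface CreditDecision {
  approved: boolean;
}

class CreditScoringBlackBox {
  // Internal complexity: weights, models, data relations. Nothing here
  // is observable or modifiable from outside the boundary.
  private readonly weights = new Map<string, number>([
    ["income", 0.6],
    ["debt", -0.4],
  ]);

  // The sole interface towards the environment: inputs in, verdict out.
  assess(income: number, debt: number): CreditDecision {
    const score =
      income * (this.weights.get("income") ?? 0) +
      debt * (this.weights.get("debt") ?? 0);
    return { approved: score > 0 };
  }
}

// Seen from the environment, the system is one strict cause-effect
// coupling; its inner workings are causally closed off.
const box = new CreditScoringBlackBox();
console.log(box.assess(5000, 2000)); // { approved: true }
```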

One interesting observation, never explicitly made by either Luhmann or Kallinikos, is the interdependence between material objectification and functional simplification. Networked communication systems as a base-layer technology, and the rich software environments on top of them, carry their own affordances and constraints. Those in turn are imprinted on the functionally redesigned and simplified processes that are built around technological artifacts and systems. It may be objected that this is a rather obvious statement to make; however, I conjecture that it is a vital insight for future regulators[4].

3. Closure
As I discussed in terms of functional simplification, both Luhmann and Kallinikos regard technological artifacts as closed systems, impenetrable to external causes and effects. Closure denotes the boundary at which a technological system and the environment it is embedded in meet and interface with each other.

Black boxing is another word for this phenomenon, though with a decidedly more political connotation. Recently, Frank Pasquale used the term in his book ‘The Black Box Society’ as a model to describe, more broadly, a veil of secrecy around the inner workings of the most successful firms and economic sectors in general, such as finance and search. At least part of that secrecy is, according to Pasquale, owed to “The Secret Judgments of Software”. More specifically, he says that “authority is increasingly expressed algorithmically”.[5]

4. Automation
Automation refers to “technological operations [which] unfold in self-firing, chained sequences” (Kallinikos). Automation is necessary both for engineering a causal closure that removes any external (human) influence from within its boundary and for orchestrating the cross-system communications that are typical of a networked environment.
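
A minimal sketch of such self-firing, chained sequences, with hypothetical names throughout: once set in motion, each operation fires the next, including the hand-off to another system that is typical of networked environments.

```typescript
// Self-firing, chained sequences: once triggered, each operation fires
// the next without human intervention. Names are hypothetical.

type Step = (state: string) => Promise<string>;

const validate: Step = async (state) => `${state}:validated`;
const charge: Step = async (state) => `${state}:charged`;

// A cross-system hand-off, typical of networked environments: the local
// sequence ends by firing an operation in another system.
const notifyWarehouse: Step = async (state) => `${state}:dispatched`;

// The chain itself is a technological sequence; no step is left to
// human discretion once it has been set in motion.
async function runChain(order: string): Promise<string> {
  const steps: Step[] = [validate, charge, notifyWarehouse];
  let state = order;
  for (const step of steps) {
    state = await step(state); // each result fires the next operation
  }
  return state;
}

runChain("order-42").then(console.log);
// order-42:validated:charged:dispatched
```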

Kallinikos contrasts automation with the workings of prior, human-centric systems:

“Formal role systems are generally mapped onto the task segmentation and standardization upon which they bear […], but the two systems remain separate. By contrast, technological objectification is both exclusive and expansive. It tends to translate or replace altogether formal role systems with technological sequences. This may and often does imply the regulation of technology through higher order technologies, in a cascade or even hierarchy of technological systems.”

From Prescription to Possibility

Talking about governance structures, law and technology both become first-class citizens. Looking at current trends, however, it seems that technological governance is overtaking the legal domain. Oliver Goodenough, a contemporary advocate of software-driven legal innovation, suggested at Stanford’s Future Law Conference that the “rule of law” is about to be replaced by “something else”.

Quite drastic, to say the least! But what exactly is this “something else” that is going to replace the “rule of law”?

Roger Brownsword at King’s College London describes it as a transition from prescription as a mechanism of governance to one of possibility:

“the law does not have to bear all, or even any, of the regulatory burden. […]
where code is used, the regulatory signal can change to one of possibility”

Kallinikos comes to the same conclusion:

“Software technology is fairly complex in technical terms but relies on the significant streamlining and simplification (occasionally referred to as reengineering) of the tasks and operations that constitute its target domain.
In this respect, functional or procedural simplification must be understood as an instance of a selection, accomplished out of a much broader set of possibilities.”

Tying it all together, then, we arrive at a comprehensive theory:

  1. Technology exists as a bounded structure in the medium of causality (Luhmann).
  2. The medium of causality is a set of possibilities (Kallinikos).
  3. The design of a technology-enabled process involving human actors is the selection of a subset from the set of all possibilities (Kallinikos).
  4. The dominant mode of “regulation” changes from normative prescription to the technique of pre-selecting a (narrow) set of possibilities from the range of all possible “moves” an actor could make (Brownsword), as the sketch below illustrates.
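
A small sketch may make point 4 tangible. In the hypothetical scenario below, no rule prohibits an oversized transfer; the process designer simply pre-selects the set of representable moves, so the forbidden move cannot be expressed at all.

```typescript
// Regulation through possibility: instead of prohibiting a move and
// sanctioning violations, the designer pre-selects the set of moves
// that can exist at all. Scenario and names are hypothetical.

// The full space of conceivable moves...
type ConceivableMove =
  | { kind: "transfer"; amount: number }
  | { kind: "transfer_unlimited"; amount: number };

// ...from which a narrow subset is selected. What is excluded is not
// forbidden; it is simply impossible to express inside the system.
type PermittedMove = Extract<ConceivableMove, { kind: "transfer" }>;

function submit(move: PermittedMove): string {
  // Even the permitted move is pre-bounded rather than policed.
  const capped = Math.min(move.amount, 1000);
  return `executed transfer of ${capped}`;
}

console.log(submit({ kind: "transfer", amount: 250 }));
// submit({ kind: "transfer_unlimited", amount: 1e9 }); // does not compile
```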

From Market Behavior to Market Access

At this point, we are left with a multitude of possible directions for further research. One such direction would be to ask what the functional role of law becomes when its responsibility for regulating behavior is increasingly waning.

An interesting perspective on this question comes from Hans Micklitz, Law Professor at the European University Institute, who researches the transformation of private law in Europe. According to him, the functional role of law becomes one of controlling market access. He calls this governance technique Intrusion and Substitution. Although Micklitz develops his theory against the backdrop of the EU private legal order versus the national legal orders of European member states, the parallel development of regulation through technology and the emergence of a European private legal order that encapsulates rules focused on controlling market access in sectors such as telecommunications, financial services, and health and safety regulation seems to be more than mere coincidence.

How can we find the links necessary to prove or disprove a definitive connection between the development of the European private legal order into one of market access rules and the emergence of technology as a preferred mechanism for governing market behavior?

An interesting parallel can be drawn to the copyright wars. For much of the millennium’s first decade, copyright law and its actual practice were emblematic of the clash between the very nature of modern information technology and established legal principles. The one critical characteristic of digital information that caused havoc ten years ago was transformed into the zero marginal cost business model, which is by now common wisdom. But what enabled businesses to adapt to the changed conditions with regard to copyrighted material in the digital information age?

The consumption of digital media happens in an electronic environment: an environment that is amenable to rule-based, hierarchical governance structures expressed through the means afforded by software and rich, networked execution environments.

So what has become of the law in the copyright discourse? Looking at Directive 2001/29/EC of the European Union ‘on the harmonisation of certain aspects of copyright and related rights in the information society’ through a technology-focused lens, Article 6 appears paradigmatic:

“Member States shall provide adequate legal protection against the circumvention of any effective technological measures, which the person concerned carries out in the knowledge, or with reasonable grounds to know, that he or she is pursuing that objective.”

Here, the role of law becomes that of a fall-back mechanism and baseline protection, whereas governance through the technology itself is left to the software maintainer.
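
A minimal sketch of this division of labor, with a hypothetical licensing scheme: day-to-day governance of access is performed by the technological measure itself, while Article 6 only backstops circumvention of that measure.

```typescript
// Division of labor under Article 6 (hypothetical licensing scheme):
// code governs access day to day; law only backstops circumvention.

class ProtectedContent {
  private readonly body = "the copyrighted work";

  constructor(private readonly licensedUsers: Set<string>) {}

  // The technological measure: access is a possibility granted or
  // withheld by code, not a prohibition addressed to the user.
  read(userId: string): string {
    if (!this.licensedUsers.has(userId)) {
      throw new Error("no license: playback not possible");
    }
    return this.body;
  }
}

const work = new ProtectedContent(new Set(["alice"]));
console.log(work.read("alice")); // plays the work
// work.read("bob") simply fails; only circumventing the measure itself
// triggers the legal fall-back of Article 6.
```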

To summarize: modern information technology first wreaked havoc in copyright, then transformed the industry into one where the zero marginal cost of reproduction was successfully exploited economically by regulating the behavior of contractual counterparties not through prescription but through possibilities. The role of statutory law became one of structural, base-level control.[6]

Multi-Stakeholder Approach to Behavioral Regulation

So who sets the gears in motion when it comes to this new mode of regulation through technology? For me, there are three main strands of thought answering that question:

  1. All hail to our new platform overlords
    This view is very popular among young people who grew up in the age of platforms such as Google, Facebook, Apple, Amazon and others. It is based on the belief that regulation in the digital realm has long since shifted completely to a few private digital monopolies. Michael Seemann writes about this at length in his book.
  2. The Multi-Stakeholder approach to Internet Governance
    This view originates partially from academia and partially from a set of regular conferences that take place at the international level, e.g. the Internet Governance Forum and many others. The proponents of the multi-stakeholder approach argue that the classical mode of governance, i.e. national state regulation in the form of laws and administrative orders, is passé when it comes to the internet. For them, Internet Governance is mainly concerned with critical infrastructure such as the Domain Name System. Although they do take the role of technology seriously, by definition they disregard the software-based regulation that happens on top of the infrastructure they are concerned with. I argue that the upper layers are more important when it comes “to channel the conduct of their regulatees” (Brownsword).
  3. The State as a Platform Provider
    Others believe that the state, in order to remain the dominant regulator despite the transition to technology-driven systems, needs to develop regulatory means and reach in the digital sphere itself. In essence, the state has to become a platform provider itself and enforce its regulation through code.
  4. ( Everybody else )
    The (unofficial) fourth and most regrettable train of thought lives in complete ignorance of the observations made in this post and mostly occupies important political functions in any given institution.

All stated approaches are interesting and pose exciting research challenges on their own.


Footnotes

[1] Aptly put by Vitalik Buterin.

[2] In a game theoretical understanding of the word. Here: deviating from the intended behavior from the perspective of the system’s designer.

[3] CodeX Fellow Harry Surden of Colorado Law School tries to capture (and overcome) the problems of re-engineering a human-centric process through computation and software in his essay “The Variable Determinacy Hypothesis”, in which he argues that “legal outcomes in certain contexts are determinate enough to be amenable to resolution by computers”.

[4] This is where Szabo’s notion of smart contracts fits in. Whether intentionally or not, Szabo based his idea of how to secure private relationships on public networks on a notion of public computational base-layer services that have only recently started to be developed.

[5] We have to forgive Pasquale for falling into the trap of using the term “algorithm” in a rather unreflective way. In actuality, algorithms in the technical sense are just one aspect of complex software systems, which, because of the causal closure, are not sensibly amenable to dissection into functionally discrete parts.

[6] Interestingly, the last time the national German legislator tried to regulate market behavior in the copyright sphere, by passing a law that forced search engines such as Google to pay for linking to the content of news publishers, Google simply excluded the affected publishers from its search results. Even before those changes to Google’s search results went live, the publishers waived their claim for compensation. This is quite an interesting demonstration of how far we have come in terms of technological governance versus the factual and functional role of law.
