Brey, P. (2009). Is information ethics culturally relative? In Eyob, E. (Ed.), Social Implications of Data Mining and Information Privacy: Interdisciplinary Frameworks and Solutions (pp. 1-14). Hershey, PA: Information Science Reference.
Kapczynski, A. (2020). The Law of Informational Capitalism. Yale Law Journal, 129(5), 1460–1515.
Trottier, D. (2012). Interpersonal surveillance on social media. Canadian Journal of Communication, 37(2), 319-332.
Provide a critical reaction to the texts exploring the juncture at which communication media technologies and ethics intersect. Each of the texts addresses the issue of ethics in the age of digital mediation with a different perspective in mind and in relation to a different venue. You may choose to focus your discussion on one of these venues as long as you provide direct references to all of the texts. In addition, you may also address the videos that were linked via this module. Your discussion should integrate the different arguments made in the essays into your own take on the subject of media technology ethics. Make sure to provide your own perspective and to develop your own voice as a writer. There are no definitive answers to the questions raised by the authors. The critique that you offer should join the conversation.
Copyright © 2009. IGI Global. All rights reserved. May not be reproduced in any form without permission from the publisher, except fair uses permitted under U.S. or applicable copyright law.
Chapter I
Is Information Ethics
Culturally Relative?
Philip Brey
University of Twente, The Netherlands
Abstract
In this chapter, I examine whether information ethics is culturally relative. If it is, different approaches
to information ethics are required in different cultures and societies. This would have major implications for the current, predominantly Western approach to information ethics. If it is not, there must be
concepts and principles of information ethics that have universal validity. What would they be? I will
begin the chapter with an examination of cultural differences in ethical attitudes towards privacy, freedom
of information, and intellectual property rights in Western and nonwestern cultures. I then analyze the
normative implications of these findings for doing information ethics in a cross-cultural context. I will
argue for a position between moral absolutism and relativism that is based on intercultural understanding
and mutual criticism. Such a position could be helpful in overcoming differences and misunderstandings
between cultures in their approach to information and information technologies.
INTRODUCTION
Information ethics1 has so far mainly been a
topic of research and debate in Western countries, and has mainly been studied by Western
scholars. There is, however, increasing interest
in information ethics in nonwestern countries
like Japan, China and India, and there have been
recent attempts to raise cross-cultural issues in
information ethics (e.g., Mizutani, Dorsey and
Moor, 2004; Ess, 2002; Gorniak-Kocikowska,
1996). Interactions between scholars of Western
and nonwestern countries have brought significant
differences to light between the way in which
they approach issues in information ethics. This
raises the question whether different cultures
require a different information ethics and whether
concepts and approaches in Western information
ethics can be validly applied to the moral dilemmas of nonwestern cultures. In other words, is
information ethics culturally relative or are there
concepts and principles of information ethics that
have universal validity? The aim of this essay is to
arrive at preliminary answers to this question.
MORAL RELATIVISM AND INFORMATION ETHICS
In discussions of moral relativism, a distinction
is commonly made between descriptive and
metaethical moral relativism. Descriptive moral
relativism is the position that as a matter of empirical fact, there is extensive diversity between the
values and moral principles of societies, groups,
cultures, historical periods or individuals. Existing
differences in moral values, it is claimed, are not
superficial but profound, and extend to core moral
values and principles. Descriptive moral relativism is an empirical thesis that can in principle
be supported or refuted through psychological,
sociological and anthropological investigations.
The opposite of descriptive moral relativism is
descriptive moral absolutism, the thesis that there
are no profound moral disagreements between
societies, groups, cultures or individuals. At
issue in this essay will be a specific version of
descriptive moral relativism, descriptive cultural
relativism, according to which there are major
differences between the moral principles of different cultures.
Much more controversial than the thesis of
descriptive moral relativism is the thesis of metaethical moral relativism, according to which
the truth or justification of moral judgments is
not absolute or objective, but relative to societies, groups, cultures, historical periods or individuals.2 Whereas a descriptive relativist could
make the empirical observation that in one society, polygamy is considered moral whereas in another
it is considered immoral, a metaethical relativist
could make the more far-reaching claim that the
statement “polygamy is morally wrong” is true or
justified in some societies while false or unjustified
in others. Descriptive relativism therefore makes
claims about the values that different people or
societies actually have whereas metaethical relativism makes claims about the values that they are
justified in having. Metaethical moral relativism is
antithetical to metaethical moral absolutism, the
thesis that regardless of any existing differences
between moral values in different cultures, societies, or individuals, there are moral principles that
are absolute or objective, and that are universally
true across cultures, societies or individuals. Metaethical moral absolutism would therefore hold
that the statement “polygamy is morally wrong”
is either universally true or universally false;
it cannot be true for some cultures or societies
but false for others. If the statement is true, then
societies that hold that polygamy is moral are in
error, and if it is false, then the mistake lies with
societies that condemn it.
The question being investigated in this essay is
whether information ethics is culturally relative. In
answering this question, it has to be kept in mind
that the principal aims of information ethics are not
descriptive, but normative and evaluative. That is,
its principal aim is not to describe existing morality regarding information but rather to morally
evaluate information practices and to prescribe
and justify moral standards and principles for
practices involving the production, consumption
or processing of information. A claim that information ethics is culturally relative is therefore a
claim that metaethical moral relativism is true for
information ethics. It is to claim that the ethical
values, principles and judgments of information
ethics are valid only relative to a particular culture,
presumably the culture in which they have been
developed. Since information ethics is largely a
product of the West, an affirmation of the cultural
relativity of information ethics means that its
values and principles do not straightforwardly
apply to nonwestern cultures.
But if the cultural relativity of information
ethics depends on the truth of metaethical relativism, does any consideration need to be given
to descriptive relativism for information ethics?
This question should be answered affirmatively.
Defenses of metaethical relativism usually depend
on previous observations that descriptive relativism is true. If descriptive relativism is false, it
follows that people across the world share a moral
framework of basic values and principles. But if
this is the case, then it seems pointless to argue
for metaethical moral relativism: why claim that
the truth of moral judgments is different for different groups if these groups already agree on
basic moral values? On the other hand, if descriptive relativism is true, then attempts to declare
particular moral principles or judgments to be
universally valid come under scrutiny. Extensive
justification would be required for any attempt to
adopt a particular moral framework (say, Western information ethics) as one that is universally
valid. In the next section, I will therefore focus
on the question whether there are good reasons to
believe that there are deep and widespread moral
disagreements about central values and principles
in information ethics across cultures, and whether
therefore descriptive cultural relativism is true for
information ethics.
THE DESCRIPTIVE CULTURAL RELATIVITY OF INFORMATION-RELATED VALUES
In this section, I will investigate the descriptive
cultural relativity of three values that are the topic
of many studies in information ethics: privacy,
intellectual property and freedom of information.
Arguments have been made that these values
are distinctly Western, and are not universally
accepted across different cultures. In what fol-
lows I will investigate whether these claims seem
warranted by empirical evidence. I will also relate
the outcome of my investigations to discussions
of more general differences between Western and
nonwestern systems of morality.
How can it be determined that cultures have
fundamentally different value systems regarding
notions like privacy and intellectual property?
I propose that three kinds of evidence are relevant:
1. Conceptual: The extent to which there are moral concepts across cultures with similar meanings. For example, does Chinese culture have a concept of privacy that is similar to the American concept of privacy?
2. Institutional: The extent to which there is similarity between codified rules that express moral principles and codified statements that express moral judgments about particular (types of) situations. For example, are the moral principles exhibited in the laws and written rules employed in Latin cultures on the topic of privacy sufficiently similar to American laws and rules that it can be claimed that they embody similar moral principles?
3. Behavioral: The similarity between customs and behaviors that appear to be guided by moral principles. This would include tendencies to avoid behaviors that a moral principle marks as immoral, tendencies to show disapproval of those who engage in such behaviors, and tendencies to show remorse or guilt when engaging in them. For instance, if a culture has a shared privacy principle that states that peeking inside someone’s purse is wrong, then it can be expected that most people try not to do this, disapprove of those who do, and feel ashamed or remorseful when they are caught doing it.
It is conceivable that in a particular culture a
value or moral principle is widely upheld at the
behavioral level, but has not (yet) been codified
at the institutional and conceptual level. But this
is perhaps unlikely in cultures with institutions
that include extensive systems of codified rules,
which would include any culture with a modern
legal system. It is also conceivable that a moral
value or principle is embodied in both behavioral
customs and codified rules, but no good match
can be found at the conceptual level. In that case,
it seems reasonable to assume that the value or
principle at issue is embodied in the culture, but
different concepts are used to express it, making
it difficult to find direct translations.
A full consideration of the evidence for descriptive moral relativism along these three lines
is beyond the scope of this paper. I only intend to
consider enough evidence to arrive at a preliminary assessment of the cultural relativity of values
in contemporary information ethics.
Privacy
It has been claimed that in Asian cultures like
China and Japan, no genuine concept or value of
privacy exists. These cultures have been held to
value the collective over the individual. Privacy
is an individual right, and such a right may not
be recognized in a culture where collective interests tend to take priority over individual interests. Using the three criteria outlined above, and
drawing from studies of privacy in Japan, China
and Thailand, I will now consider whether this
conclusion is warranted.
At the conceptual level, there are words in
Japanese, Chinese and Thai that refer to a private
sphere, but these words seem to have substantially different meanings than the English word
for privacy. Mizutani, Dorsey and Moor (2004)
have argued that there is no word for “privacy”
in traditional Japanese. Modern Japanese, they
claim, sometimes adopt a Japanese translation for
the Western word for privacy, which sounds like
“puraibashii”, and which is written in katakana, the Japanese phonetic syllabary mostly used for words of foreign origin. According to Nakada and Tamura (2005), Japanese does
include a word for “private,” “Watakusi”, which
means “partial, secret and selfish”. It is opposed
to “Ohyake”, which means “public”. Things that
are Watakusi are considered less worthy than
things that are Ohyake. Mizutani, Dorsey and
Moor (2004) point out, in addition, that there are
certainly behavioral customs in Japan that amount
to a respect for privacy. There are conventions that
restrict access to information, places or objects.
For example, one is not supposed to look under
clothes on public streets.
In China, the word closest to the English “privacy” is “Yinsi”, which means “shameful secret”
and is usually associated with negative, shameful
things. Lü (2005) claims that only recently has
“Yinsi” come to take on broader meanings
to include personal information, shameful or not,
that people do not want others to know (see also
Jingchun, 2005 and McDougall and Hansson,
2002). This shift in meaning has occurred under
Western influences. As for institutional encoding
of privacy principles, Lü maintains that there
currently are no laws in China that protect an
individual’s right to privacy, and the legal protection of privacy has been weak and is still limited,
though there have been improvements in privacy
protection since the 1980s.
Kitiyadisai (2005), finally, holds that the concept of privacy does not exist in Thailand. She
claims that the Western word privacy was adopted
in the late nineteenth or early twentieth century
in Thailand, being transliterated as “pri-vade,”
but this word gained a distinctly Thai meaning,
being understood as a collectivist rather than an
individual notion. It referred to a private sphere
in which casual dress could be worn, as opposed
to a public sphere in which respectable dress had
to be worn. In the Thai legal system, Kitiyadisai
claims, there was no right to privacy until
the introduction of privacy legislation in
1997 and a Thai constitution, also in 1997, that
for the first time guarantees basic human rights.
Kitiyadisai argues, however, that Thai privacy
laws are hardly enforced in practice, and many
Thais remain unaware of the notion of privacy.
It can be tentatively concluded that the introduction of a concept of privacy similar to the
Western notion has only taken place recently in
Japan, China and Thailand, and that privacy legislation has only been enacted recently. In traditional
Japanese, Chinese and Thai culture, which still
has a strong presence today, distinctions are made
that resemble the Western distinction between
public and private, and customs exist that may be
interpreted as respectful of privacy, but there is
no recognized individual right to privacy.
Intellectual Property Rights
In discussing the cultural relativity of intellectual
property rights (IPR), I will limit myself to one
example: China. China is known for not having a developed notion of private or individual
property. Under communist rule, the dominant
notion of property was collective. All means of
production, such as farms and factories, were to
be collectively owned and operated. Moreover,
the state exercised strict control over the means
of production and over both the public and private
sphere. A modern notion of private property was
only introduced in the late 1980s. Milestones
were a 1988 constitutional revision that allowed
for private ownership of means of production and
a 2004 constitutional amendment that protects
citizens’ private property from encroachment.
The notion of intellectual property has only
recently been introduced in China, in the wake of
China’s recent economic reforms and increased
economic interaction with the West. China is
currently passing IPR laws and cracking down
on violations of IPR in order to harmonize the
Chinese economic system with the rest of the
world. But as journalist Ben Worthen observes,
“[t]he average citizen in China has no need and
little regard for intellectual property. IPR is not
something that people grew up with … and the
percent of citizens who learn about it by engaging in international commerce is tiny.” Worthen
also points out that Chinese companies “have no
incentive to respect IPR unless they are doing
work for Western companies that demand it”
and that “since most of the intellectual property
royalties are headed out of China there isn’t a lot
of incentive for the government to crack down
on companies that choose to ignore IPR.”3 All
in all, it can be concluded that China’s value
system traditionally has not included recognition
of intellectual property rights, and it is currently
struggling with this concept.
Freedom of Information
Freedom of information is often held to comprise
two principles: freedom of speech (the freedom
to express one’s opinions or ideas, in speech or
in writing) and freedom of access to information.
Sometimes, freedom of the press (the freedom to
express oneself through publication and dissemination) is distinguished as a third principle. In
Western countries, freedom of information is often
defined as a constitutional and inalienable right.
Laws protective of freedom of information are often especially designed to ensure that individuals
can exercise this freedom without governmental
interference or constraint. Government censorship
or interference is only permitted in extreme situations, pertaining to such things as hate speech,
libel, copyright violations and information that
could undermine national security.
In many nonwestern countries, freedom of
information is not a guiding principle. There
are few institutionalized protections of freedom
of information, there are many practices that
interfere with freedom of information, and a
concept of freedom of information is not part of
the established discourse in society. In such societies, the national interest takes precedence, and
an independent right to freedom of information
either is not recognized or is made so subordinate
to national interests that it hardly resembles the
Western right to freedom of information. These
are countries in which practices of state censorship are widespread; mass media are largely or
wholly government-controlled, the Internet, databases and libraries are censored, and messages
that do not conform to the party line are cracked
down upon.
Let us, as an example, consider the extent to
which freedom of information can be said to be
a value in Chinese society. Until the 1980s, the
idea of individual rights or civil rights was not a
well-known concept in China. Government was
thought to exist to ensure a stable society and a
prosperous economy. It was not believed to have
a function to protect individual rights against collective and state interests. As a consequence of
this general orientation, the idea of an individual
right to freedom of information was virtually
unknown. Only recently has China introduced
comprehensive civil rights legislation. In its 1982
constitution, China introduced constitutional
principles of freedom of speech and of the press.
And in 1997, it signed the International Covenant on Economic, Social and Cultural Rights,
and in 1998 the International Covenant on Civil
and Political Rights (the latter of which it has not
yet ratified).
Even though the Chinese government has
recently come to recognize a right to freedom of
information, as well as individual human rights
in general, and has introduced legislation to this
effect, state censorship is still rampant, and the
principle of upholding state interest still tends to
dominate the principle of protecting individual
human rights. Internet censorship presents a
good example of this. Internet traffic in China
is controlled through what the Chinese call the
Golden Shield, and what is known outside mainland China as the Great Firewall of China. This
is a system of control in which Internet content is
blocked by routers, as well as at the backbone and
ISP level, through the “filtering” of undesirable
URLs and keywords. A long list of such “forbidden” URLs and keywords has been composed by
the Chinese State Council Information Office, in
collaboration with the Communist Party’s Propaganda Department. This system is especially
geared towards censorship of content coming
from outside mainland China (Human Rights
Watch, 2006).
Rights-Centered and Virtue-Centered Morality
A recurring theme in the above three discussions
has been the absence of a strong tradition of individual rights in the cultures that were discussed
– those of China, Japan and Thailand – and the
priority that is given to collective and state interests. Only very recently have China, Japan and
Thailand introduced comprehensive human rights
legislation, which has occurred mainly through
Western influence, and there is still considerable
tension in these societies, especially in China
and Thailand, between values that prioritize the
collective and the state and values that prioritize
the individual.
Various authors have attempted to explain the
worldview that underlies the value system of these
countries. In Japan and Thailand, and to a lesser
extent China, Buddhism is key to an understanding
of attitudes towards individual rights. Buddhism
holds a conception of the self that is antithetical
to the Western conception of an autonomous
self which aspires to self-realization. Buddhism
holds that the self does not exist and that human
desires are delusional. The highest state that humans can reach is Nirvana, a state of peace and
contentment in which all suffering has ended. To
reach Nirvana, humans have to become detached
from their desires, and realize that the notion of
an integrated and permanent self is an illusion. In
Buddhism, the self is defined as fluid, situation-dependent and ever-changing. As Mizutani et al.
and Kitiyadisai have noted, such a notion of the
self is at odds with a Western notion of privacy
and of human rights in general, notions which
presuppose a situation-independent, autonomous
self which pursues its own self-interests and which
has inalienable rights that have to be defended
against external threats.
In part through Buddhism, but also through
the influence of other systems of belief such as
Confucianism, Taoism and Maoism, societies
like those of China and Thailand have developed
a value system in which the rights or interests
of the individual are subordinate to those of the
collective and the state. To do good is to further
the interests of the collective. Such furtherance
of collective interests will generally also benefit
the individual. The task of government, then, is to
ensure that society as a whole functions well, in a
harmonious and orderly way, and that social ills are
cured, rather than the ills of single individuals. In
other words, government works for the common
good, and not for the individual good.
Only recently have countries like China and
Thailand come to recognize individual human
rights and individual interests next to collective
interests. But according to Lü (2005), the collectivist ethic still prevails:
Adapting to the demands of social diversity, the
predominant ethics now express a new viewpoint
that argues against the simple denial of individual
interests and emphasizes instead the dialectical
unification of collective interests and individual
interests: in doing so, however, this ethics points
out that this kind of unification must take collective
interests as the foundation. That is to say, in the
light of the collectivism principle of the prevailing
ethics, collective interests and individual interests
are both important, but comparatively speaking,
the collective interests are more important than
individual interests. (Lü, 2005, p. 12)
If this observation is correct, then the introduction of human rights legislation and property rights
in countries like China is perhaps not motivated
by a genuine recognition of inalienable individual
human rights, but rather by a recognition that in the
current international climate, it is better to introduce human rights and property rights, because
such principles will lead to greater economic
prosperity, which is ultimately to the benefit of
the collective.
The dominant value systems prevalent in
China, Thailand and Japan are examples of
what philosopher David Wong (1984) has called
virtue-centered moralities. According to Wong,
at least two different approaches to morality can
be found in the world: a virtue-centered morality
that emphasizes the good of the community, and
a rights-centered morality that stresses the value
of individual freedom. Rights-centered morality
is the province of the modern West, although it
is also establishing footholds in other parts of the
world. Virtue-centered morality can be found
in traditional cultures such as can be found in
southern and eastern Asia and in Africa. Wong’s
distinction corresponds with the frequently made
distinction between individualist and collectivist
cultures, which is found, amongst others, in Geert
Hofstede’s well-known five-dimensional model
of cultural difference (Hofstede, 1991). However,
this latter distinction focuses on social systems and
cultural practices, whereas Wong makes a distinction based on differences in moral systems.
In Wong’s conception of virtue-centered moralities, individuals have duties and responsibilities that stem from the central value of a common
good. The common good is conceived of in terms
of an ideal conception of community life, which
is based on a well-balanced social order in which
every member of the community has different duties and different virtues to promote the common
good. Some duties and virtues may be shared by
all members. The idea that human beings have
individual rights is difficult to maintain in this
kind of value system, because recognition of such
rights would have to find its basis in the higher
ideal of the common good. But it seems clear that
attributing rights to individuals is not always to
the benefit of the common good. The recognition
of individual property rights, for example, could
result in individual property owners not sharing
valuable resources that would benefit the whole
community. In virtue-centered moralities, the
ideal is for individuals to be virtuous, and virtuous
individuals are those individuals whose individual
good coincides with their contribution to the
common good. Individual goods may be recognized in such communities, but they are always
subordinate to the common good. Individuals
deserve respect only because of their perceived
contribution to the common good, not because
they possess inalienable individual rights.
Conclusion
The discussion of privacy, intellectual property
rights and freedom of information has shown that a
good case can be made for the descriptive cultural
relativity of these values. These values are central
in information ethics, as it has been developed
in the West. Moreover, it was argued that the
uncovered cultural differences in the appraisal
of these values can be placed in the context of a
dichotomy between two fundamentally different
kinds of value systems that exist in different societies: rights-centered and virtue-centered systems
of value. Information ethics, as it has developed
in the West, has a strong emphasis on rights, and
little attention is paid to the kinds of moral concerns that may exist in virtue-centered systems
of morality. In sum, it seems that the values that
are of central concern in Western information
ethics are not the values that are central in many
nonwestern systems of morality. The conclusion
therefore seems warranted that descriptive moral
relativism is true for information ethics.
METAETHICAL MORAL RELATIVISM AND INFORMATION ETHICS
In the first section of this article, it was argued
that descriptive moral relativism is a necessary
condition for metaethical moral relativism, but
is not sufficient to prove this doctrine. However,
several moral arguments exist that use the truth
of descriptive relativism, together with additional
premises, to argue for metaethical relativism. I
will start with a consideration of two standard
arguments of this form, which are found wanting,
after which I will consider a more sophisticated
argument.
Two Standard Arguments for Metaethical Relativism
There are two traditional arguments for metaethical moral relativism that rely on the truth of
descriptive moral relativism (Wong, 1993). The
one most frequently alluded to is the argument
from diversity. This argument starts with the
observation that different cultures employ widely
different moral standards. Without introducing
additional premises, the argument goes on to
conclude that therefore, there are no universal
moral standards. This argument rests on what is
known in philosophy as a naturalistic fallacy, an
attempt to derive a norm from a fact, or an “ought”
from an “is”. The premise of the argument is
descriptive: there are different moral standards.
The conclusion is normative: no moral standard
has universal validity. No evidence has been
presented that the truth of the premise has any
bearing on the truth of the conclusion.
A second, stronger, argument for moral relativism is the argument from functional necessity,
according to which certain ethical beliefs in a
society may be so central to its functioning that
they cannot be given up without destroying the
society. Consequently, the argument runs, these
ethical beliefs are true for that society, but not
necessarily in another. However, this argument
is also problematic because it grounds the truth
of ethical statements in their practical value for
maintaining social order in a particular society.
Such a standard of justification for ethical statements is clearly too narrow, as it could be used to
justify the moral beliefs of societies whose beliefs
and practices are clearly unethical, for instance
fascist societies. If a society operates in a fundamentally unethical way, then the transformation
of some of its social structures and cultural forms
would seem acceptable if more ethical practices
are the result.
Wong’s and Harman’s Argument for Metaethical Relativism
More convincing arguments for moral relativism
have been presented by David Wong (1984, 2006)
and Gilbert Harman (1996, 2000). Their argument runs, in broad outline, as follows. There are
deep-seated differences in moral belief between
different cultures. Careful consideration of the
reasons for these moral beliefs shows
that they are elements of different strategies to
realize related but different conceptions of the
Good. No good arguments can be given why one
of these conceptions of the Good is significantly
better than all the others. Therefore, these moral
beliefs are best explained as different but (roughly)
equally valid strategies for attaining the Good.
This is a much better argument than the previous two, since it puts the ball in the metaethical
absolutist’s court: he will have to come up with
proof that it is possible to provide good arguments
for the superiority of one particular conception of
the Good over all other conceptions. Metaethical
absolutists can respond to this challenge in two
ways. First, they may choose to bite the bullet
and claim that a rational comparison of different
conceptions of the Good is indeed possible. Different conceptions of the Good, they may argue,
rely on factual or logical presuppositions that
may be shown to be false. Alternatively, they
may argue that there are universally shared moral
intuitions about what is good, and these intuitions
can be appealed to in defending or discrediting
particular conceptions of the Good. For instance,
an individual who believes that physical pleasure is
the highest good could conceivably be persuaded
to abandon this belief through exposure to arguments that purport to demonstrate that there are
other goods overlooked by him that are at least
as valuable. Such an argument could conceivably
rely on someone’s moral intuitions about the Good
that could be shown to deviate from that person’s
explicit concept of the Good.
Second, a mixed position could be proposed,
according to which it is conceded that individuals
or cultures may hold different conceptions of the
Good that cannot be rationally criticized (pace
metaethical relativism) but that rational criticism of individual moral beliefs is nevertheless
possible (pace metaethical absolutism) because
these beliefs can be evaluated for their effectiveness in realizing the Good in whose service they
stand. After all, if moral beliefs are strategies to
realize a particular conception of the Good, as
Wong and Harman have argued, then they can
be suboptimal in doing so. A belief that Internet
censorship is justified because it contributes to
a more stable and orderly society can be wrong
because it may not in fact contribute to a more
stable and orderly society. Empirical arguments
may be made that Internet censorship is not necessary for the maintenance of social order, or even
that Internet censorship may ultimately work to
undermine social order, for example because it
creates discontentment and resistance.
In the existing dialogue between proponents
of rights-centered and virtue-centered systems
of morality, it appears that both these approaches
are already being taken. Western scholars have
criticized the organic conception of society that
underlies conceptions of the Good in many Asian
cultures, while Western definitions of the Good
in terms of individual well-being have been
criticized for their atomistic conception of individuals. Rights-based systems of morality have
been criticized for undervaluing the common
good, whereas virtue-based systems have been
criticized for overlooking the importance of the
individual good. In addition, both rights-centered
and virtue-centered systems of morality have
been criticized for not being successful by their
own standards. Western individualism has been
claimed to promote selfishness and strife, which
results in many unhappy individuals plagued
by avarice, poverty, depression and loneliness.
Western societies have therefore been claimed
to be unsuccessful in attaining their own notion of the Good, defined in terms of individual
well-being. Virtue-centered cultures have been
claimed to have difficulty in developing strong
economies that serve the common good, because
good economies have been argued to require private enterprise and a more individualist culture. In
addition, strong state control, which is a feature of
many virtue-centered cultures, has been argued
to lead to corruption and totalitarianism, which
also do not serve the common good.
In light of the preceding observations, it seems
warranted to conclude, pace metaethical absolutism, that rational criticism between different moral
systems is possible. It does not follow, however,
that conclusive arguments for universal moral
truths or the superiority of one particular moral
system over others are going to be possible. Critics of a particular moral system may succeed in
convincing its adherents that the system has its
flaws and needs to be modified, but it could well
be that no amount of criticism ever succeeds in
convincing its adherents to abandon core moral
beliefs within that system, however rational and
open-minded these adherents are in listening to
such criticism.
Conclusion
I have argued, pace metaethical relativism, that it
is difficult if not impossible to provide compelling
arguments for the superiority of different notions
of the Good that are central to different moral
systems, and by implication, that it is difficult to
present conclusive arguments for the universal
truth of particular moral principles and beliefs. I
have also argued, pace metaethical absolutism,
that it is nevertheless possible to develop rational
arguments for and against particular moral values
and overarching conceptions of the Good across
moral systems, even if such arguments do not
result in proofs of the superiority of one particular
moral system or moral principle over another.
From these two metaethical claims, a normative position can be derived concerning the
way in which cross-cultural ethics ought to take
place. It follows, first of all, that it is only justified for proponents of a particular moral value
or principle to claim that it ought to be accepted
in another culture if they make this claim on the
basis of a thorough understanding of the moral
system operative in this other culture. The proponent would have to understand how this moral
system functions and what notion of the Good it
services, and would have to have strong arguments
that either the exogenous value would be a good
addition to the moral system in helping to bring
about the Good serviced in that moral system,
or that the notion of the Good serviced in that
culture is flawed and requires revisions. In the
next section, I will consider implications of this
position for the practice of information ethics in
cross-cultural settings.
INFORMATION ETHICS IN A CROSS-CULTURAL CONTEXT
It is an outcome of the preceding sections that
significant differences exist between moral systems of different cultures, that these differences
have important implications for moral attitudes
towards uses of information and information
technology, and that there are good reasons to
take such differences seriously in normative studies in information ethics. In this section, I will
argue, following Rafael Capurro, that we need an
intercultural information ethics that studies and
evaluates cultural differences in moral attitudes
towards information and information technology.
I will also critically evaluate the claim that the In-
ternet will enable a new global ethic that provides
a unified moral framework for all cultures.
Intercultural Information Ethics
The notion of an intercultural information ethics (IIE) was first introduced by Rafael Capurro
(2005, 2007), who defined it as a field of research
in which moral questions regarding information
technology and the use of information are reflected
on in a comparative manner on the basis of different cultural traditions. I will adopt Capurro’s
definition, but differ with him on what the central
tasks of an IIE should be. Capurro defines the
tasks of IIE very broadly. For him, they do not only
involve the comparative study of value systems
in different cultures in relation to their use of
information and information technology, but also
studies of the effect of information technology on
customs, languages and everyday problems, the
changes produced by the Internet on traditional
media, and the economic impact of the Internet
to the extent that it can become an instrument of
cultural oppression and colonialism.
I hold, in contrast, that studies of the effects of information technology in non-Western
cultures are more appropriately delegated to the
social sciences (including communication studies,
cultural studies, anthropology and science and
technology studies). An intercultural information
ethics should primarily focus on the comparative
study of moral systems. Its overall aim would be
to interpret, compare and critically evaluate moral
systems in different cultures regarding their moral attitudes and behavior towards information and information technology.
This task for IIE can be broken down into
four subtasks, the first two of which are exercises
in descriptive ethics and the latter two of which
belong to normative ethics. First, IIE should engage in interpretive studies of moral systems in
particular cultures, including the systems of value
contained in the religious and political ideologies
that are dominant in these cultures. The primary
focus in such interpretive studies within the context of IIE should be on resulting moral attitudes
towards the use and implications of information
technology and on the moral problems generated
by uses of information technology within the
context of the prevailing moral system. Second,
IIE should engage in comparative studies of moral
systems from different cultures, and arrive at
analyses of both similarities and differences in
the way that these moral systems are organized
and operate, with a specific focus on the way in
which they have different moral attitudes towards
implications of information technology and on
differences in moral problems generated by the
use of information technology.
Third, IIE should engage in critical studies
in which the moral systems of particular cultures are criticized based on the insights gained
through the interpretive and comparative studies
alluded to above, particularly in their dealings
with information technology. Critical studies
may be directed towards criticizing moral values
and beliefs in cultures other than one’s own, and
proposing modifications in the culture’s moral
system and ways in which it should solve moral
problems, but may also involve self-criticism,
in which one’s own moral values and the moral
system of one’s own culture is criticized based
on insights gained from the study of alternative
moral systems. Fourth, IIE should engage in
interrelational studies that focus on the construction of normative models for interaction between
cultures in their dealings with information and
information technology that respect their different moral systems. Interrelational studies hence
investigate what moral compromises cultures can
make and ought to make in their interactions and
what shared moral principles can be constructed
to govern their interactions.
Global Ethics and the Information Revolution
Some authors have argued that globalization and
the emergence of the Internet have created a global
community, and that this community requires its
own moral system that transcends and unifies the
moral systems of all cultures and nations that
participate in this global community. The ethics needed for the construction of such a moral
system has been called global ethics. The idea of
a global ethics or ethic was first introduced by
German theologian Hans Küng in 1990 and later
elaborated by him in a book (Küng, 2001). His
aim was to work towards a shared moral framework for humanity that would contain a minimal
consensus concerning binding values and moral
principles that could be invoked by members of
a global community in order to overcome differences and avoid conflict.
Krystyna Górniak-Kocikowska (1996) has
argued that the computer revolution that has taken
place has made it clear that a future global ethic
will have to be a computer ethic or information
ethic. As she explains, actions in cyberspace are
not local, and therefore the ethical rules governing such actions cannot be rooted in a particular
local culture. Therefore, unifying ethical rules
have to be constructed in cyberspace that can
serve as a new global ethic. Similar arguments
have been presented by Bao and Xiang (2006)
and De George (2006).
No one would deny that a global ethic, as
proposed by Küng, would be desirable. The construction of an explicit, shared moral framework
that would bind all nations and cultures would
evidently be immensely valuable. It should be
obvious, however, that such a framework could
only develop as an addition to existing local
moral systems, not as a replacement of them.
It would be a framework designed to help solve
global problems, and would exist next to the local moral systems that people use to solve their
local problems. In addition, it remains to be seen
if cross-cultural interactions over the Internet
yield more than a mere set of rules for conduct
online, a global netiquette, and will result in a
global ethic that can serve as a common moral
framework for intercultural dialogue and joint
action. Hongladarom (2001) has concluded, based
on empirical studies, that the Internet does not
create a worldwide monolithic culture but rather
reduplicates existing cultural boundaries. It does
create an umbrella cosmopolitan culture to some
extent, but only for those Internet users who engage
in cross-cultural dialogue, which is a minority,
and this umbrella culture is rather superficial.
Claims that the Internet will enable a new global
ethic may therefore be somewhat premature. In
any case, such intercultural dialogue online will
have to be supplemented with serious academic
work in intercultural information ethics, as well
as intercultural ethics at large.
CONCLUSION
It was found in this essay that very different moral
attitudes exist in Western and nonwestern countries regarding three key issues in information ethics: privacy, intellectual property, and freedom of
information. In nonwestern countries like China,
Japan and Thailand, there is no strong recognition
of individual rights in relation to these three issues.
These differences were analyzed in the context
of a difference, proposed by philosopher David
Wong, between rights-centered moralities that
dominate in the West and virtue-centered moralities that prevail in traditional cultures, including
those in South and East Asia. It was then argued
that cross-cultural normative ethics cannot be
practiced without a thorough understanding of
the prevailing moral system in the culture that is
being addressed. When such an understanding
has been attained, scholars can proceed to engage
in moral criticism of practices in the culture and
propose standards and solutions to moral problems. It was argued, following Rafael Capurro,
that we need an intercultural information ethics
that engages in interpretive, comparative and
normative studies of moral problems and issues
in information ethics in different cultures. It is
to be hoped that researchers in both Western and
nonwestern countries will take up this challenge
and engage in collaborative studies and dialogue
on an issue that may be of key importance to
future international relations.
REFERENCES

Bao, X., & Xiang, Y. (2006). Digitalization and global ethics. Ethics and Information Technology, 8, 41–47.

Capurro, R. (2005). Privacy: An intercultural perspective. Ethics and Information Technology, 7(1), 37–47.

Capurro, R. (2007). Intercultural information ethics. In R. Capurro, J. Frühbauer, & T. Hausmanninger (Eds.), Localizing the Internet: Ethical issues in intercultural perspective. Munich: Fink Verlag.

De George, R. (2006). Information technology, globalization and ethics. Ethics and Information Technology, 8, 29–40.

Ess, C. (2002). Computer-mediated colonization, the Renaissance, and educational imperatives for an intercultural global village. Ethics and Information Technology, 4(1), 11–22.

Górniak-Kocikowska, K. (1996). The computer revolution and the problem of global ethics. Science and Engineering Ethics, 2, 177–190.

Harman, G. (1996). Moral relativism. In G. Harman & J. J. Thomson (Eds.), Moral relativism and moral objectivity (pp. 3–64). Cambridge, MA: Blackwell Publishers.

Harman, G. (2000). Is there a single true morality? In G. Harman, Explaining value: And other essays in moral philosophy (pp. 77–99). Oxford: Clarendon Press. (Original work published 1984)

Hofstede, G. (2001). Culture’s consequences. Beverly Hills, CA: Sage.

Hongladarom, S. (2001). Global culture, local cultures and the Internet: The Thai example. In C. Ess (Ed.), Culture, technology, communication: Towards an intercultural global village (pp. 307–324). Albany, NY: State University of New York Press.

Human Rights Watch (2006). Race to the bottom: Corporate complicity in Chinese Internet censorship. Human Rights Watch Report, 18(8). Retrieved March 13, 2008, from http://www.hrw.org/reports/2006/china0806/

Jingchun, C. (2005). Protecting the right to privacy in China. Victoria University of Wellington Law Review, 38(3). Retrieved March 13, 2008, from http://www.austlii.edu.au/nz/journals/VUWLRev/2005/25.html

Johnson, D. (2000). Computer ethics (3rd ed.). Upper Saddle River, NJ: Prentice Hall.

Kitiyadisai, K. (2005). Privacy rights and protection: Foreign values in modern Thai context. Ethics and Information Technology, 7, 17–26.

Küng, H. (2001). A global ethic for global politics and economics. Hong Kong: Logos and Pneuma Press.

Lü, Y.-H. (2005). Privacy and data privacy issues in contemporary China. Ethics and Information Technology, 7, 7–15.

McDougall, B., & Hansson, A. (Eds.) (2002). Chinese concepts of privacy. Leiden: Brill Academic Publishers.

Mizutani, M., Dorsey, J., & Moor, J. (2004). The Internet and Japanese conception of privacy. Ethics and Information Technology, 6(2), 121–128.

Nakada, M., & Tamura, T. (2005). Japanese conceptions of privacy: An intercultural perspective. Ethics and Information Technology, 7, 27–36.

Wong, D. (1984). Moral relativity. Berkeley, CA: University of California Press.

Wong, D. (1993). Relativism. In P. Singer (Ed.), A companion to ethics (pp. 442–450). Cambridge, MA: Blackwell.

Wong, D. (2006). Natural moralities: A defense of pluralistic relativism. Oxford: Oxford University Press.
Key Terms

Cultural Values: Values shared by the members of a culture.

Freedom of Information: The freedom, without interference by others, to communicate or have access to information.

Information Ethics: The study of ethical issues in the use of information and information technology.

Intellectual Property Rights: Rights regarding the use and commercial exploitation of creations of the mind, such as literary and artistic works, symbols, names, images, and inventions.

Moral Relativism: This names either the position that, as a matter of empirical fact, there is extensive diversity between the values and moral principles of societies, groups, cultures, historical periods or individuals (descriptive moral relativism), or the position that the truth or justification of moral judgments is not absolute or objective, but relative to societies, groups, cultures, historical periods or individuals (metaethical moral relativism).

Privacy: The extent to which individuals are able to determine themselves whether personal information about them is divulged or whether they can maintain a personal space free from interference by others.

Endnotes

1. By information ethics I mean the study of ethical issues in the use of information and information technology. Contemporary information ethics is a result of the digital revolution (or information revolution) and focuses mainly on ethical issues in the production, use and dissemination of digital information and information technologies. It encloses the field of computer ethics (Johnson, 2000) as well as concerns that belong to classical information ethics (which was a branch of library and information science), media ethics and journalism ethics.

2. This doctrine is called metaethical rather than normative because it does not make any normative claims, but rather makes claims about the nature of moral judgments. Normative moral relativism would be the thesis that it is morally wrong to judge or interfere with the moral practices of societies, groups, cultures or individuals who have moral values different from one’s own. This is a normative thesis because it makes prescriptions for behavior.

3. Worthen, B. (2006). Intellectual Property: China’s Three Realities. CIO Blogs. Retrieved October, 2006, from http://blogs.cio.com/intellectual_property_chinas_three_realities.
AMY KAPCZYNSKI
The Law of Informational Capitalism
The Age of Surveillance Capitalism:
The Fight for a Human Future at the New Frontier of Power
BY SHOSHANA ZUBOFF
PUBLICAFFAIRS,
2019
Between Truth and Power:
The Legal Constructions of Informational Capitalism
BY JULIE E. COHEN
OXFORD UNIVERSITY PRESS,
2019
abstract. Over the past several decades, our capacity to technologically process and exchange data and information has expanded dramatically. An early sense of optimism about these
developments has given way to widespread pessimism, in the wake of a wave of revelations about
the extent of digital tracking and manipulation. Shoshana Zuboff’s book, The Age of Surveillance
Capitalism, has been hailed by many as the decisive account of the looming threat of private power
in the digital age. While the book offers important insights, Zuboff’s account is too narrow: it
fixates on technological threats to our autonomy and obscures the relationship between technology
and the problems of monopoly, inequality, and discriminatory hierarchy that threaten our democracy. Zuboff’s book also fails to appreciate the critical role that law plays in the construction and
persistence of private power. Julie Cohen’s book, Between Truth and Power: The Legal Constructions
of Informational Capitalism, gives us a much better framework to comprehend intensifying forms
of private power today and the role that law has played in supporting them. Drawing on Cohen’s
insights, I construct an account of the “law of informational capitalism,” with particular attention
to the law that undergirds platform power. Once we come to see informational capitalism as contingent upon specific legal choices, we can begin to consider how democratically to reshape it.
Though Cohen does not emphasize it, some of the most important legal developments—specifically, developments in the law of takings, commercial speech, and trade—are those that encase
private power from democratic revision. Today’s informational capitalism brings a threat not
merely to our individual subjectivities but to equality and our ability to self-govern. Questions of
data and democracy, not just data and dignity, must be at the core of our concern.
author. Professor of Law, Yale Law School. I thank Yochai Benkler, Marion Fourcade, and
David Grewal for their generous and insightful comments.
book review contents

introduction 1462

i. the power and limits of surveillance capitalism 1467
A. The Rise of Surveillance Capitalism 1467
B. The Limits of Zuboff’s Account 1472

ii. private power in an age of informational capitalism 1480
A. Capitalism and Its Laws 1480
B. Informational Capitalism and the Rise of Platform Power 1485
C. Neoliberalism and the Construction of Private Power 1490

iii. the law of informational capitalism 1496
A. How Law Empowers Informational Capitalists 1498
B. The Encasement of Informational Capitalism 1508

conclusion 1514
the yale law journal
129:1460
2020
introduction
Over the past several decades, a series of extraordinary technological developments has drastically expanded human capacities to store, exchange, and process data and information. Early attempts to understand this phenomenon were
often optimistic in tone. In our new information age, influential voices argued,
we could live more freely and with less scarcity, leveraging the nonrivalry of information, innate human tendencies to create,1 and the wisdom of crowds.2 Digital networks were celebrated for empowering sharing and new forms of creative
production,3 and information technologies were commonly described as enabling—if not guaranteeing—a more empowering workplace and higher living
standards for all.4
Today’s mood about these technological developments is decidedly darker,
filtered through a myriad of recent revelations. Facebook has experimented on
us to influence our emotional states.5 Cambridge Analytica sought to mobilize
1. See, e.g., John Perry Barlow, A Declaration of the Independence of Cyberspace, ELECTRONIC FRONTIER FOUND. (Feb. 8, 1996), https://www.eff.org/cyberspace-independence [https://perma.cc/9DVN-R4F6]; John Perry Barlow, Selling Wine Without Bottles: The Economy of Mind on the Global Net, ELECTRONIC FRONTIER FOUND., https://www.eff.org/pages/selling-wine-without-bottles-economy-mind-global-net [https://perma.cc/ERN4-NP9W]; Eben Moglen, The dotCommunist Manifesto (Jan. 2003), http://emoglen.law.columbia.edu/publications/dcm.html [https://perma.cc/ZRW4-N4A3]; see also RICHARD STALLMAN, The GNU Manifesto, in FREE SOFTWARE, FREE SOCIETY: SELECTED ESSAYS OF RICHARD STALLMAN 33, 41 (Joshua Gay ed., 2002) (arguing that computer programming, managed according to free software principles, would allow us to take “a step toward the post-scarcity world, where nobody will have to work very hard just to make a living”). All of these writers understood these possibilities as contingent upon legal and policy choices, and therefore as possible rather than inevitable. They argued that pervasively networked digital technologies would dramatically enhance freedom and production, if the new modalities of production that were emerging were freed from overaggressive assertions of intellectual property rights.
2. CLAY SHIRKY, HERE COMES EVERYBODY: THE POWER OF ORGANIZING WITHOUT ORGANIZATIONS (2008); JAMES SUROWIECKI, THE WISDOM OF CROWDS (2004).
3. YOCHAI BENKLER, THE WEALTH OF NETWORKS: HOW SOCIAL PRODUCTION TRANSFORMS MARKETS AND FREEDOM 3 (2006); LAWRENCE LESSIG, THE FUTURE OF IDEAS: THE FATE OF THE COMMONS IN A CONNECTED WORLD 120-41 (2001). These accounts, too, noted that the potential for sharing and distributed production was contingent in important ways on law.
4. See, e.g., ERIK BRYNJOLFSSON & ANDREW MCAFEE, THE SECOND MACHINE AGE: WORK, PROGRESS, AND PROSPERITY IN A TIME OF BRILLIANT TECHNOLOGIES (2014).
5. See, e.g., Vindu Goel, Facebook Tinkers with Users’ Emotions in News Feed Experiment, Stirring Outcry, N.Y. TIMES (June 29, 2014), https://www.nytimes.com/2014/06/30/technology/facebook-tinkers-with-users-emotions-in-news-feed-experiment-stirring-outcry.html [https://perma.cc/5NL6-2QYB].
1462
the law of informational capitalism
surreptitiously harvested Facebook data to influence elections.6 Trolls and bots—
some independent, others backed by governments—use social-media platforms
deliberately to sow discord and spread misinformation.7 Evidence has emerged
that click-driven social media may have polarizing effects.8 Employers are using
digital technologies to watch and manipulate workers.9 We have begun to worry
that these new capabilities are changing who we are—that our relationships,
sleep, concentration, and even our humanity are being unraveled by our compulsive relationships to computers, apps, and social networks.10
6. See, e.g., Carole Cadwalladr & Emma Graham-Harrison, Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach, THE GUARDIAN (Mar. 17, 2018, 6:03 PM EDT), https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election [https://perma.cc/554Q-J3BK]; Paul Chadwick, How Many People Had Their Data Harvested by Cambridge Analytica?, THE GUARDIAN (Apr. 16, 2018, 2:00 AM EDT), https://www.theguardian.com/commentisfree/2018/apr/16/how-many-people-data-cambridge-analytica-facebook [https://perma.cc/P5VY-G6XW].
7. See, e.g., Chris Baraniuk, How Twitter Bots Help Fuel Political Feuds, SCI. AM. (Mar. 27, 2018), https://www.scientificamerican.com/article/how-twitter-bots-help-fuel-political-feuds [https://perma.cc/YY6W-QNGT]; Amanda Robb, Anatomy of a Fake News Scandal, ROLLING STONE (Nov. 16, 2017, 3:07 PM ET), https://www.rollingstone.com/politics/politics-news/anatomy-of-a-fake-news-scandal-125877 [https://perma.cc/GMP5-G39P]; Tim Starks, Laurens Cerulus & Mark Scott, Russia’s Manipulation of Twitter Was Far Vaster Than Believed, POLITICO (June 5, 2019, 6:00 AM EDT), https://www.politico.com/story/2019/06/05/study-russia-cybersecurity-twitter-1353543 [https://perma.cc/PL9Y-DT97].
8. See, e.g., YOCHAI BENKLER ET AL., NETWORK PROPAGANDA: MANIPULATION, DISINFORMATION, AND RADICALIZATION IN AMERICAN POLITICS 281-86 (2018); Zeynep Tufekci, Opinion, YouTube, the Great Radicalizer, N.Y. TIMES (Mar. 10, 2018), https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html [https://perma.cc/KE3A-HZPN].
9. See Brishen Rogers, Worker Surveillance and Class Power, LAW & POL. ECON. (July 11, 2018), https://lpeblog.org/2018/07/11/worker-surveillance-and-class-power [https://perma.cc/CCC9-M5PA].
10. See ADAM ALTER, IRRESISTIBLE: THE RISE OF ADDICTIVE TECHNOLOGY AND THE BUSINESS OF KEEPING US HOOKED (2017); NICHOLAS CARR, THE SHALLOWS: WHAT THE INTERNET IS DOING TO OUR BRAINS (2010); SHERRY TURKLE, ALONE TOGETHER: WHY WE EXPECT MORE FROM TECHNOLOGY AND LESS FROM EACH OTHER (2011); Nellie Bowles, A Dark Consensus About Screens and Kids Begins to Emerge in Silicon Valley, N.Y. TIMES (Oct. 26, 2018), https://www.nytimes.com/2018/10/26/style/phones-children-silicon-valley.html [https://perma.cc/X94K-ELNM].
11. SHOSHANA ZUBOFF, THE AGE OF SURVEILLANCE CAPITALISM: THE FIGHT FOR A HUMAN FUTURE AT THE NEW FRONTIER OF POWER (2019).

the yale law journal 129:1460 (2020)

Enter The Age of Surveillance Capitalism, by Harvard Business School Professor emerita Shoshana Zuboff.11 A nearly 700-page indictment of the business model of most top internet firms, it has been compared to the works of Adam Smith, Max Weber, Karl Polanyi, Thomas Piketty, and Karl Marx.12 It also has
been dubbed the Silent Spring of the information age.13 Zuboff’s argument is
structured as an urgent call to action: we have entered a new era of “surveillance
capitalism,” she contends, that operates by “unilaterally claim[ing] human experience as free raw material for translation into behavioral data,” and processing
that data to “anticipate what you will do now, soon, and later.”14 Companies operating in this mode seek not just to predict but to “shape our behavior at scale.”15
Companies like Google and Facebook possess, she declares, a new species of “instrumentarian power,” the power to “shape[] human behavior toward others’
ends.”16 The result: an economic system that “will thrive at the expense of human nature and will threaten to cost us our humanity.”17
12. See, e.g., Sam Biddle, “A Fundamentally Illegitimate Choice”: Shoshana Zuboff on the Age of Surveillance Capitalism, INTERCEPT (Feb. 2, 2019, 8:00 AM), https://theintercept.com/2019/02/02/shoshana-zuboff-age-of-surveillance-capitalism [https://perma.cc/8BH9-5YNJ] (noting that Zuboff’s book “is already drawing comparisons to seminal socioeconomic investigations like . . . Karl Marx’s ‘Capital’”); Nicholas Carr, Thieves of Experience: How Google and Facebook Corrupted Capitalism, L.A. REV. BOOKS (Jan. 15, 2019), https://lareviewofbooks.org/article/thieves-of-experience-how-google-and-facebook-corrupted-capitalism [https://perma.cc/DRG4-4YP5] (“Like another recent masterwork of economic analysis, Thomas Piketty’s 2013 Capital in the Twenty-First Century, the book challenges assumptions, raises uncomfortable questions about the present and future, and stakes out ground for a necessary and overdue debate.”); John Naughton, ‘The Goal Is to Automate Us’: Welcome to the Age of Surveillance Capitalism, THE GUARDIAN (Jan. 20, 2019, 2:00 AM EST), https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook [https://perma.cc/V3C7-GGNK] (noting that Zuboff’s “vast . . . book is a continuation of a tradition that includes Adam Smith, Max Weber, Karl Polanyi and—dare I say it—Karl Marx”).
13. Biddle, supra note 12 (likening Zuboff’s book to Rachel Carson’s Silent Spring, in that both are “alarming exposé[s] about how business interests have poisoned our world”); Jeff vonKaenel, Opinion, Big Tech vs. 7.5 Billion Earthlings, SACRAMENTO NEWS & REV. (Mar. 28, 2019), https://www.newsreview.com/sacramento/big-tech-vs-7-5-billion/content?oid=27927700 [https://perma.cc/2BLX-K6ZD] (arguing that Zuboff’s book “provides a similar intellectual framework from which to launch a tech regulation movement” as Rachel Carson’s Silent Spring did “to launch the environmental movement”).
14. ZUBOFF, supra note 11, at 8.
15. Id.
16. Id.
17. Id. at 11-12.

I describe the core of Zuboff’s account in Part I—and its virtues. The book is extraordinarily acute in its grasp of the business models and aspirations of the largest internet firms and describes in exquisite detail why they are deeply troubling. And while overwritten and overlong, the account is also strikingly accessible. In an age of information glut, where so much private power is accumulated in secret, it is no small thing to break through the noise to articulate complex
problems and ideas. But by this same token, it is of real significance if the account
is partial or misleading. And in important ways, it is.
Zuboff is right that our autonomy and individuality are today at risk in new
ways. But she has little to say about the monopoly power of new platforms, or
about their role in reshaping labor markets and intensifying forms of inequality.
She ignores the fact that we are not all equally vulnerable to these new forms of
power. Part of the problem, as I will describe, is her relentless focus on individual
autonomy and her cheery attitude toward all forms of capitalism that are not
organized around surveillance. Given the manifesto-like quality of the book, it
is something of a shock when you realize that Zuboff’s dream is a world dominated by firms like Apple, instead of firms like Google.18 That view, once uncovered, has little appeal, nor does it help us think about many of the extraordinarily
important modes of private power facilitated by information technologies today.
Zuboff also claims at several points that surveillance capitalism is built on
“lawlessness.”19 In her account, markets in data exist beyond law and operate by
their own rules. It is not hard to see where she gets this view: dip into legal
scholarship and you will quickly learn that no one owns data.20 But the view that
the operations of Google and Facebook occur in a law-free zone—or even that
those companies would so desire—is wrong. It conceals the degree to which
these companies rely upon law for their power and the many legal decisions that
could be altered to enhance public power. If we are to intervene to democratize
the forms of private power Zuboff describes, we must understand how law helps
to construct them.
Fortunately, Julie Cohen has written a book that gives us a better, broader
framework through which to understand private power in the information age
and that also does superb work to trace how law has shaped (and been shaped
by) that power. In Between Truth and Power: The Legal Constructions of Informational Capitalism, Cohen argues that we live not in an age of “surveillance” capitalism—which trains our focus on dynamics of surveillance and behavioral control—but in an age of “informational capitalism”—which focuses our attention on informationalism as a broader mode of development in the contemporary political economy.21 Her broader framework captures transformations across a much wider range of settings and calls attention not only to Zuboff’s instrumentarian power but also to rising platform power, monopoly power, and the power that technology can give capital over workers and governments over the governed. She also shows how these changes are mediated at every moment by law: for example, law has enabled de facto property regimes in both data and algorithms, although neither is formally property.22

18. Zuboff celebrates the “unprecedented magnitude of Apple’s accomplishments,” which she attributes to the firm’s ability to “tap[] into a new society of individuals and their demand for individualized consumption,” for example, by creating iTunes and the iPod. Id. at 30; see also infra text accompanying note 26 (describing Zuboff’s enthusiasm for the “advocacy-oriented capitalism” model that she associates with Apple).
19. See, e.g., ZUBOFF, supra note 11, at 103 (“A key element of Google’s freedom strategy was its ability to discern, construct, and stake its claim to unprecedented social territories that were not yet subject to law.”); id. at 104 (“[L]awlessness has been a critical success factor in the short history of surveillance capitalism.”).
20. See, e.g., Lothar Determann, No One Owns Data, 70 HASTINGS L.J. 1, 5 (2018); Mark A. Lemley, Private Property, 52 STAN. L. REV. 1545, 1547 (2000); Pamela Samuelson, Privacy as Intellectual Property?, 52 STAN. L. REV. 1125, 1129 (2000).
Cohen’s account is complex and extremely dense: it demands patience of the
reader and requires some elaboration, which I undertake in Part II. The reward
is substantial: Cohen allows us sophisticated insight into the law and political
economy of the reigning productive paradigm.23 Building upon it in Part III, I
aim to sketch what we might call the “law of informational capitalism” and add
an account of the conceptual moves that have helped bring this law about.
How do allegedly assetless wonders like Uber and Airbnb mobilize capital and extract profits? They rely upon laws that have been transformed to enable the creation and accumulation of immaterial capital. These include changes across a wide range of fields such as trade secrecy, contract, intermediary immunities, privacy, and the First Amendment. Historians of capitalism and neoliberalism have emphasized how both systems have enacted not just laws that enable markets but also laws that protect them from democratic majorities that might remake them. Informational capitalism, I show, is no different. Three moves are critical here: the attempt to absorb trade secrets and data as forms of property protected from “takings” and from government disclosure; the attempt to insulate the activities of data brokers and software companies by claiming that they are purveyors of speech protected by the First Amendment; and the attempt to insulate markets from domestic control by internationalizing key components of the law of informational capitalism. All three could be mobilized to make the building of democratic power over informational capitalism difficult, and all three will be the terrain of significant struggle as such efforts unfold.

21. JULIE E. COHEN, BETWEEN TRUTH AND POWER: THE LEGAL CONSTRUCTIONS OF INFORMATIONAL CAPITALISM 5 (2019); see infra Part II; see also infra text accompanying note 40 (defining surveillance capitalism more fully); infra text accompanying notes 132-144 (defining informational capitalism more fully). Cohen relies on Manuel Castells’s influential definition of informational capitalism. See MANUEL CASTELLS, THE RISE OF THE NETWORK SOCIETY 21 n.31 (2d ed. 2010).
22. COHEN, supra note 21, at 44.
23. Cohen’s account, as I will describe, self-consciously joins a field of emerging “law and political economy” scholarship. See infra text accompanying note 127.
These three forms of encasement are evidence that informational capitalism
brings a threat not merely to our individual subjectivities but to our ability to
self-govern. Questions of data and democracy, not just data and dignity, must
be at the core of our concern today. By mapping the law of data capitalism as a
series of doctrines, statutes, and underlying logics, we can begin to see how law,
legal thought, and technical systems have worked together to enable substantial
new forms of private power. We can also explore the levers we have to tame
them.
Legal scholarship has an important role to play here. Cohen’s book shows
that law and legal thought have played key facilitating roles in these developments. The wave of legal changes required to enable today’s extreme concentrations of private power was ushered in by ideas and tropes distinctive to our neoliberal era.24 The “open access” movement, and the publicly minded
intellectual-property scholars who influenced its shape (of which I am one), as I
will describe, did not escape the gravity exerted by this era. Though we did not
wish it, our ideas have helped consolidate, or at least have not adequately contested, these vast new forms of private power. Today, we need a new departure
for legal scholarship in this domain and a more serious engagement with the
political economy of data, grounded in the recognition that data is a social relation—an artifact not only of human cognition but also of legal structures.25
i. the power and limits of surveillance capitalism
A. The Rise of Surveillance Capitalism
Zuboff develops her definition of surveillance capitalism substantially through a close analysis of one trailblazing firm: Google. In the beginning, Google was just an ordinary capitalist firm working under a model that Zuboff calls “advocacy-oriented capitalism.” This was a virtuous form, exemplified by Apple and its iPod, that fused digitization and capitalism to better serve users and provide a more individuated, less “mass” consumer experience.26 Google was born in this era, and originally followed its pattern: it used our online “data exhaust” to turn its “search engine into a recursive learning system that constantly improved search results.”27 Our online traces were, it realized, a “broad sensor of human behavior,” which when combined and analyzed could yield extraordinary insights.28 By feeding data about website links, click-through rates, and revealed interests into its Page Rank algorithm, Google could provide us with more accurate search results.29 This lesson was soon applied to other “product innovations such as spell check, translation, and voice recognition.”30

24. See infra text accompanying note 126. For a definition of neoliberalism, see infra Section II.C.
25. See infra Section III.A.
26. ZUBOFF, supra note 11, at 29-30.
To understand the story here, it helps to know a little about recent technological developments in data processing. Decades of developments in computer
processing power and the connection of processors in digital networks have allowed information to be gathered and exchanged in new ways. Over time, advances in computer processing speed and storage, the development of pervasively distributed sensors, and advances in machine-learning techniques have
ushered in what some call the “second machine age,” enabling quantitative shifts
in how we know and act.31 While early computers were very good at rule-following, they were poor at pattern recognition and adapting to changing environments.32 Machine-learning techniques now allow machines to “learn” by extracting patterns from massive datasets. While this is decidedly less than real
intelligence, it has enabled significant new forms of technological power. It allowed computers to master humankind’s most difficult strategy game, Go,
which has more possible moves than atoms in the observable universe.33 It is
what companies hope will soon allow computers to outstrip radiologists in interpreting mammograms.34 It is also the technology behind self-driving cars,
Google’s Page Rank, and Google Translate.
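The contrast between rule-following and pattern extraction can be made concrete with a toy sketch. The example below is purely illustrative and not drawn from any firm's actual system; the data and the threshold-picking method are invented for exposition. Instead of a programmer hand-writing a rule, the program derives its rule (here, a single decision threshold) from labeled behavioral data.

```python
def hand_coded_rule(hours_online):
    # Rule-following: a programmer writes the decision rule by hand.
    return "clicks" if hours_online > 4 else "ignores"

def learn_threshold(examples):
    # Pattern extraction: derive the rule from labeled behavioral data
    # by choosing the threshold that best separates the examples.
    candidates = sorted({hours for hours, _ in examples})
    def accuracy(threshold):
        return sum((hours > threshold) == clicked for hours, clicked in examples)
    return max(candidates, key=accuracy)

# Hypothetical labeled "behavioral surplus": (hours online per day, clicked an ad?)
training_data = [(1, False), (2, False), (3, False), (5, True), (6, True), (8, True)]
print(learn_threshold(training_data))  # prints 3: a boundary inferred from the data
```

The learned threshold is whatever best fits the training examples; change the data and the "rule" changes with it, which is the limited sense in which such a system "learns."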
27. Id. at 68.
28. Id.
29. Id. at 69.
30. Id. at 68.
31. BRYNJOLFSSON & MCAFEE, supra note 4, at 11-12; see also VIKTOR MAYER-SCHÖNBERGER & KENNETH CUKIER, BIG DATA: A REVOLUTION THAT WILL TRANSFORM HOW WE LIVE, WORK, AND THINK 6-7 (2013) (describing similar developments as the “big data” revolution).
32. BRYNJOLFSSON & MCAFEE, supra note 4, at 16-18.
33. ANDREW MCAFEE & ERIK BRYNJOLFSSON, MACHINE, PLATFORM, CROWD: HARNESSING OUR DIGITAL FUTURE 2-6 (2017).
34. See, e.g., Ziad Obermeyer & Ezekiel J. Emanuel, Predicting the Future—Big Data, Machine Learning, and Clinical Medicine, 375 NEW ENG. J. MED. 1216, 1218 (2016); Alejandro Rodriguez-Ruiz et al., Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison with 101 Radiologists, 111 J. NAT’L CANCER INST. 916 (2019).

It was in 2002, Zuboff argues, that everything changed. This was the year that Google discovered what she calls “behavioral surplus”—forms of data useful for something other than improving products and services. In Google’s case, the
purpose was to generate its first profitable business model: the sale of behaviorally targeted ads.35 Google’s profits today are almost exclusively from such advertising36 and the market it has constructed to sell these ads is mind-boggling.
In order to maximize the value of the ad space it sells, Google mobilizes its vast
troves of data to profile each user with increasing granularity. In early versions,
this meant evaluating previous websites we had visited, ads we had seen before,
and feedback on how we reacted to them to try to predict whether we would be
lured in by a particular product or ad.37
The truth is, we do not know exactly what inputs Google uses these days,
any more than we can accurately describe its data holdings. But Zuboff pieces
together an outline of the evolution through patents, press releases, statements
by employees, and news coverage. As the advertising model became more embedded at Google, the company realized that better predictions led to better
click-through rates, and this generated a demand for ever-more comprehensive
data on Google users (which is effectively all of us, since Google captures about
92% of worldwide search engine traffic and 95% of searches on mobile phones).38
It then matched its data holdings with a virtual auction house, enabling bidders
to consider how likely the user is to click on the ad. Google now conducts trillions of these auctions simultaneously, every day.39
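The mechanics of such an auction can be sketched in a few lines. What follows is a generic illustration of a second-price auction ranked by expected value, a design commonly described in the online-advertising literature; it is not a description of Google's proprietary system, and the bidder names and numbers are invented.

```python
def run_auction(bids):
    # bids: list of (advertiser, bid per click, predicted click probability).
    # Rank ads by expected value per impression (bid x predicted CTR).
    ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # Second-price logic: the winner pays the minimum bid that would
    # still have outscored the runner-up, given the winner's own CTR.
    price = (runner_up[1] * runner_up[2]) / winner[2]
    return winner[0], round(price, 2)

# Invented bidders, for illustration only.
bidders = [
    ("shoe_brand",  2.00, 0.05),  # expected value 0.100
    ("travel_site", 3.00, 0.02),  # expected value 0.060
    ("insurer",     1.50, 0.08),  # expected value 0.120 -> wins
]
print(run_auction(bidders))  # prints ('insurer', 1.25)
```

Note that the prediction about the user does real work here: a lower bid can win if the platform predicts the user is more likely to click, which is why ever-richer profiles are worth so much to the seller of the ad space.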
35. ZUBOFF, supra note 11, at 75-76.
36. Id. at 93.
37. Id. at 80.
38. Search Engine Market Share Worldwide, Sept. 2018-2019, STATCOUNTER (Feb. 2020), http://gs.statcounter.com/search-engine-market-share [https://perma.cc/288G-J7VP].
39. ZUBOFF, supra note 11, at 82.
40. Id. at 94.
41. Id. at 129.

It is by generalizing from Google’s trajectory that Zuboff derives her definitions of surveillance capitalism. In this new mode, people are not users whom companies seek to serve but “objects from which raw materials are extracted and expropriated.”40 Our data is then fed into “prediction factories” to monetize a guess about our desires and what we will do—or can be subtly pressed to do—next. The need for ever-more data to increase the accuracy of these predictions has led data hunters from the online world to the offline world. As Zuboff puts it, “If Google is a search company, why is it investing in smart-home devices, wearables, and self-driving cars? If Facebook is a social network, why is it developing drones and augmented reality?”41 In fact, she argues, they are driven “to hunt and capture raw material,” and the key move today is off the internet.42 She
quotes Google’s former CEO, Eric Schmidt:
The internet will disappear. There will be so many IP addresses . . . so
many devices, sensors, things that you are wearing, things that you are
interacting with, that you won’t even sense it. It will be part of your presence all the time. Imagine you walk into a room and the room is dynamic.43
First, companies tracked our searches. Then they correlated that tracking
with data gathered from our browsers and phones. Companies learned that apps
were powerful sensors too, gleaning information about our habits, our health,
our friends.44 Apps frequently access one another’s information, and Google and
Facebook can do the same. With GIS and mobile payment systems attached to
our devices, integrating other datasets with the data on our phones generated
still more value for advertisers.45
As Zuboff describes, there is now a wave of new products “from smart vodka
bottles to internet-enabled rectal thermometers, and quite literally everything in
between” that are designed to sense our activities and transmit this behavioral
data for unknowable future uses.46 Companies tout their “interactive denim”
that can detect your “contextual activity, health and emotional state.”47 The market in healthcare apps has exploded, providing a means to combine data that
users provide with other information on their phones to generate profiles rich
with highly sensitive information.48 Internet-enabled Smart TVs—present in almost half of U.S. homes as of 2017—constantly track what users are watching
and relay it back to enable targeted advertising.49
42. Id.
43. Id. at 199.
44. Id. at 249.
45. Id. at 134.
46. Id. at 238.
47. Id. at 246-47.
48. Id. at 248; see Lori Andrews, A New Privacy Paradigm in the Age of Apps, 53 WAKE FOREST L. REV. 421, 426 (2018).
49. Sapna Maheshwari, How Smart TVs in Millions of U.S. Homes Track More Than What’s on Tonight, N.Y. TIMES (July 5, 2018), https://www.nytimes.com/2018/07/05/business/media/tv-viewer-tracking.html [https://perma.cc/D2PE-A7E4].
All of this, Zuboff argues, enables “instrumentarianism,” a new form of
power that allows firms to control us in machinic ways.50 She quotes a software
engineer: “The new power is action . . . . [S]ensors can also be actuators . . . . It’s
no longer simply about ubiquitous computing. Now the real aim is ubiquitous
intervention, action, and control. The real power is that now you can modify realtime actions in the real world.”51
One expression of this is what Zuboff calls the “uncontract”—the ability of
firms, through software, to enforce contractual terms immediately and nonnegotiably.52 When combined with new surveillance power, this ability to act at a
distance takes on some undeniably inhumane qualities. Your car can be disabled,
perhaps at a stoplight or on your way to the hospital, if you fail to make a car
payment.53 Health-insurance companies can ask that you comply with exercise
regimes and use sensors and digital networks to be sure you do. Smart machines
inserted into our daily lives have ways of governing us—not by manipulating our
minds but by defining our options in binary code. In a simple way, this was the
point of Lawrence Lessig’s influential 1999 book Code and Other Laws of Cyberspace.54 Code could work like a kind of law, Lessig wrote, because it could create
the parameters of action.55 And as Mireille Hildebrandt recently pointed out,
technological regulation is different from legal regulation in several important
ways: it is not democratically authored; it rules out disobedience in a technical,
material sense; and it is often practically impossible to contest because its operations are largely invisible and beyond the reach of any court.56
50. ZUBOFF, supra note 11, at 8 (defining instrumentarian power as power that “knows and shapes human behavior toward others’ ends,” and that “works its will through the automated medium of an increasingly ubiquitous computational architecture of ‘smart’ networked devices, things, and spaces”).
51. Id. at 293.
52. Id. at 218-21.
53. Id. at 215; see also Rebecca Crootof, The Internet of Torts: Expanding Civil Liability Standards to Address Corporate Remote Interference, 69 DUKE L.J. 583 (2019) (describing how advances in technology allow companies to remotely interfere with products).
54. LAWRENCE LESSIG, CODE AND OTHER LAWS OF CYBERSPACE (1999).
55. Id. at 77-78.
56. MIREILLE HILDEBRANDT, SMART TECHNOLOGIES AND THE END(S) OF LAW 12 (2015).
57. ZUBOFF, supra note 11, at 294.

This kind of algorithmic control, though, is of secondary concern for Zuboff. It is psychic control that she most fears. Today, Zuboff argues, firms can “herd,” “tune,” and “condition” us via digital action.57 She offers two signal examples. One is Pokémon Go, the game craze that led millions of people to reorganize how they moved through physical space (sometimes visiting stores that paid to serve as a destination) in an effort to catch virtual creatures.58 The other is Facebook’s study showing it could increase voter turnout by manipulating messages
on the feeds of millions of users.59 As this illustrates, and as Google’s chief economist argued long ago, the ubiquity of computer interfaces has enabled new
forms of corporate experimentation.60 With continuous data flows, networks,
and surveillance capacities, companies like Google and Facebook now can run
millions of secret tests a day to optimize their profitability. This creates significant new potential for these firms to probe us and alter what we do. Zuboff’s
account is at its most chilling when she quotes executives themselves. Here, for
example, is the CEO of an “emotion scanning” firm: “[I]n the future, we’ll assume that every device just knows how to read your emotions.”61 Once our devices know our emotions, they will be able, Zuboff assures us, to fine-tune their
ability to profit from us and to keep us jacked into the networks like so many
brains in a Matrix vat. At the core of Zuboff’s concern is not surveillance or capitalism, but this new threat of a machine-driven “collectivism.”62
Zuboff’s account is important. It offers an extraordinary and vivid account
of the power that private actors, mobilizing vast and interactive troves of data,
may soon have to influence our behavior, and it shows how the business models
of some of the world’s most dynamic and valuable firms are fundamentally misaligned with our interests in control over our lives. She lays bare dynamics that
are deliberately obscured from most of us, and convincingly situates the rise of
this power in incentives and business models, as well as the weak regulatory culture that has characterized the last two decades (particularly in the United
States). But the book, ultimately, is not the guide to the problem of private power
in the digital age that we need today.
58. Id. at 315-16.
59. Id. at 299.
60. Id. at 64.
61. Id. at 289.
62. Zuboff also spends pages on the strange rhetoric of Alex Pentland, an influential engineer who apparently would like to organize all of human life for everyone else, and whom she treats as emblematic of Silicon Valley as a terrifying “collectivist” whole. See id. at 416-19, 426-44.

B. The Limits of Zuboff’s Account

Zuboff makes a convincing case that technologies of the present facilitate a new paradigm of private power—one that appropriates our data for profit, dominated by a few companies that create significant new threats to our autonomy. But is it really the case that surveillance, the sale of predictions about human behaviors, and engineering of behavioral responses will be the core value proposition in the global economy going forward?63 As others have pointed out, Zuboff declares rather than defends the claim.64
The existing evidence suggests that until behavioral advertising becomes
much more sophisticated, it will have at most a small impact on behavior.65 In
example after example, celebrated AI-driven projects also have fallen short of
their billing.66 There is good reason to treat with caution claims made by executives and engineers about the vast new powers that lie just around the bend. Still,
63.
Id. at 11 (arguing that ownership of the means of behavioral modification will be “the fountainhead of capitalist wealth and power in the twenty-first century”).
64. See Evgeny Morozov, Capitalism’s New Clothes, BAFFLER (Feb. 4, 2019), https://thebaffler.com
/latest/capitalisms-new-clothes-morozov [https://perma.cc/B2CY-R46T].
65. See BENKLER ET AL., supra note 8, at 276-79. The most significant study purporting to show a
positive effect of behavioral advertising showed that the effect size on behavior (in that study,
for instance, purchasing after clicking through an ad) was very small. Id. at 277 (reporting
results of S.C. Matz et al., Psychological Targeting as an Effective Approach to Digital Mass Persuasion, 114 PROC. NAT’L ACAD. SCI. U.S.A. 12714, 12715-16 (2017)). For example, for about
every 7,700 people targeted, only one additional purchase was achieved. Id. In an election,
even if every voter could be targeted, effects of this size would impact “a few hundred voters
across an entire state.” BENKLER ET AL., supra note 8, at 278. Recent assessments suggest that
even these small effect sizes may be overestimates, given certain methodological complexities.
See, e.g., Dean Eckles, Brett R. Gordon & Garrett A. Johnson, Field Studies of Psychologically
Targeted Ads Face Threats to Internal Validity, 115 PROC. NAT’L ACAD. SCI. U.S.A. E5254, E5254
(2018) (highlighting limitations of the experimental methodology used, including that it did
not randomize subjects to different ads and so may have been picking up confounders, such
as differences in age or gender correlated with the personality types they were keyed to); Byron Sharp, Nick Danenberg & Steven Bellman, Psychological Targeting, 115 PROC. NAT’L ACAD.
SCI. U.S.A. E7890, E7890 (2018) (noting that the Matz study showed positive results in only
two of the five experiments, and did not control for the differing creative quality of advertisements, and suggesting that the impact of the psychologically targeted advertisement was the
result of “the creative quality of these ads . . . not their targeting”). Companies spend substantial sums on online marketing, of course, and this might count as evidence that it has an
impact. For an argument that this investment reflects agency problems and the difficulty of
producing good effects data (given, for example, selection effects), see Jesse Frederick & Maurits Martijn, The New DotCom Bubble Is Here: It’s Called Online Advertising, CORRESPONDENT
(Nov. 6, 2019), https://thecorrespondent.com/100/the-new-dot-com-bubble-is-here-its-called-online-advertising/13228924500-22d5fd24 [https://perma.cc/RCA7-T2MY].
66. The latest reports are that self-driving cars are far further off than recently predicted by companies and may never materialize in the form they were promised. See Neal E. Boudette, Despite High Hopes, Self-Driving Cars Are ‘Way in the Future,’ N.Y. TIMES (July 17, 2019), https://www.nytimes.com/2019/07/17/business/self-driving-autonomous-cars.html [https://perma.cc/U3BF-YRWX]. Obstacles are both technical and sociolegal. See, e.g., Michael A. Alcorn et al., Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects 1 (Apr. 18, 2019) (unpublished manuscript), https://arxiv.org/pdf/1811.11553.pdf [https://perma.cc/9KTP-R93V] (describing problems that current image classifiers have in recognizing “out-of-distribution” poses and events, such as a school bus
flipped on its side as opposed to a front view of the same object); Jeremy Kahn, To Get Ready for Robot Driving, Some Want to Reprogram Pedestrians, BLOOMBERG: HYPERDRIVE (Aug. 16, 2018, 6:00 AM EST), https://www.bloomberg.com/news/articles/2018-08-16/to-get-ready-for-robot-driving-some-want-to-reprogram-pedestrians [https://perma.cc/E9TJ-X5VA] (discussing the difficulties created by unpredictable pedestrian interactions with self-driving cars). Google’s celebrated “Flu Tracker,” a common example of the supremacy of big data and AI over conventional modes of scientific knowing, failed so spectacularly in 2013 that it was shut down. See David Lazer et al., The Parable of Google Flu: Traps in Big Data Analysis, 343 SCI. MAG. 1203, 1203 (2014), https://science.sciencemag.org/content/sci/343/6176/1203.full.pdf [https://perma.cc/KPA3-CUGD]; David Lazer & Ryan Kennedy, What We Can Learn from the Epic Failure of Google Flu Trends, WIRED (Oct. 1, 2015, 7:00 AM), https://www.wired.com/2015/10/can-learn-epic-failure-google-flu-trends [https://perma.cc/S8BS-GHXR]. IBM’s recent Watson project with Memorial Sloan Kettering Hospital was supposed to revolutionize cancer care by ingesting real-world data and the expertise of world-class doctors to hone treatment recommendations. In reality, the program had to be built on a backbone of synthetic data because of a raft of problems with data interoperability and quality. See Casey Ross & Ike Swetlitz, IBM’s Watson Supercomputer Recommended ‘Unsafe and Incorrect’ Cancer Treatments, Internal Documents Show, STAT (July 25, 2018), https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments [https://perma.cc/DJ6G-9A9F]. Years after …

the yale law journal 129:1460 2020

it would be foolish to dismiss the concerns Zuboff raises, because studies of the impact of behavioral marketing are still few and limited, and because these powers will grow as analytics, digital profiles, and processing become more sophisticated.

The bigger problem with Zuboff’s account is that her fixation on threats to our autonomy screens out broader and arguably more important problems of private power in the information age—for example, the ways in which network effects feed platform power, informationalism generates winner-take-all dynamics, and digital technology has impacted labor.

This is in part a product of Zuboff’s underlying attitude toward capitalism. As Evgeny Morozov has pointed out in an insightful review, Zuboff’s favored alternative to capitalism is not socialism, but “advocacy-oriented” capitalism, which deploys technologies to better improve services.67 This is capitalism as