Print Publication Date: Jul 2017 Subject: Law, IT and Communications Law
Online Publication Date: Dec 2016 DOI: 10.1093/oxfordhb/9780199680832.013.45
Gregory N. Mandel
The Oxford Handbook of Law, Regulation and Technology
Edited by Roger Brownsword, Eloise Scotford, and Karen Yeung
Abstract and Keywords
This introductory chapter to Part III examines whether there are generalizable lessons
concerning law and its regulation of technology that we can learn from past experience
with the law reacting to technological evolution. I suggest three insights from historical
interactions between law and technological change: (1) pre-existing legal categories may
no longer apply to new law and technology disputes; (2) legal decision makers should be
mindful to avoid letting the marvels of a new technology distort their legal analysis; and
(3) the types of legal disputes that will arise from new technology are often unforeseeable. These lessons are applicable across a wide range of technologies, legal fields, and
contexts to aid in determining current and future legal responses to technological development.
Keywords: law, technology, regulation, governance, Internet, DNA typing, genetically modified, synthetic biology
1. Introduction
THE most fundamental questions for law and the regulation of technology concern
whether, how, and when the law should adapt in the face of technological evolution. If legal change is too slow, it can create human health and environmental risks, privacy and
other individual rights concerns, or it can produce an inhospitable background for the
economy and technological growth. If legal change is too fast or ill-conceived, it can lead
to a different set of harms by disrupting settled expectations and stifling further technological innovation. Legal responses to technological change have significant impacts on
the economy, the course of future technological development, and overall social welfare.
Part III focuses on the doctrinal challenges for law in responding to technological change.
Sometimes the novel legal disputes produced by technological advances require new legislation or regulation, a new administrative body, or revised judicial understanding. In
other situations, despite potentially significant technological (p. 226) evolution, the kinds
of disputes created by a new technological regime are not fundamentally different from
previous issues that the law has successfully regulated. Determining whether seemingly
new disputes require a changed legal response, and if so what response, is a difficult
challenge.
Technological evolution impacts every field of law, often in surprising ways. The chapters
in this Part detail how the law is reacting to technological change in areas as disparate as
intellectual property, constitutional law, tax, and criminal law. Technological change raises new questions concerning the legitimacy of laws, individual autonomy and privacy,
deleterious effects on human health or the environment, and impacts on community or
moral values. Some of the many examples of new legal disputes created by technological
change include: Do various means of exchanging information via the Internet constitute copyright infringement? Should a woman be able to choose to get an abortion
based on the gender of the foetus? Can synthetic biology be regulated in a manner that
allows a promising new technology to grow while guarding against its unknown risks?
These and other legal issues created by technological advance are challenging to evaluate. Such issues often raise questions concerning how the law should respond in the face
of uncertainty and limited knowledge. Uncertainty not just about the risks that a new
technology presents, but also about the future path of technological development, the potential social effects of the technology, and the legitimacy of various legal responses.
These challenges are exacerbated by the reality that the issues faced often concern technology at the forefront of scientific knowledge. Such technology usually is not only incomprehensible to the average person, but may not even be fully understood by scientific experts in the field. In the face of this uncertainty and limited understanding, generally lay
legislative, executive, administrative, and judicial actors must continue to establish and
rule on laws that govern uncharted technological and legal waters.
This is a daunting challenge, and the chapters in Part III describe how these legal developments and decisions are playing out in myriad legal fields, as well as make insightful
recommendations concerning how the law could function better in such areas. This introductory chapter attempts to bring the varied experiences from different legal fields together to interrogate whether there are generalizable lessons about law and technology
that we can learn from past experiences, lessons that could aid in determining current
and future legal responses to technological development.
In examining legal responses to technological change across a variety of technologies, legal fields, and time, there are several insights that we can glean concerning how legal actors should (and should not) respond to technological change and the legal issues that it
raises. These insights do not provide a complete road map for future responses to every
new law and technology issue. Such a guide would be impossible considering the diverse
scope of technologies, laws, and the manner in which they intersect in society. But the
lessons suggested here can provide a number (p. 227) of useful guidelines for legal actors
to consider when confronting novel law and technology issues.
The remainder of this chapter scopes out three lessons from past and current experience
with the law and the regulation of technology that I suggest are generalizable across a
wide variety of technologies, legal fields, and contexts (Mandel 2007). These three
lessons are:
(1) pre-existing legal categories may no longer apply to new law and technology disputes;
(2) legal decision makers should be mindful to avoid letting the marvels of new technology distort their legal analysis; and
(3) the types of legal disputes that will arise from new technology are often unforeseeable.
These are not the only lessons that can be drawn from experience with law and technology, and they are not applicable across all situations, but they do represent a start. Critical
for any discussion of general lessons for law and the regulation of technology, I suggest
that these guidelines are applicable across a wide variety of technologies, even those that
we do not conceive of presently.1
2. Pre-existing Legal Categories May No Longer
Apply
Evidence that lessons from previous experience with law and technology can apply to
contemporary issues is supported by examining the legal system's reaction to a variety of
historic technological advances. Insights from past law and technology analysis are germane today, even though the law and technology disputes at issue in the present were entirely inconceivable in the periods from which these lessons are drawn.
Perhaps the most important insight to draw from the history of legal responses to technological advance is that a decision maker must be careful when compartmentalizing new
law and technology disputes into pre-existing legal categories. Lawyers and judges are
trained to work in a system of legal categorization. This is true for statutory, regulatory,
and judge-made common law, and in both civil law and common law jurisdictions. Categorization is vital both for setting the law and for enabling law's critical notice function.
Statutes and regulations operate by categorization. They define different types of legal
regulation and which kinds of action are governed by such regulation. (p. 228) Similarly,
judge-made common law operates on a system of precedent that depends on classifying
current cases according to past categories. This is true whether the laws in question involve crystal rules that seek to define precise legal categories (for example, a speed limit
of 100 kilometres per hour) or provide muddy standards that present less clear boundaries, but nevertheless define distinct legal categories (for example, a reasonableness
standard in tort law) (Rose 1988).
In many countries, law school is significantly devoted to teaching students to understand
what legal categories are and how to recognize and define them. Legal practice primarily
involves categorization as well: attorneys in both litigation and regulatory contexts argue
that their clients' actions either fall within or outside of defined legal categories; attorneys in transactional practice draft contracts that define the areas of an agreement and
what is acceptable within that context; and attorneys in advisory roles instruct their
clients about what behaviour falls within or outside of legally accepted definitions. Law is
about placing human actions in appropriate legal boxes.
Given the legal structure and indoctrination of categorization, it is not surprising that a
typical response to new legal issues created by technological evolution is to try to fit the
issue within existing legal categories. Although such responses are entirely rational, given the context described above, they ignore the possibility that it may no longer make
sense to apply staid categories to new legal issues. While law can be delineated by category, technology ignores existing definitions. Technology is not bound by prior categorization, and therefore the new disputes that it creates may not map neatly onto existing legal boundaries. In order to understand a new law and technology issue one must often
delve deeper, examining the basis for the existing system of legal categorization in the
first instance. Complementary examples from different centuries of technological and legal development illustrate this point.
2.1 The Telegraph
Before Wi-Fi, fibre optics, and cell phones, the first means of instantaneous long-distance
communication was the telegraph. The telegraph was developed independently by Sir
William Fothergill Cooke and Charles Wheatstone in the United Kingdom and by Samuel
Morse in the United States. Cooke and Wheatstone established the first commercial telegraph along the Great Western Railway in England. Morse sent the world's first long-distance telegraph message on 24 May 1844: 'What Hath God Wrought' (Burns 2004). Telegraph infrastructure rose rapidly, often hand in hand with the growth of railroads, and in
a short time (on a nineteenth-century technological diffusion scale) both criss-crossed Europe and America and were in heavy use.
(p. 229) Unsurprisingly, the advent of the telegraph also brought about new legal disputes.
One such issue involved contract disputes concerning miscommunicated telegraph messages. These disputes raised issues concerning whether the sender bore legal responsibility for damages caused by errors, whether the telegraph company was liable, or whether
the harm should lie where it fell. At first glance, these concerns appear to present standard contracts issues, but an analysis of a pair of cases from opposite sides of the United
States shows otherwise.2
Parks v Alta California Telegraph Co (1859) was a California case in which Parks contracted with the Alta California Telegraph Company to send a telegraph message. Parks had
learned that a debtor of his had gone bankrupt and was sending a telegraph to try to attach the debtor's property. Alta failed to send Parks's message in a timely manner, causing Parks to miss the opportunity to attach the debtor's property with priority over other
creditors. Parks sued Alta to recover for the loss.
The outcome of Parks, in the court's view, hinged on whether a telegraph company was
classified as a common carrier, a traditionally defined legal category concerning transportation companies. Common carriers are commercial enterprises that hold themselves
out to the public as offering the transport of persons or property for compensation. Under
the law, common carriers are automatically insurers of the delivery of the goods that they
accept for transport. If Alta was a common carrier, it necessarily insured delivery of
Parks's message, and it would be liable for Parks's loss. But, if Alta was not a common
carrier, it did not automatically insure delivery of the message, and it would only be liable
for the cost of the telegraph.
The court held that telegraph companies were common carriers. The court explained
that, prior to the advent of telegraphs, companies that delivered goods also delivered letters. The court reasoned, '[t]here is no difference in the general nature of the legal obligation of the contract between carrying a message along a wire and carrying goods or a
package along a route. The physical agency may be different, but the essential nature of
the contract is the same' (Parks 1859: 424). Other than this relatively circular reasoning
about there being 'no difference' in the 'essential nature', the court did not further explain the basis for its conclusion.
In the Parks court's view, '[t]he rules of law which govern the liability of Telegraph Companies are not new. They are old rules applied to new circumstances' (Parks 1859: 424).
Based on this perspective, the court analogized the delivery of a message by telegraph to
the delivery of a message (a letter) by physical means, and because letter carriers fell into the pre-existing legal category of common carriers, the court classified telegraph companies as common carriers as well. As common carriers, telegraph companies automatically insured delivery of their messages, and were liable for any loss incurred by a failure
in delivery.
About a decade later, Breese v US Telegraph Co (1871) concerned a somewhat similar
telegraph message dispute in New York. In this case, Breese contracted with the US Telegraph Company to send a telegraph message to a broker to buy $700 worth of gold. The
message that was received, however, was to buy $7,000 in gold, (p. 230) which was purchased on Breese's account. Unfortunately, the price of gold dropped, which led Breese to
sue US Telegraph for his loss. In this case, US Telegraph's telegraph transmission form
included a notation that, for important messages, the sender should have the message
sent back to ensure that there were no errors in transmission. Return resending of the
message incurred an additional charge. The form also stated that if the message was not
repeated, US Telegraph was not responsible for any error.
The Breese case, like Parks, hinged on whether a telegraph company was a common carrier. If telegraph companies were common carriers, US Telegraph was necessarily an insurer of delivery of the message, and could not contractually limit its liability as it attempted
to do on its telegraph form. The Breese court concluded that telegraph companies are not
common carriers. It did not offer a reasoned explanation for its conclusion, beyond stating that the law of contract governs, a point irrelevant to the issue of whether telegraph
companies are common carriers.
Though the courts in Parks and Breese reached different conclusions, both based their decisions on whether telegraph companies were common carriers. The Parks court held that
telegraph companies were common carriers because the court believed that telegraph
messages were not relevantly different from previous methods of message delivery. The
Breese court, on the other hand, held that telegraph messages were governed by contract, not traditional common carrier rules, because the court considered telegraph messages to be a new form of message delivery distinguishable from prior systems.
Our analysis need not determine which court had the better view (a difficult legal issue
that if formally analysed under then-existing law would turn on the ephemeral question of
whether a telegraph message is property of the sender). Rather, comparison of the cases
reveals that neither court engaged in the appropriate analysis to determine whether telegraph companies should be held to be common carriers, and that neither court engaged
in analysis to consider whether the historic categorization of common carriers, and the liability rules that descended from such categorization, should continue to apply in the
context of telegraph companies and their new technology.
New legal issues produced by technological advance often raise the question of whether
the technology is similar enough to the prior state of the art such that the new technology
should be governed by similar, existing rules, or whether the new technology is different
enough such that it should be governed by new or different rules. This question cannot be
resolved simply by comparing the function of the new technology to the function of the
prior technology. This was one of the errors made by both the Parks and Breese courts.
Legal categories are not developed based simply on the function of the underlying technology, but on how that function interacts in society. Thus, rather than asking whether a
new technology plays a similar role to that of prior technology (is a telegraph like a letter?), a legal decision maker must consider the rationale for the (p. 231) existing legal categories in the first instance (Mandel 2007). Only after examining the basis for legal categories can one evaluate whether the rationale that established such categories also applies to the new technology. Legal categories (such as common carrier) are only that: legal constructs. Such categories are not only imperfect, in the sense that both rules and standards can be over-inclusive and under-inclusive, but they are also context-dependent. Even well-constructed legal categories are not Platonic ideals that apply to all situations. Such constructs may need to be revised in the face of technological change.
The pertinent metric for evaluating whether the common carrier category should be extended to include telegraph companies is not the physical activity involved (message delivery) but the basis for the legal construct. The rationale for common carrier liability, for
instance, may have been to institute a least-cost avoider regime and reduce transaction
costs. Prior to the advent of the telegraph, there was little a customer could do to insure
the proper delivery of a package or letter once conveyed to a carrier. In this context, the
carrier would be best informed about the risks of delivery and about the least expensive
ways to avoid such risks. As a result, it was efficient to place the cost of failed delivery on
the carrier.
Telegraphs changed all this. Telegraphs offered a new, easy, and cheap method for self-insurance. As revealed in Breese, a sender could now simply have a message returned to
ensure that it had been properly delivered. In addition, the sender would be in the best
position to know which messages are the most important and worth the added expense of
a return telegraph. The advent of the telegraph substantially transformed the efficiencies
of protection against an error in message delivery. This change in technology may have
been significant enough that the pre-existing legal common carrier category, developed in
relation to prior message delivery technology, should no longer apply. Neither court considered this issue.
The realization that pre-existing legal categorization may no longer sensibly apply in the
face of new technology appears to be a relatively straightforward concept, and one that
we might expect today's courts to handle better. Chalking this analytical error up to archaic legal decision-making, however, is too dismissive, as cases concerning modern message delivery reveal.
2.2 The Internet
The growth of the Internet and email use in the 1990s resulted in a dramatic increase in
unsolicited email messages, a problem which is still faced today. These messages became
known as 'spam', apparently named after a famous Monty Python skit in which Spam (the
canned food) is a disturbingly ubiquitous menu item. Although email spam is a substantial
annoyance for email users, it is an even greater problem for Internet service providers.
Internet service providers are forced to make (p. 232) substantial additional investments
to process and store vast volumes of unwanted email messages. They also face the
prospect of losing customers annoyed by spam filling their inboxes. Though figures are
hard to pin down, it is estimated that up to 90 per cent of all email messages sent are
spam, and that spam costs firms and consumers as much as $20 to $50 billion annually
(Rao and Reiley 2012).
Private solutions to the spam problem in the form of email message filters would eventually reduce the spam problem to some degree, especially for consumers. A number of jurisdictions, particularly in Europe, also enacted laws in the 2000s attempting to limit the
proliferation of spam in certain regards (Khong 2004). But in the early days of the Internet in the 1990s, neither of these solutions offered significant relief.
One Internet service provider, CompuServe, attempted to ameliorate their spam issues by
bringing a lawsuit against a particularly persistent spammer. CompuServe had attempted
to electronically block spam, but had not been successful (an early skirmish in the ongoing technological battle between Internet service providers and spam senders that continues to the present day). Spammers operated more openly in the 1990s than they do now.
CompuServe was able to identify a particular mass-spammer, CyberPromotions, and
brought suit to try to enjoin CyberPromotions' practices (CompuServe Inc v Cyber Promotions Inc 1997).
CompuServe, however, faced a problem with their lawsuit: they lacked a clear legal basis for
challenging CyberPromotions' activity. CyberPromotions' use of the CompuServe email
system as a non-customer to send email messages to CompuServe's Internet service
clients did not create an obvious cause of action in contract, tort, property, or other area
of law. In fact, use of CompuServe clients' email addresses by non-clients to send messages, as a general matter, was highly desirable and necessary for the email system to operate. CompuServe would have few customers if they could not receive email messages
from outside users.
Lacking an obvious legal avenue for relief, CompuServe developed a somewhat ingenious
legal argument. CompuServe claimed that CyberPromotions' use of CompuServe's email
system to send spam messages was a trespass on CompuServe's personal property (its
computers and other hardware) in violation of an ancient legal doctrine known as trespass to chattels. Trespass to chattels is a common law doctrine prohibiting the unauthorized use of another's personal property (Kirk v Gregory 1876; CompuServe Inc v Cyber
Promotions Inc 1997). Trespass to chattels, however, was developed at a time when property rights nearly exclusively involved tangible property.
An action for trespass to chattels requires (1) physical contact with the chattel, (2) that
the plaintiff was dispossessed of the chattel permanently or for a substantial period of
time, and (3) that the chattel was impaired in condition, quality, or value, or that bodily
harm was caused (Kirk v Gregory 1876; CompuServe Inc v Cyber Promotions Inc 1997).
Application of the traditional trespass to chattels elements to email spam is not straightforward. Spam does not appear to physically (p. 233) contact a computer, dispossess a
computer, or harm the computer itself. Framing their argument to match the law, CompuServe contended that the electronic signals by which email was sent constituted physical contact with their chattels, that the use of bandwidth due to sending spam messages
dispossessed their computer, and that the value of CompuServe's computers was diminished by the burden of CyberPromotions' spamming. The court found CompuServe's
analogies convincing and held in their favour.
While the court's sympathy for CompuServe's plight is understandable, the CompuServe
court committed the same error as the courts in Parks and Breese: it did not consider the
basis for legal categorization in the first instance before extending the legal category to
new disputes created by new technology. The implications of the CompuServe rationale
make clear that the court's categorization is problematic. Under the court's reasoning, all
unsolicited email, physical mail, and telephone calls would constitute trespass to chattels,
a result that would surprise many. This outcome would create a common law cause of action against telemarketers and companies sending junk mail. Although many people
might welcome such a cause of action, it is not legally recognized and undoubtedly was
not intended by the CompuServe court. This argument could potentially be extended to
advertisements on broadcast radio and television. Under the court's reasoning, individuals could have a cause of action against public television broadcasters (such as the BBC
in the United Kingdom or ABC, CBS, and NBC in the United States) for airing commercials by arguing that public broadcasts physically contact one's private television through
electronic signals, that they dispossess the television in similar regards to spam dispossessing a computer, and that the commercials diminish the value of the television. The
counter-argument that a television viewer should expect or implicitly consents to commercials would equally apply to a computer user or service provider expecting or implicitly consenting to spam as a result of connecting to the Internet.
A primary problem with the CompuServe decision lies in its failure to recognize that differences between using an intangible email system and using tangible physical property
have implications for the legal categories that evolved historically at a time when the Internet did not exist. As discussed above, legal categories are developed to serve context-dependent objectives and the categories may not translate easily to later-developed technologies that perform a related function in a different way. The dispute in CompuServe
was not really over the use of physical property (computers), but over interference with
CompuServe's business and customers. As a result, the historic legal category of trespass
to chattels was a poor match for the issues raised by modern telecommunications. A legal
solution to this new type of issue could have been better served by recognizing the practical differences in these contexts.
Courts should not expect that common law, often developed centuries past, will always be
well suited to handle new issues for law in the regulation of technology. (p. 234) Pre-existing legal categories may be applicable in some cases, but the only way to determine this
is to examine the basis for the categories in the first instance and evaluate whether that
basis is satisfied by extension of the doctrine. This analysis will vary depending on the
particular legal dispute and technology at issue, and often will require consideration of
the impact of the decision on the future development and dissemination of the technology
in question, as well as on the economy and social welfare more broadly.
Real-world disputes and social context should not be forced into pre-existing legal categories. Legal categories are simply a construct; the disputes and context are the immutable reality. If legal categories do not fit a new reality well, then it is the legal categories that must be re-evaluated.
3. Do Not Let the Technology Distort the Law
A second lesson for law and the regulation of technology concerns the need for decision
makers to look beyond the technology involved in a dispute and to focus on the legal issues in question. In a certain sense, this concern is a flipside of the first lesson, that existing legal categories may no longer apply. The failure to recognize that existing legal categories might no longer apply is an error brought about in part by blind adherence to existing law in the face of new technology. This second lesson concerns the opposite problem: sometimes decision makers have a tendency to be blinded by spectacular technological achievement and consequently neglect the underlying legal concerns.
3.1 Fingerprint Identification
People v Jennings (1911) was the first case in the United States in which fingerprint evidence was admitted to establish identity. Thomas Jennings was charged with murder in a
case where a homeowner had confronted an intruder, leading to a struggle that ended
with gunshots and the death of the homeowner. Critical to the state's case against Jennings was the testimony of four fingerprint experts matching Jennings's fingerprints to
prints from four fingers from a left hand found at the scene of the crime on a recently
painted back porch railing.
(p. 235) The fingerprint experts were employed in police departments and other law enforcement capacities. They testified, in varying manners, to certain numbers of points of
resemblance between Jennings's fingerprints and the crime scene prints, and each expert
concluded that the prints were made by the same person. The court admitted the fingerprint testimony as expert scientific evidence. The bases for admission identified in the opinion were the prior admission of fingerprint evidence in European countries, reliance on encyclopaedias and treatises on criminal investigation, and the experience of the expert witnesses themselves.
Upon examination, the bases for admission were weak and failed to establish the critical
evidentiary requirement of reliability. None of the encyclopaedias or treatises cited by the
court actually included scientific support for the use of fingerprints to establish identity,
let alone demonstrated its reliability. Early uses of fingerprints starting in India in 1858,
for example, included using prints to sign a contract (Beavan 2001). In a similar vein, the
court identified that the four expert witnesses each had been studying fingerprint identification for several years, but never mentioned any testimony or other evidence concerning
the reliability of fingerprint analysis itself. This would be akin to simply stating that experts had studied astrology, ignoring whether the science under study was reliable. Identification of a number of points of resemblance between prints (an issue on which the expert testimony varied) provides little evidence of identity without knowing how many
points of resemblance are needed for a match, how likely it is for there to be a number of
points of resemblance between different people, or how likely it is for experts to incorrectly identify points of resemblance. No evidence on these matters was provided.
Reading the Jennings opinion, one is left with the impression that the court was simply
'wowed' with the concept of fingerprint identification. Fingerprint identification was perceived to be an exciting new scientific ability and crime-fighting tool. The court, for instance, provided substantial description of the experts' qualifications and their testimony,
despite its failure to discuss the reliability of fingerprint identification in the first instance. It is not surprising, considering the court's amazement with the possibility of fingerprint identification, that the court deferred to the experts in admitting the evidence
despite a lack of evidence of reliability and the experts' obvious self-interest in having the testimony admitted for the first time: this was, after all, their new line of employment.
The introduction of fingerprint evidence to establish identity in European courts, on
which the Jennings court relied, was not any more rigorous. Harry Jackson became the
world's first person to be convicted based on fingerprint evidence when he was found
guilty of burglary on 9 September 1902 and sentenced to seven years of penal servitude
based on a match between his fingerprint and one found at the scene of the crime (Beavan 2001). The fingerprint expert in the Jackson case testified that he had examined thousands of prints, that fingerprint patterns remain the same (p. 236) throughout a person's
life, and that he had never found two persons with identical prints. No documentary evidence or other evidence of reliability was introduced.
With respect to establishing identification in the Jackson case itself, the expert testified to
three or four points of resemblance between the defendant's fingerprint and the fingerprint found at the scene and concluded, 'in my opinion it is impossible for any two persons to have any one of the peculiarities I have selected and described'. Several years later, the very same expert would testify in the first case to rely upon fingerprint identification to convict someone of murder that he had seen up to three points of resemblance between the prints of two different people, but never more than that (Rex v Stratton and Another 1905). The defendant in Jackson did not have legal representation, and consequently there was no significant cross-examination of the fingerprint expert. As in the Jennings
case in the United States, the court in Stratton appeared impressed by the possibility and
science of fingerprint identification and took its reliability largely for granted. One striking example of the court's lack of objectivity occurred when the court interrupted expert
testimony to interject the court's own belief that the ridges and pattern of a person's fingerprints never change during a lifetime.
3.2 DNA Identification
Almost a century after the first fingerprint identification cases, courts faced the introduction of a new type of identification evidence in criminal cases: DNA typing. State v Lyons
(1993) concerned the admissibility of a new method for DNA typing, the PCR replicant
method. DNA typing is the technical term for 'DNA fingerprinting', a process for determining the probability of a match between a criminal defendant's DNA and DNA obtained
at a crime scene.
Despite the gap of almost a century separating the Jennings/Jackson/Stratton and Lyons opinions, the similarity in the deficiencies of the courts' analyses of the admissibility of new forms of scientific evidence is remarkable. In Lyons, the court similarly relies on
the use of the method in question in other fields as a basis for its reliability in a criminal
case. The PCR method had been used in genetics starting with Sir Alec Jeffreys at the
University of Leicester in England, but only in limited ways in the field of forensics. No
evidence was provided concerning the reliability of the PCR replicant method for identification under imperfect crime scene conditions versus its existing use in pristine laboratory environments. The Lyons court also relied on the expert witness's own testimony that
he followed proper protocols as evidence that there was no error in the identification and,
even more problematically, that the PCR method itself was reliable. Finally, like the experts in Jennings, Jackson, and Stratton, the PCR replicant method expert had a vested interest in the test being considered reliable: this was his line of employment. In each case
(p. 237) the courts appear simply impressed and excited by the new technology and what
it could mean for fighting crime. The Lyons decision includes not only a lengthy description of the PCR replicant method process, but also an extended discussion of DNA, all of
which is irrelevant to the issue of reliability or the case.
In fairness to the courts, there was an additional similarity between Jennings/Jackson/
Stratton and Lyons: in each case, the defence failed to introduce any competing experts
or evidence to challenge the reliability of the new technological identification evidence.
For DNA typing, this lapse may have been due to the fact that the first use of DNA typing
in a criminal investigation took place in the United Kingdom to exonerate a defendant
who had admitted to a rape and murder, but whose DNA turned out not to match that
found at the crime scene (Butler 2005). In DNA typing cases, defence attorneys quickly
learned to introduce their own experts to challenge the admissibility of new forms of DNA
typing. These experts began to question proffered DNA evidence on numerous grounds,
from problems with the theory of DNA identification (such as assumptions about population genetics) to problems with the method's execution (such as the lack of laboratory
standards or procedures) (Lynch and others 2008). These challenges led geneticists and
biologists to air disputes in scientific journals concerning DNA typing as a means for identification, and eventually to the US National Research Council convening two distinguished panels on the matter. A number of significant problems were identified concerning methods of DNA identification, and courts in some instances held DNA evidence inadmissible. Eventually, new procedures were instituted and standardized, and sufficient data was gathered such that courts around the world now routinely admit DNA evidence.
This is where DNA typing as a means of identification should have begun: with evidence of and procedures for ensuring its reliability.
Ironically, the challenges to DNA typing identification methods in the 1990s actually led
to challenges to the century-old routine admissibility of fingerprint identification evidence
in the United States. The scientific reliability of forensic fingerprint identification was a
question that still had never been adequately addressed despite its long use and mythical
status in crime-solving lore. The bases for modern fingerprint identification challenges included the lack of objective and proven standards for establishing that two prints match,
the lack of a known error rate, and the lack of statistical information concerning the likelihood that two people could have fingerprints with a given number of corresponding features. In 2002, a district court judge in Pennsylvania held that evidence of identity based
on fingerprints was inadmissible because its reliability was not established (United States
v Llera-Plaza 2002). The court did allow the experts to testify concerning the comparison
between fingerprints. Thus, experts could testify to similarities and differences between
two sets of prints, but were not permitted to testify as to their opinion that a particular
print was or was not the print of a particular person. This holding caused somewhat of an uproar, and the United States government filed a motion to reconsider. The (p. 238) court
held a hearing on the accuracy of fingerprint identification, at which two US Federal Bureau of Investigation agents testified. The court reversed its earlier decision and admitted
the fingerprint testimony.
The lesson learned from these cases for law and the regulation of technology is relatively
straightforward: decision makers need to separate spectacular technological achievements from their appropriate legal implications and use. When judging new legal issues
created by exciting technological advances, the wonder or promise of a new technology
must not blind one from the reality of the situation and current scientific understanding.
This is a lesson that is easy to state but more difficult to apply in practice, particularly
when a technologically lay decision maker is confronted with the new technology for the
first time and a cadre of experts testifies to its spectacular promise and capabilities.
4. New Technology Disputes Are Unforeseeable
The final lesson offered here for law and the regulation of technology may be the most
difficult to implement: decision makers must remain cognizant of the limited ability to
foresee new legal issues brought about by technological advance. It is often inevitable
that legal disputes concerning a new technology will be handled under a pre-existing legal scheme in the early stages of the technology's development. At this stage, there usually will not be enough information and knowledge about a nascent technology and its legal
and social implications to develop or modify appropriate legal rules, or there may not
have been enough time to establish new statutes, regulations, or common law for managing the technology.
As the examples above indicate, there often appears to be a strong inclination towards
handling new technology disputes under existing legal rules. Not only is this response
usually the simplest approach administratively, but there are also strong psychological influences that make it attractive. For example, availability and representativeness
heuristics lead people to view a new technology and new disputes through existing
frames, and the status quo bias similarly makes people more comfortable with the current legal framework (Gilovich, Griffin, and Kahneman 2002).
Not surprisingly, however, the pre-existing legal structure may prove a poor match for
new types of disputes created by technological innovation. Often there will be gaps or
other problems with applying the existing legal system to a new technology. The regulation of biotechnology provides a recent, useful set of examples.
(p. 239) 4.1 Biotechnology
Biotechnology refers to a variety of genetic engineering techniques that permit scientists
to selectively transfer genetic material responsible for a particular trait from one living
species (such as a plant, animal, or bacterium) into another living species. Biotechnology
has many commercial and research applications, particularly in the agricultural, pharmaceutical, and industrial products industries.
As the biotechnology industry developed in the early 1980s, the United States government determined that bioengineered products in the United States generally would be
regulated under the already-existing statutory and regulatory structure. The basis for this
decision, established in the Coordinated Framework for Regulation of Biotechnology
(1986), was a determination that the process of biotechnology was not inherently risky,
and therefore that only the products of biotechnology, not the process itself, required
oversight.
This analysis proved questionable. As a result of the Coordinated Framework, biotechnology products in the United States are regulated under a dozen statutes and by five different agencies and services. Experience with biotechnology regulation under the Coordinated Framework has revealed gaps in biotechnology regulation, inefficient overlaps in regulation, inconsistencies among agencies in their regulation of similarly situated biotechnology products, and instances of agencies being forced to act outside their areas of expertise (Mandel 2004).
One of the most striking examples of the limited capabilities of foresight in this context is
that the Coordinated Framework did not consider how to regulate genetically modified
plants, despite the fact that the first field tests of genetically modified plants began in
1987, just one year after the Coordinated Framework was promulgated. This oversight
was emblematic of a broader gap in the Coordinated Framework. By placing the regulation of biotechnology into an existing, complex regulatory structure that was not designed with biotechnology in mind, the Coordinated Framework led to a system in which
the US Environmental Protection Agency (EPA) was not involved in the review and approval of numerous categories of genetically modified plants and animals that could have
a significant impact on the environment. In certain instances, it was unclear whether
there were sufficient avenues for review of the environmental impacts of the products of
biotechnology by any agency. Similarly, it was unclear whether any agency had regulatory
authority over transgenic animals not intended for human food or to produce human biologics, products that have subsequently emerged.
There were various inconsistencies created by trying to fit biotechnology into existing
boxes as well. The Coordinated Framework identified two priorities for the regulation of
biotechnology by multiple agencies: that the agencies regulating genetically modified
products 'adopt consistent definitions' and that the agencies implement scientific reviews
of 'comparable rigor' (Coordinated Framework for Regulation of Biotechnology 1986: 23,
302-303). As a result of constraints created (p. 240) by primary reliance on pre-existing
statutes, however, the agencies involved in the regulation of biotechnology defined identical regulatory constructs differently. Similarly, the US National Research Council concluded that the data on which different agencies based comparable analyses, and the scientific stringency with which they conducted their analyses, were not comparably rigorous,
contrary to the Coordinated Framework plan.
Regulatory overlap has also been a problem under the Framework. Multiple agencies
have authority over similar issues, resulting in inefficient duplication of regulatory resources and effort. In certain situations, different agencies requested the same information about the same biotechnology product from the same firms, but did not share the information or coordinate their work. In one instance, the United States Department of
Agriculture (USDA) and the EPA reached different conclusions concerning the risks of the
same biotechnology product. In reviewing the potential for transgenic cotton to cross
with wild cotton in parts of the United States, the USDA concluded that '[n]one of the relatives of cotton found in the United States ... show any definite weedy tendencies' (Payne
1997) while the EPA found that there would be a risk of transgenic cotton crossing with
species of wild cotton in southern Florida, southern Arizona, and Hawaii (Environmental
Protection Agency 2000).
The lack of an ability to foresee the new types of issues created by technological advance
created other problems with the regulation of biotechnology. For example, in 1998 the
EPA approved a registration for StarLink corn, a variety of corn genetically modified to be
pest-resistant. StarLink corn was only approved for use as animal feed and non-food industrial purposes, such as ethanol production. It was not approved for human consumption because it carried transgenic genes that expressed a protein containing some attributes of known human allergens.
In September 2000, StarLink corn was discovered in several brands of taco shells and later in many other human food products, eventually resulting in the recall of over three
hundred food products. Several of the United States' largest food producers were forced
to stop production at certain plants due to concerns about StarLink contamination, and
there was a sharp reduction in United States corn exports. The owner of the StarLink registration agreed to buy back the year's entire crop of StarLink corn, at a cost of about
$100 million. It was anticipated that StarLink-related costs could end up running as high
as $1 billion (Mandel 2004).
The contamination turned out to be caused by the fact that the same harvesting, storage, shipping, and processing equipment are often used for both human and animal food.
Corn from various farms is commingled as it is gathered, stored, and transported. In fact,
due to recognized commingling, the agricultural industry regularly accepts about 2 per
cent to 7 per cent of foreign matter in bulk shipments of corn in the United States. In addition, growers of StarLink corn had been inadequately warned about the need to keep
StarLink corn segregated from other corn, leading to additional commingling in grain elevators.
(p. 241) Someone with a working knowledge of the nation's agricultural system would
have recognised from the outset that it was inevitable that, once StarLink corn was approved, produced, and processed on a large-scale basis, some of it would make its way into the human food supply. According to one agricultural expert, '[a]nyone who understands the grain handling system ... would know that it would be virtually impossible to
keep StarLink corn separate from corn that is used to produce human food' (Anthan
2000). Although the EPA would later recognize 'that the limited approval for StarLink was
unworkable', it failed to realize at the time of approval that this new technology raised different issues than the agency had previously considered. Being aware that new technologies often create unforeseeable issues is a difficult lesson to grasp for expert agencies steeped in an existing model, but it is one that could have led decision makers to
re-evaluate some of the assumptions at issue here.
4.2 Synthetic Biology
The admonition to be aware of what you do not know and to recognize the limits of foresight is clearly difficult to follow. This lesson does, however, provide important guidance
for how to handle the legal regulation of new technology. Most critically, it highlights the
need for legal regimes governing new technologies that are flexible and that can change
and adapt to new legal issues, both as the technology itself evolves and as our understanding of it develops. It is hardly surprising that we often encounter difficulties when
pre-existing legal structures are used to govern technology that did not exist at the time
the legal regimes were developed.
Synthetic biology provides a prominent, current example through which to apply this
teaching. Synthetic biology is one of the fastest developing and most promising emerging
technologies. It is based on the understanding that DNA sequences can be assembled together like building blocks, producing a living entity with a particular desired combination of traits. Synthetic biology will likely enable scientists to design living organisms unlike any found in nature, and to redesign existing organisms to have enhanced or novel
qualities. Where traditional biotechnology involves the transfer of a limited amount of genetic material from one species to another, synthetic biology will permit the purposeful
assembly of an entire organism. It is hoped that synthetically designed organisms may be
put to numerous beneficial uses, including better detection and treatment of disease, the
remediation of environmental pollutants, and the production of new sources of energy,
medicines, and other valuable products (Mandel and Marchant 2014).
Synthetically engineered life forms, however, may also present risks to human health and
the environment. Such risks may take different forms than the risks presented by traditional biotechnology. Unsurprisingly, the existing regulatory (p. 242) structure is not necessarily well suited to handle the new issues raised by this new technology. The regulatory challenges of synthetic biology are just beginning to be explored. The following
analysis focuses on synthetic biology governance in the United States; similar issues are
also being raised in Europe and China (Kelle 2007; Zhang, Marris, and Rose 2011).
Given the manner in which a number of statutes and regulations are written, there are fundamental questions concerning whether regulatory agencies have authority
over certain aspects of synthetic biology under existing law (Mandel and Marchant 2014).
The primary law potentially governing synthetic biology in the United States is the Toxic
Substances Control Act (TSCA). TSCA regulates the production, use, and disposal of hazardous 'chemical substances'. It is unclear whether living microorganisms created by synthetic biology qualify as 'chemical substances' under TSCA, and synthetic biology organisms may not precisely fit the definition that the EPA has established under TSCA for
chemical substances. Perhaps more significantly, EPA has promulgated regulations under
TSCA limiting its regulation of biotechnology products to intergeneric microorganisms
'formed by the deliberate combination of genetic material...from organisms of different
taxonomic genera' (40 CFR 725.1(a), 725.3 (2014)). EPA developed this policy based on
traditional biotechnology. Synthetic biology, however, raises the possibility of introducing
wholly synthetic genes or gene fragments into an organism, or removing a gene fragment
from an organism, modifying that fragment, and reinserting it. In either case, such organisms may not be 'intergeneric' under EPA's regulatory definition because they would not
include genetic material from organisms of different genera. Because EPA's biotechnology
regulations define themselves as 'establishing all reporting requirements [for] microorganisms' (40 CFR 725.1(a) (2014)), non-'intergeneric' genetically modified microorganisms created by synthetic biology currently would not be covered by certain central TSCA requirements.
Assuming that synthetic biology organisms are covered by current regulation, synthetic
biology still raises additional issues under the extant regulatory system. For example,
field-testing of living microorganisms that can reproduce, proliferate, and evolve presents
new types of risks that do not exist for typical field tests of limited quantities of more traditional chemical substances. In a separate vein, some regulatory requirements are triggered by the quantity of a chemical substance that will enter the environment, a standard
that makes sense when dealing with traditional chemical substances that generally
present a direct relationship between mass and risk. These assumptions, however, break
down for synthetic biology microbes that could reproduce and proliferate in the environment (Mandel and Marchant 2014).
It is not surprising that a technology as revolutionary as synthetic biology raises new issues for a legal system designed prior to the technology's conception. Given the unforeseeability of new legal issues and the unforeseeability of new technologies that create
them, it is imperative to design legal systems that themselves can evolve and adapt. Although designing such legal structures presents a significant (p. 243) challenge, it is also
a necessary one. More adaptable legal systems can be established by statute and regulation, developed through judicial decision-making, or implemented via various 'soft law'
measures. Legal systems that are flexible in their response to changing circumstances
will benefit society in the long run far better than systems that rigidly apply existing constructs to new circumstances.
5. Conclusion
The succeeding chapters of Part III investigate how the law in many different fields is responding to the myriad new legal requirements and disputes created by technological evolution. Despite the enormously diverse forms of technological advance, and the correspondingly diverse range of new legal issues that arise in relation to such advance, the legal system's response to new law and technology issues reveals important similarities across legal and technological fields. These similarities provide three lessons for a general theory of the law and regulation of technology.
First, pre-existing legal categories may no longer apply to new law and technology disputes. In order to consider whether existing legal categories make legal and social sense
under a new technological regime, it is critical to interrogate the rationale behind the legal categorization in the first instance, and then to evaluate whether it applies to the new
dispute.
Second, legal decision makers must be mindful to avoid letting the marvels of new technology distort their legal analysis. This is a particular challenge for technologically lay legal decision makers, one that requires sifting through the promise of a developing technology to understand its actual characteristics and the current level of scientific knowledge.
Third, the types of new legal disputes that will arise from emerging technologies are often unforeseeable. Legal systems that can adapt and evolve as technology and our understanding of it develop will operate far more successfully than those that blindly adhere to pre-existing legal regimes.
As you read the following law-and-technology case studies, you will see many instances of
the types of issues described above and the legal system's struggles to overcome them.
Though these lessons do not apply equally to every new law and technology dispute, they
can provide valuable guidance for adapting law to a wide variety of future technological
advances. In many circumstances, the contexts in which the legal system is struggling the
most arise where the law did not recognize or respond to one or more of the teachings
identified. A legal system that realizes the unpredictability of new issues, that is flexible
and adaptable, and that recognizes that new issues produced by technological advance
may not fit well into pre-existing (p. 244) legal constructs, will operate far better in managing technological innovation than a system that fails to learn these lessons.
Acknowledgements
I am grateful to Katharine Vengraitis, John Basenfelder, and Shannon Daniels for their
outstanding research assistance on this chapter.
References
Anthan G, 'OK Sought for Corn in Food' (Des Moines Register, 26 October 2000) 1D
Beavan C, Fingerprints: The Origins of Crime Detection and the Murder Case that
Launched Forensic Science (Hyperion 2001)
Breese v US Telegraph Co [1871] 48 NY 132
Burns F, Communications: An International History of the Formative Years (IET 2004)
Butler J, Forensic DNA Typing: Biology, Technology, and Genetics of STR Markers
(Academic Press 2005)
CompuServe Inc v Cyber Promotions, Inc [1997] 962 F Supp (S D Ohio) 1015
Coordinated Framework for Regulation of Biotechnology [1986] 51 Fed Reg 23, 302
Environmental Protection Agency, 'Biopesticides Registration Action Document' (2000)
brad3_enviroassessment.pdf> accessed 7 August 2015

Gilovich T, Griffin D and Kahneman D, Heuristics and Biases: The Psychology of Intuitive Judgment (CUP 2002)

Kelle A, 'Synthetic Biology & Biosecurity Awareness in Europe' (Bradford Science and Technology Report No 9, 2007)

(p. 245) Khong D, 'An Economic Analysis of SPAM Law' [2004] Erasmus Law & Economics Review 23

Kirk v Gregory [1876] 1 Ex D 5

Lynch M and others, Truth Machine: The Contentious History of DNA Fingerprinting (University of Chicago Press 2008)

Mandel G, 'Gaps, Inexperience, Inconsistencies, and Overlaps: Crisis in the Regulation of Genetically Modified Plants and Animals' (2004) 45 William & Mary Law Review 2167

Mandel G, 'History Lessons for a General Theory of Law and Technology' (2007) 8 Minn JL Sci & Tech 551

Mandel G and Marchant G, 'The Living Regulatory Challenges of Synthetic Biology' (2014) 100 Iowa Law Review 155

Parks v Alta California Telegraph Co [1859] 13 Cal 422

Payne J, USDA/APHIS Petition 97-013-01p for Determination of Nonregulated Status for Events 31807 and 31808 Cotton: Environmental Assessment and Finding of No Significant Impact (1997) www.aphis.usda.gov/biotech/dec_docs/9701301p_ea.HTM> accessed 1 February 2016

People v Jennings [1911] 252 Ill 534

Rao J and Reiley D, 'The Economics of Spam' (2012) 26 J Econ Persp 87

Rex v Stratton and Another [1905] 142 CCC Sessions Papers 978 (coram Channell J)

Rose C, 'Crystals and Mud in Property Law' (1988) 40 SLR 577

State v Lyons [1993] 863 P 2d (Or Ct App) 1303

United States v Llera-Plaza [2002] Nos CR 98-362-10, CR 98-362-11, 98-362-12, 2002 WL 27305, at *517-518 (E D Pa 2002), vacated and superseded, 188 F Supp 2d (E D Pa) 549

Zhang J, Marris C and Rose N, 'The Transnational Governance of Synthetic Biology: Scientific Uncertainty, Cross-Borderness and the "Art" of Governance' (BIOS working paper no 4, 2011)

Further Reading

Brownsword R and Goodwin M, Law and the Technologies of the Twenty-First Century (CUP 2012)

Leenes R and Kosta E, Bridging Distances in Technology and Regulation (Wolf Legal Publishers 2013)

Marchant G and others, Innovative Governance Models for Emerging Technologies (Edward Elgar 2014)

'Towards a General Theory of Law and Technology' (Symposium) (2007) 8 Minn JL Sci & Tech 441-644

Notes:

(1.) Portions of this chapter are drawn from Gregory N Mandel, 'History Lessons for a General Theory of Law and Technology' (2007) 8 Minn JL Sci & Tech 551; portions of section 4.2 are drawn from Gregory N Mandel and Gary E Marchant, 'The Living Regulatory Challenges of Synthetic Biology' (2014) 100 Iowa Law Review 155.

(2.) For discussion of additional contract issues created by technological advance, see Chapter 3 in this volume.

Gregory N. Mandel

Gregory N. Mandel, Temple University-Beasley School of Law