Colloquium on Innovation Policy
Professors Helen Nissenbaum and Katherine Strandburg
W 2:10-4pm FH 210 and Th 5-6:50pm VH 208
Spring 2017, New York University
About the course:
This year's Colloquium examines the legal and policy challenges raised by society's increasing reliance on so-called "big data" in a broad range of public and private endeavors, such as targeted advertising, assessment of creditworthiness, healthcare, and law enforcement. We will explore issues such as privacy, equity, reliability, innovation, and transparency from a variety of perspectives -- societal, legal, ethical, political, and humanistic. Students will read and discuss relevant materials, attend expert background lectures, participate in colloquium sessions where leading scholars will present works-in-progress, attend an interdisciplinary conference, write independent research papers, and present their research to one another at class poster sessions.
Course materials:
1. Weapons of Math Destruction, by Cathy O'Neil (1/26 guest lecturer), available from Amazon, Barnes & Noble, and other online sellers, as well as at many bookstores.
2. All other required reading materials will be posted on NYU Classes or easily accessible online.
Supplemental: Academic Legal Writing, by Eugene Volokh, available online from Amazon and Barnes & Noble. This book is a useful guide to writing law review articles and seminar papers.
Submission guidelines:
1. All assignments should be submitted in Microsoft Word (or any other format editable in Word). Please use the following naming convention for submitted assignments: LastNameAssignmentType.ext (e.g., StrandburgTopic.docx).
2. Comments/questions for colloquium speakers should be posted under "Forums" on NYU Classes. All other assignments should be emailed to both instructors and to Nicole Arzt. Assigned comments on classmates' drafts should also be emailed to the authors.
3. Late submissions will not be accepted without a prior grant of an extension. Extensions will be granted only in cases of serious hardship, such as medical or family emergency. Incompletes will not be given!
Grading: Research papers will count for 70% of the grade, divided as follows: first draft (10%), final paper (50%), poster presentation (10%). The remaining 30% will be based on class participation, comments/questions for colloquium speakers, and comments on classmates' drafts.
Overview of Course Schedule (See detailed schedule below):
Introductory sessions (1/18, 1/25, 1/26 and 2/1): After an initial overview session conducted by Profs. Strandburg and Nissenbaum, three distinguished experts will give guest lectures to provide background on big data technology and applications.
Colloquium presentation weeks (see detailed schedule below): The heart of the course is a series of Thursday afternoon work-in-progress presentations by an interdisciplinary slate of guest scholars. Colloquium students will be joined at these presentations by interested faculty, researchers, students and practitioners. Speakers' draft papers, and other background readings, will be posted on NYU Classes at least one week in advance. The Wednesday class before each colloquium presentation will be devoted to discussing related background readings.
Student Poster Sessions (4/19 and 4/20): We will hold two poster sessions at which students will present their research papers and discuss them with classmates. Half the class will present posters at each session. Paper drafts will be posted on NYU Classes prior to the poster sessions to further facilitate discussion. Refreshments will be provided.
Algorithms and Explainability Conference (4/27 and 4/28): This interdisciplinary conference, organized by NYU's Information Law Institute, will provide cutting-edge perspectives on the question of whether and to what extent decision makers who employ "big data" algorithms should be required to provide understandable explanations to those affected. Students are required to attend the conference for at least two hours -- and are invited and encouraged to attend as many sessions as possible!
Cancelled Classes (1/19, 2/2, 2/9 (snow day), 2/23, 3/22, 3/23, 4/26 and 4/27): This is a 3-credit class scheduled for 4 hours per week, which leaves us several "extra" class sessions. Students are encouraged to use these days off to work on their research papers. Note that the 4/26 and 4/27 classes are cancelled in favor of the Algorithms and Explainability Conference.
Requirements and assignments:
Attendance and Participation: Attending class sessions and participating in class discussion are crucial parts of the learning experience for this class and are mandatory.
Reading Assignments: Readings for the four introductory class sessions are provided in the detailed schedule below. Background reading assignments and speakers' draft papers for each Colloquium week will be posted at least one week in advance.
Research Paper: Each student will write an independent research paper on a topic related to "big data." Each student must submit a topic choice, a tentative thesis statement, a preliminary outline and bibliography, a complete first draft, and a final paper. Substantial writing requirement papers must be at least 10,000 words long, excluding footnotes. Others should be at least 6,000 words long, excluding footnotes.
Poster Sessions: Students must prepare 3' x 4' posters summarizing their research papers and present them during one of two poster sessions. Students must also attend the other poster session and engage actively with their classmates' presentations. More details about the posters and poster sessions will be provided in class.
Questions for Colloquium Speakers: Prior to each of the seven work-in-progress presentations, each student must post three questions/comments about the speaker's paper on NYU Classes. Drafting these questions/comments helps students to prepare for the colloquium sessions. Speakers also find them helpful when revising their papers. Questions/comments for colloquium speakers must be posted by 3pm on the Thursday of the speaker's presentation.
Written Comments on Two Classmates' Drafts: Each student will be assigned to read two classmates' paper drafts and submit brief written comments, which will be shared with the authors of the drafts. Comments should be constructive and serious (and will not be anonymous). These comments are helpful to the paper authors, and commenting on another student's draft is itself a valuable educational experience.
Important dates and deadlines:
2/8 Paper Topic due by midnight
2/22 Tentative thesis statement, preliminary outline and bibliography due by midnight
3/29 First drafts due by midnight
4/12 Comments on two classmates' drafts due by midnight
(First draft returned with professor comments)
4/19 or 4/20 Poster sessions
5/16 Final paper due by 5pm
|Jan 18 ||Introductory Session|
1. Executive Office of the President, Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights (May 2016)
2. Danielle Keats Citron, Technological Due Process, 85 WASH. U. L. REV. 1249 (2008). Read Part I
3. J. Lane, V. Stodden, S. Bender, and H. Nissenbaum (Eds.), PRIVACY, BIG DATA AND THE PUBLIC GOOD: FRAMEWORKS FOR ENGAGEMENT (CUP 2015)
a. Ch. 1: Katherine Strandburg, Monitoring, Datafication, and Consent: Legal Approaches to Privacy in the Big Data Context. Read Secs. I, II, IV.A
b. Ch. 2: Solon Barocas and Helen Nissenbaum, Big Data's End Run Around Anonymity and Consent
|Jan 25 ||Guest Lecture|
Foster Provost, Professor of Data Science and Information Systems, NYU Stern School of Business
1. Solon Barocas et al., Data & Civil Rights: Technology Primer
2. Foster Provost and Tom Fawcett, Data Science and its Relationship to Big Data and Data-Driven Decision Making, 1 BIG DATA 51 (February 2013)
|Jan 26 ||Guest Lecture|
Cathy O'Neil, mathematician and author
1. Cathy O'Neil, WEAPONS OF MATH DESTRUCTION (Crown Random House 2016). Read Ch. 1 and two of Chs. 4-9 (your choice)
|Feb 1 ||Guest Lecture|
Duncan Watts, Microsoft Research
1. Duncan Watts, The Myth of Common Sense: Why the Social World is Less Obvious Than It Seems (Sept. 29, 2011 blog post at freakonomics.com)
2. Travis Martin et al., Exploring Limits to Prediction in Complex Social Systems. Read Secs. 1 & 6, skim Secs. 2-5
3. Duncan Watts, Special People, Ch. 4 from Everything is Obvious: How Common Sense Fails Us (Crown 2012)
4. Duncan Watts, Computational Social Science: Exciting Progress and Future Directions, Frontiers of Engineering (Winter 2013), pp. 5-10.
|Feb 8 ||Paper topic due|
1. Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1 (2014), https://ssrn.com/abstract=2376209
2. Jenna Burrell, How the Machine 'Thinks': Understanding Opacity in Machine Learning Algorithms, Big Data & Soc. 1, Jan.-Jun. 2016, http://journals.sagepub.com/doi/abs/10.1177/2053951715622512
3. Zachary C. Lipton, The Mythos of Model Interpretability, https://arxiv.org/pdf/1606.03490v2.pdf
4. Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2903469
|Feb 15 ||Prep for Angwin Presentation|
1. Review Ch. 2: Solon Barocas and Helen Nissenbaum, Big Data's End Run Around Anonymity and Consent
2. H. Nissenbaum (1996), "Accountability in a Computerized Society," Science and Engineering Ethics, 2:1, 25-42 (available on NYU Classes)
3. Simon Chesterman, Do Driverless Cars Dream of Electric Sheep?, The Straits Times (Sept. 3, 2016), available at https://ssrn.com/abstract=2833701
4. Jason Millar & Ian Kerr, Delegation, Relinquishment and Responsibility: The Prospect of Expert Robots, available at https://ssrn.com/abstract=2234645
|Feb 16 ||Colloquium Presentation|
Julia Angwin, ProPublica
Title: Algorithmic Accountability
Abstract: Machines are making a lot of decisions that used to be made by humans. Machines now help us make individual decisions, such as which news we read and the ads we see. They also make societal decisions, such as which neighborhoods get a heavier police presence and which receive more attention from political candidates. Journalist Julia Angwin talks about the challenges of holding machines accountable for their decisions.
Prep for Poon Presentation on 3/2
1. Poon Abstract and Note to Students, available on NYU Classes
2. John Heilemann, "The Truth, The Whole Truth, and Nothing But The Truth," Wired (Nov. 2000), https://www.wired.com/2000/11/microsoft-7/
3. Kurt Eichenwald, "Microsoft's Lost Decade," Vanity Fair, July 24, 2012, http://www.vanityfair.com/news/business/2012/08/microsoft-lost-mojo-steve-ballmer
4. Richard Waters, "Can Microsoft Get Back into the Game with AI?," The Big Read, Financial Times, Nov. 28, 2016, available on NYU Classes
5. Adam Davidson, "Why Are Corporations Hoarding Trillions?," New York Times Magazine, Jan. 20, 2016, http://www.nytimes.com/2016/01/24/magazine/why-are-corporations-hoarding-trillions.html
6. Moody's Investors Service, "Moody's: US non-financial corporates' cash pile increases to $1.68 trillion, tech holding the lead," Press Release, May 20, 2016, https://www.moodys.com/research/Moodys-US-corporate-cash-pile-led-by-tech-sector-to--PR_357576
|Feb 23 ||TENTATIVE THESIS STATEMENT, PRELIMINARY OUTLINE AND PRELIMINARY BIBLIOGRAPHY DUE|
Solon Barocas, Microsoft Research
Paper: Taking Explanation Seriously in Law and Machine Learning
Abstract: From scholars seeking to "unlock the black box" to regulations requiring "meaningful information about the logic" of automated decisions, recent discussions of machine learning - and algorithms more generally - have turned toward a call for explanation. Champions of explanation charge that algorithms must reveal their basis for decision-making and account for their determinations. But by focusing on explanation as an end in itself, rather than a means to a particular end, critics risk demanding the wrong thing. What one wants to understand when dealing with algorithms - and why one would want to understand it—can vary widely. Often, the goals that motivate calls for explanation could be better served by other means. Worse, ensuring that algorithms can be explained in ways that are understandable to humans may come at the cost of another value that scholars, regulators, and critics hold dear: accuracy. Reducing the complexity of a model, for example, may render it more readily intelligible, but also hamper its performance. For explanation to serve its intended purpose and to find its appropriate place among a number of competing values, its champions need to consider what they hope it to achieve and what explanations actually offer.
|Mar 2 ||Colloquium Presentation|
Martha Poon, Research Affiliate, The Committee on Global Thought, Columbia University
Paper: Borrowing to Pivot: What can Microsoft tell us about the financing of the cloud?
Abstract: Microsoft Corporation is a global giant, but even giants have to keep pace. After making its fortune in a humbler era of shrink-wrapped software, and maturing during the browser wars, the company will now pursue a data-driven business model. In June 2016, as part of a strategy to merge enterprise software with digital automation and machine learning, it announced that it would buy out professional social media site LinkedIn in a $26bn all-cash transaction. Microsoft is borrowing to pivot. To finance the acquisition it sold $19.7bn in investment-grade bonds, backed by a corporate cash reserve of over $105bn. This paper will show that the tech sector is building up its ambitious cloud-computing infrastructure in a unique financial environment, in which a handful of tech corporations command an unprecedented hoard of cash as well as access to abundant, low-interest, debt financing. The research unearths a kind of Faustian pact between the global money markets' insatiable need to produce tradable credit assets, a global multinational's need to sustain shareholder value, and what some are calling 'surveillance capitalism'. The paper argues that Silicon Valley's potent new business model is a silent beneficiary of the post-2008 credit contraction. What if the cloud is a result of dynamics in financial markets, and not just of technical innovation? How would this challenge our approach to policy and regulation?
Prep for Hildebrandt Presentation
1. Mireille Hildebrandt, Law as Information in the Era of Data-Driven Agency (Chorley Lecture 2015), The Modern Law Review (79) 1, 1-30, http://onlinelibrary.wiley.com/doi/10.1111/1468-2230.12165/abstract
2. Mireille Hildebrandt, Smart Technologies and the End(s) of Law (Cheltenham: Edward Elgar 2015), Chs. 7 and 8, pp. 133-185
Mireille Hildebrandt, Professor of Law, Vrije Universiteit Brussel; Professor of Smart Environments, Data Protection and the Rule of Law at the Science Faculty, Radboud University, Nijmegen, The Netherlands
Paper: From Law as Information to Law as Computation
Abstract: The idea of computational law or jurimetrics stems from a previous wave of artificial intelligence. It was based on an algorithmic understanding of law, celebrating logic as the sole ingredient for proper legal argumentation. However, as Holmes noted, the life of the law is experience rather than merely logic. Machine learning, which determines the current wave of artificial intelligence, is built on machine experience. The resulting computational law may be far more successful in terms of predicting the content of positive law. In this lecture we will discuss the assumptions of law and the rule of law and confront them with those of computational systems. This should inform the extent to which artificial legal intelligence provides for responsible innovation in legal decision making.
|Mar 15 ||Spring break|
|Mar 16 ||Spring break|
|Mar 29 ||Prep for Evans Presentation|
FIRST DRAFT DUE
|Mar 30 ||Colloquium Presentation|
Barbara Evans, George Butler Research Professor, University of Houston Law Center
|Apr 5 ||Prep for Connelly Presentation|
|Apr 6 ||Colloquium Presentation|
Matthew J. Connelly, Professor of International and Global History, Columbia University
|Apr 12 ||Prep for Joh Presentation|
|Apr 13 ||Colloquium Presentation|
Elizabeth Joh, Professor of Law, U.C. Davis School of Law
Paper: Police as Data Creators
Abstract: Police investigate crime, but they also create the very data that forms our understanding about crime and criminal law. If police are data creators, the how and why of that data creation has direct consequences on not just our knowledge about crime, but also on the programs today that store, process, and reuse vast quantities of data for policing. Surprisingly, this aspect of policing receives little attention in any systematic way. In an age of big data, however, the ability of the police to create, shape, ignore, and produce data demands fresh attention because of its profound consequences elsewhere in the criminal justice system. This article explains the significance of police as data creators, explains some of the potential harms of this creation, and suggests ways that data creation by the police can be subjected to greater scrutiny by the public.
|Apr 19 ||Student Poster Session|
|Apr 20 ||Student Poster Session|
|Apr 27 ||Algorithms and Explainability Conference|
|Apr 28 ||Algorithms and Explainability Conference|
|May 16 ||FINAL PAPERS DUE AT 5PM|