System Error:

Where Big Tech Went Wrong and How We Can Reboot

Rob Reich, Mehran Sahami, and Jeremy M. Weinstein

Remember when digital technology and the internet were our favorite things? When free Facebook accounts connected us with our friends, and the internet facilitated democracy movements overseas, including the Arab Spring? So do the authors of this comprehensive book, who note (p. 237) that social networks are now viewed as "a place for disinformation and the manipulation of public health and elections." On digital technology more broadly, they write, "We shifted from a wide-eyed optimism about technology's liberating potential to a dystopian obsession with biased algorithms, surveillance capitalism, and job-displacing robots."

This transition has not escaped the notice of the students and faculty of Stanford University, the elite institution most closely associated with the rise (and continued dominance) of Silicon Valley. The three authors teach a popular course at Stanford on the ethics and politics of technological change, and this book effectively brings their work to the public. Rob Reich is a philosopher associated with Stanford's Institute for Human-Centered Artificial Intelligence as well as its Center for Ethics in Society. Mehran Sahami is a computer science professor who worked at Google during its startup years. Jeremy Weinstein is a political science professor who served in government during the Obama administration.

The book is breathtakingly broad, explaining the main technical and business issues concisely without oversimplifying, and providing the history and philosophy for context. It accomplishes all this in 264 pages, plus 36 pages of notes and references for readers who want to dive more deeply into particular topics. The most important section is doubtless the final chapter on solutions, which may be politically controversial but are well supported by the rest of the book.

Modern computer processors have enormous computational power, and a good way to take advantage of it is optimization, the subject of the first chapter. Engineers love optimization, but not everything should be done as quickly and cheaply as possible! Optimization requires the choice of some quantifiable metric, yet the available metrics often do not exactly represent the true goal of an organization. In such cases, optimizers choose a proxy metric that they believe, logically or intuitively, is correlated with that goal.

This has been controversial in recent years in medicine, where drug approvals are sometimes sought on the basis of biological markers rather than demonstrated life extension. The authors describe the problems that result in technology when the wrong proxy is selected and excessive optimization drives that measure to the exclusion of other, possibly more important, factors. For example, social media companies that try to increase user numbers and engagement to the exclusion of everything else may experience serious side effects, such as the promotion of toxic content.
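
To make the proxy-metric problem concrete, here is a minimal, purely illustrative Python sketch (the feed items, engagement scores, and toxicity labels are invented for this review, not drawn from the book): a ranker that optimizes solely for predicted engagement will happily push toxic items to the top, because the proxy (engagement) is only loosely correlated with the true goal (a healthy, well-informed user base).

```python
# Illustrative only: a ranker that optimizes a proxy metric (predicted engagement)
# with no regard for the true goal (healthy discourse). All data is made up.

feed_items = [
    {"title": "Local charity drive succeeds", "engagement": 0.21, "toxic": False},
    {"title": "Outrage bait: you won't believe this", "engagement": 0.87, "toxic": True},
    {"title": "City council budget explained", "engagement": 0.15, "toxic": False},
    {"title": "Inflammatory rumor about a rival group", "engagement": 0.92, "toxic": True},
]

def rank_by_engagement(items):
    """Optimize the proxy: sort purely by predicted engagement."""
    return sorted(items, key=lambda item: item["engagement"], reverse=True)

ranked = rank_by_engagement(feed_items)
print("Top of feed:", ranked[0]["title"])                   # the most toxic item wins
toxic_share = sum(item["toxic"] for item in ranked[:2]) / 2
print(f"Toxic share of top two slots: {toxic_share:.0%}")   # 100% in this toy example
```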

After that discussion of the pros and cons of optimization, the book dives into the effects of optimizing that ultimate metric, money. Venture capitalists (VCs) have been around for years, but recent tech booms have swelled their numbers, as successful entrepreneurs have cashed out and become VCs themselves. The methodology of Objectives and Key Results (OKR), originally developed by Andy Grove at Intel, became popular among the VCs of Silicon Valley, whose portfolio companies, including Google, Twitter, and Uber, adopted it. OKRs allow most employees to be evaluated against a metric that management believes captures the essence of their jobs, so naturally employees work hard to optimize that quantity. Again, such a narrow view of the job has led to significant unexpected, and sometimes unwanted, side effects.

To maximize earnings potential, companies pursue rapid growth to take advantage of network effects, even if that means "moving fast and breaking things" or skimping on quality assurance and bug fixing before releasing products. Facebook reached a critical mass of users so quickly that even Google, launching Google+, was unable to compete.
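
The economics of critical mass can be shown with a back-of-the-envelope calculation. Using Metcalfe's rough rule of thumb that a network's value grows with the square of its user count (an assumption of this sketch, not a claim from the book, and the user counts below are round illustrative numbers rather than actual Facebook or Google+ figures), an incumbent with ten times the users offers roughly a hundred times the value, which is why a late entrant struggles even with enormous resources behind it.

```python
# Back-of-the-envelope network-effect comparison (Metcalfe's rule of thumb: value ~ n^2).
# User counts are round illustrative numbers, not actual Facebook/Google+ figures.

def metcalfe_value(users):
    """Approximate network value as proportional to the number of possible connections."""
    return users * (users - 1) / 2

incumbent_users = 1_000_000_000   # hypothetical incumbent with critical mass
entrant_users = 100_000_000       # hypothetical late entrant

ratio = metcalfe_value(incumbent_users) / metcalfe_value(entrant_users)
print(f"Value ratio, incumbent vs. entrant: ~{ratio:.0f}x")  # roughly 100x
```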

The big tech companies are threatened by legislation designed to mitigate some of the harm they have created, such as loss of privacy or reduced pay and benefits for gig workers. They have hired a great many lobbyists and, where possible, entered the political process overtly. In California, when Assembly Bill 5 reclassified many independent contractors as employees, the affected tech companies struck back with Proposition 22, which exempted app-based drivers from the law. An avalanche of very expensive promotion of Prop 22 resulted in its passage by a large margin.

It is well known that very few politicians have a technical background, and the authors speculate that this probably contributes to the libertarian leanings prominent in the tech industry. They go back in history as far as the Triangle Shirtwaist Fire of 1911 to show how regulation has lagged technology and industrial practice. An interesting chapter addresses the philosophical question of whether democracy is up to the task of governing, or whether government by experts, Plato's "philosopher kings," would be better.

Part II of the book is the longest, addressing the fairness of algorithms, privacy, automation and the displacement of human jobs, and free speech. The authors point out some epic algorithmic failures, such as Amazon being unable to automate resume screening to find the best candidates, and Google Photos labeling Black users as gorillas. The big advances in deep learning neural nets come from clever algorithms plus the availability of very large datasets, but if your data shows that you have historically hired 95% white men for a position, training an algorithm on that data is hardly going to move you into a future with greater diversity. Even more concerning are proprietary black-box algorithms used in the legal system, such as for probation recommendations. Why not just let humans have the last word, advised by the algorithms? The authors remind us that one of the selling points of algorithmic decision making is the removal of human bias; returning humans to power returns that bias as well. Defining fairness is yet another ethical and philosophical question.
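
The "biased data in, biased decisions out" mechanism can be shown with a deliberately simplified sketch. The historical records below are fabricated and the "model" is nothing more than a frequency table, but it captures the point: a screener trained only to imitate past decisions reproduces the 95%-white-male hiring pattern it was shown.

```python
# A deliberately naive "resume screener" that learns only from historical outcomes.
# The records are fabricated for illustration; the point is that imitating biased
# history reproduces the bias.

from collections import defaultdict

history = (
    [{"group": "white_male", "hired": True}] * 95
    + [{"group": "other", "hired": True}] * 5
    + [{"group": "other", "hired": False}] * 95
    + [{"group": "white_male", "hired": False}] * 5
)

def train(records):
    """Estimate P(hired | group) from past decisions -- nothing more."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for record in records:
        counts[record["group"]][0] += record["hired"]
        counts[record["group"]][1] += 1
    return {group: hired / total for group, (hired, total) in counts.items()}

model = train(history)
print(model)  # {'white_male': 0.95, 'other': 0.05} -- the historical bias, learned verbatim
```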

The authors give a good overview of privacy, which is a legal right in Europe but not in the United States. Americans say they are very concerned about their privacy, but their actions belie that position, as they willingly surrender privacy online for very little in return. Is the solution to that conundrum a legal privacy mandate for everyone? The European Union seems to think so, having passed the General Data Protection Regulation, and California has passed the similar California Consumer Privacy Act, although it is a little too soon to evaluate their effects.

The automation chapter is entitled "Can humans flourish in a world of smart machines?" and it covers many philosophical and ethical issues after providing a valuable summary of the current state of "AI." Although machines are able to defeat humans at games like chess, Go, and even Jeopardy!, more useful capabilities such as self-driving cars have not yet reached that level. Utopian predictions of AGI (artificial general intelligence, or strong AI), in which a machine can set its own goals in a reasonable facsimile of a human, seem quite far off, but that doesn't take us completely off the hook ethically. Current AI (weak AI) can perform many tasks usefully, and automation is already displacing some human labor. The authors discuss the economics, ethics, and psychology of automation, because human flourishing involves more than financial stability; the self-esteem associated with gainful employment is not a trivial thing. The chapter raises many more important issues than can be mentioned here.

The chapter on free speech also casts a wide net. Free speech as we experience it on the internet is vastly different from the free speech of yore, standing on a soapbox in the public square. The sheer volume of speech today is incredible, and the power of the social media giants to edit it or ban individuals is also great. Disinformation, misinformation, and harassment are rampant, and polarization is increasing. There is a shortage of good research on the extent to which social media companies and other internet publishers can be blamed for these perceived negative outcomes; the authors point out that some research suggests polarization has increased the most among people who use the internet the least. (That shortage of research supports the recent decision of the Proceedings of the National Academy of Sciences to publish an article by Bak-Coleman et al., "Stewardship of Global Collective Behavior," calling the situation critical and proposing additional research.)

Direct incitement of violence, child pornography, and video of terrorist attacks are taken down as soon as the internet publishers are able, but hate speech is more difficult to define and detect. Can AI help? As with most things, AI can detect the easier cases but is not effective on the more difficult ones. From a regulatory standpoint, Section 230 of the Communications Decency Act of 1996 (CDA 230) immunizes the platforms from legal liability for the actions of their users. Repealing or repairing CDA 230 is a popular cry from both the left and the right. The authors make a good case that "it is realistic to think that we can pursue some commonsense reforms."
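
Why the easy cases are easy and the hard cases are hard can be seen in even the crudest moderation approach. The blocklist below uses placeholder tokens rather than real slurs, and the example posts are invented for this review; the point of the sketch is that a filter keyed to explicit phrases catches blatant abuse while missing coded, ironic, or context-dependent hostility, which is the part of the problem that actually matters.

```python
# A toy keyword filter, for illustration only. The blocklist uses placeholder tokens
# rather than real slurs; the example posts are fabricated.

BLOCKLIST = {"explicit_slur_1", "explicit_slur_2"}

def flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted token (the 'easy' cases)."""
    text = post.lower()
    return any(token in text for token in BLOCKLIST)

posts = [
    "You are an explicit_slur_1 and everyone knows it.",  # blatant: caught
    "People like you always ruin this neighborhood.",     # coded hostility: missed
    "Oh sure, 'those people' are just wonderful...",       # sarcasm/dog whistle: missed
]

for post in posts:
    print(flag(post), "|", post)
```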

The final part of the book is relatively short but very important, as the authors address the question "Can Democracies Rise to the Challenge?" Earlier in the book the authors touch on various European and US regulatory regimes that attempt to rein in the tech giants. Here, that story is augmented with the interesting case history of Taiwan, in which Audrey Tang, a developer with Silicon Valley experience, took on the job of "digital minister," providing digital tools to enhance civic participation and economic development. Taiwan's highly successful response to Covid-19, which originated in nearby China, is notable, although a big part of that success was due to trust in government, something lacking in the United States.

The authors discuss the history of medicine in the US as another example of government regulation. The 1910 Flexner Report exposed deplorable conditions in medical schools, which resulted in the establishment of minimum standards in medical education as well as medical licensing. The Nuremberg doctors' trial of 23 defendants after World War II exposed the brutal experiments Nazi doctors performed on concentration camp prisoners, resulting in the 1947 Nuremberg Code regulating experiments on humans. The Tuskegee experiment, which unethically withheld syphilis treatment from African American men, began in 1932 and was exposed in 1972, resulting in a major expansion of regulations concerning medical research.

Digital technology does not have as long a history as medicine, so few efforts have been made to regulate it. The authors mention the Association for Computing Machinery (ACM) Software Engineering Code of Ethics, but point out that there are no real penalties for violating it beyond, presumably, expulsion from the ACM. Efforts to license software engineers have not borne fruit to date.

The authors argue that the path forward requires progress on several fronts. First, discussion of values must take place at the early stages of development of any new technology. Second, professional societies should renew their efforts to increase the professionalism of software engineering, including strengthened codes of ethics. Finally, computer science education should be overhauled to incorporate this material into the training of technologists and aspiring entrepreneurs.

The authors conclude with the recent history of attempts to regulate technology, and the associated political failures, such as the defunding of the congressional Office of Technology Assessment. It will never be easy to regulate powerful political contributors who hold out the prospect of jobs to politicians, but the authors make a persuasive case that it is necessary. China employs a very different authoritarian model of technical governance, which challenges us to show that democracy works better.

This volume is an excellent reference on the very active debate on the activities of the tech giants and their appropriate regulation. It discusses many of the more concerning events of the recent past and provides good arguments for some proposed solutions. We need to be thinking and talking about these issues, and this book is a great conversation starter.

