An Ex-Google Engineer Is Founding a Religion to Worship AI. He’s Decades Too Late.


The headlines on this one could almost write themselves. Anthony Levandowski, the disgraced former Google engineer whose copying of trade secrets led Waymo (Alphabet’s self-driving car company) to file a $1.86 billion lawsuit against Uber, founded an organization called “Way of the Future” back in 2015. Its purpose, according to state filings, was to “develop and promote the realization of a Godhead based on Artificial Intelligence.”

At first glance, the idea seems utterly preposterous. But I’d argue Levandowski’s mistake isn’t his dubious attempt to position a digital deity as a substitute for the decidedly more analog versions offered by conventional religions. It’s in failing to realize that he’s decades too late. People already put that kind of faith in machines.

Ever since the dawn of modern computing, computers have been viewed and portrayed as offering better-than-human capabilities in many respects. The original meaning of the word “computer” dates to 1613 and meant “one who computes.” Human computers produced trigonometric and logarithmic tables at the end of the 19th century, and performed calculations for research in fluid dynamics and meteorology. As digital computers became more powerful in the mid-20th century, that human definition was supplanted by the idea that a computer was a thing that computed rather than a person. As Betty Jennings, one of the women who programmed ENIAC (considered one of the first, if not the first, electronic, general-purpose computers) in the mid-1940s, observed: “ENIAC calculated the trajectory faster than it took the bullet to travel.”

A few years later, the UNIVAC I mainframe computer successfully predicted a blowout election for Eisenhower in 1952, even as human pollsters called the election for Adlai Stevenson. Part of what’s significant about that event is that the computer called the race hours ahead of time, but CBS refused to believe it and didn’t run UNIVAC’s projections, despite having previously stated it would do so. The network staged footage and announced the computer had put the odds of an Eisenhower victory at 8-7, when the actual predicted odds were 100-1. Once it became clear UNIVAC was within 1 percent of the right projection, CBS had to admit the computer had been right and its own experts wrong.

The idea that digital computers were vastly superior to humans in certain respects had begun to take root by the early 1950s at the very latest. The groundwork for Levandowski’s belief that AI could fulfill such a role was laid decades before the rise of the internet, or even the invention of the personal computer.

Science fiction of the time period, however, still expressed considerable unease about various aspects of technology. While some characters, like Isaac Asimov’s R. Daneel Olivaw, were portrayed positively, the original Star Trek featured androids or computers as primary or secondary antagonists in 16 percent of its episodes (13 episodes total). Perhaps more significantly, only one episode portrayed a sentient (or sentience-mimicking) machine in a neutral light–“The City on the Edge of Forever.” In every other case, androids, robots, and advanced computer systems were depicted as adversaries.

While few of the above depictions painted AI as explicitly evil, it was common to show artificial intelligence that had broken down for various reasons, or had been trapped in a malignant failure mode like perverse instantiation–defined as an AI remaining faithful to the goals of its creators, but accomplishing those goals through destructive behavior its creators never intended. McCoy’s greatest and most consistent criticism of Spock is that he’s out of touch with his feelings and a glorified meat-based computer with legs. At no point in the original show is an artificial life-form depicted completely positively, or without considerable (and well-deserved) apprehension from the Enterprise crew.
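Perverse instantiation is easier to see in code than in prose. Here is a minimal, entirely hypothetical Python sketch: the optimizer does exactly what its objective says, which is not what its designers meant. Every action, cost, and number below is invented for illustration.

```python
# Hypothetical sketch of perverse instantiation: the optimizer stays perfectly
# faithful to the objective it was given (close complaints at the lowest cost),
# but satisfies it in a way its designers never intended.

ACTIONS = {
    # action name: cost of taking the action, and complaints it makes go away
    "fix_reported_problems": {"cost": 100.0, "complaints_closed": 10},
    "hide_complaint_form":   {"cost": 1.0,   "complaints_closed": 10},
}

def pick_action(actions):
    """Choose the action with the best complaints-closed-per-unit-cost ratio --
    exactly what it was asked to do, and nothing more."""
    return max(actions, key=lambda a: actions[a]["complaints_closed"] / actions[a]["cost"])

print(pick_action(ACTIONS))  # -> 'hide_complaint_form': the goal is met, the intent is not
```

Nothing in that function is malfunctioning; the objective itself was the problem.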


The one sentient machine that wasn’t an antagonist on TOS was a poorly constrained space-donut.

Star Trek is a useful point of reference here because the show and its spinoffs span such a large time period. By the TNG era, artificial life forms were no longer uniformly presented as threats. Data and his quest to understand humanity and himself more fully are portrayed positively, and contrasted with the actions of other artificial characters, like Data’s older, less stable ‘brother,’ Lore. Data’s rights as an individual, his ability to command a starship, and his own impassioned defense of an artificial life form no one else recognizes as alive are all depicted with nuance.

By the time Voyager was on the air, an episode dealing with the perverse instantiation of an entire species of artificial life-forms was more unusual than episodes exploring how artificial and organic life could work together and learn from each other. That’s by no means a bad thing, but the difference in these stories reflected societal shifts in how computers and AI were perceived. Those changes–and indeed, the perception that algorithms are somehow neutral evaluative criteria that can rise above the unconscious biases or prejudices of humanity–have created an unconscious deference to algorithmic decision-making that may not count as “worship,” but definitely represents some degree of extended faith.

Wired has done some excellent reporting on how algorithms are being used to determine whether criminals are flight risks, with no insight allowed into how they work or weigh their conclusions, and how gender bias (PDF) can creep into image recognition software. Last year, Microsoft’s chatbot Tay was derailed by Nazi rhetoric–and trained, to some extent, to respond independently in similar vernacular–in less than a day.


To be fair, some of what Tay tweeted was just plain weird.

Malfunctioning algorithms have stripped the wrong people of their driver’s licenses, and terminated the health benefits of thousands of seniors and low-income residents of California. These problems aren’t new; back in the late 1980s, an investigation into the use of algorithms to select the students who would be accepted into St. George’s Medical School in the UK (PDF) found the algorithm hadn’t prevented racial or gender bias, but had instead encoded the bias of its own designers into its supposedly neutral decisions. But so long as AI and neural networks remained the stuff of ivory towers and academic research, the mainstream public had little reason to care.
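The St. George’s case is worth pausing on, because the mechanism is so mundane. As a purely hypothetical sketch (not the actual program, whose features and weights are invented here), a screening formula tuned to agree with past human decisions will faithfully reproduce whatever those decisions already penalized:

```python
# Hypothetical sketch: a "neutral" screening formula whose weights were chosen
# to match historical admissions decisions -- decisions that had quietly
# penalized women and applicants with non-European names.

def interview_score(applicant, weights):
    """Weighted sum of application features; top scorers are invited to interview."""
    return sum(weights[feature] * applicant[feature] for feature in weights)

weights = {"exam_marks": 1.0, "is_female": -3.0, "non_european_name": -5.0}

a = {"exam_marks": 85, "is_female": 0, "non_european_name": 0}
b = {"exam_marks": 85, "is_female": 1, "non_european_name": 1}

# Identical academic records, different outcomes -- and the rule looks "neutral."
print(interview_score(a, weights), interview_score(b, weights))  # 85.0 vs 77.0
```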

In the last decade, however, we’ve seen an explosion in the use of algorithms in everything from determining what news you’ll see in your Facebook feed to whether you’re a good loan risk based on who your social media friends are. Target has predictive algorithms that “know” you’re pregnant before your family does, even if you’d have preferred to keep that information private. Companies like Facebook, Google, and Twitter bend over backwards to paint themselves as neutral content platforms. Yet under the surface, Facebook’s algorithmic advertising system was offering ad buys targeting people who identified as a “Jew hater,” or who were interested in topics like “how to burn jews” and “why jews ruin the world.” To an algorithm looking to match advertisers and ad targets, “Jew hater” is as valid a category as “Wal-Mart shopper.” And YouTube suffered significant advertising losses earlier this year when mainstream advertisers discovered their ads were running alongside extremist content.
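A toy sketch, assuming nothing about Facebook’s actual implementation, shows why: to a matching engine, interest labels are opaque strings, and whatever label an advertiser selects intersects user profiles through exactly the same code path. The users and labels below are invented.

```python
# Minimal sketch of why a targeting engine treats "Jew hater" and
# "Wal-Mart shopper" identically: categories are just strings to intersect.

user_interests = {
    "user_a": {"gardening", "wal-mart shoppers"},
    "user_b": {"jew hater", "gardening"},   # self-described "interest"
}

def eligible_audience(campaign_targets, users):
    """Return every user whose interest set overlaps the campaign's targets."""
    return [user for user, interests in users.items() if interests & campaign_targets]

# Without a separate human or policy review step, both campaigns are served
# by the same code path.
print(eligible_audience({"wal-mart shoppers"}, user_interests))  # ['user_a']
print(eligible_audience({"jew hater"}, user_interests))          # ['user_b']
```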


It was actually worse than this. I know, because I specifically checked after seeing people claiming it was true on social media. Simply searching for “The Holocaust” returned top results from white supremacist and Nazi sites. Humans can differentiate between content types and sources. Algorithms aren’t nearly as good at it.

Similarly, Google agreed to make a rare manual adjustment to its own search results after news broke that the search engine’s top results for queries about the Holocaust pointed to a prominent Nazi website. Google’s engineers may have every intention in the world of creating a neutral ranking algorithm. But the algorithms and automated systems that govern our social interactions, dating apps, product recommendations, risk profiles, and the ads we see aren’t neutral. They’re designed to surface specific content, to encourage you to buy more products or spend more time on a website. And for every tweak Google makes to improve its results, a legion of search engine optimization (SEO) services follows along behind, promising the latest tricks and ways to game Google’s ranking systems to bump your company or organization up, while bringing your competitors down.

More than a few people have been killed or stranded by GPS instructions that led them in the wrong direction, sometimes even when the evidence of their own senses indicated the route was becoming worse, not better. The challenge with evaluating when algorithms and machine intelligence are returning good data is that in some cases, computers are absolutely superior to our own capabilities.

A team of engineers working on a new vehicle chassis cannot simulate its performance in a dozen different operating conditions one-thousandth as capably as a computer. Facebook’s People You May Know feature occasionally coughs up people I did once know, and would like to be in touch with again. While GPS navigation and mapping services can lead you in the wrong direction–Apple Maps once memorably took me on a 15-mile trip and dropped me in a quiet suburb when I asked it to locate the nearest Chase Bank in an unfamiliar town–they mostly don’t, and these systems have only improved with time.

That’s part of the problem. When a company pushes out a new software version, it leads with patch notes like “Improved route management,” not “Drivers approaching Springfield on I-95 will no longer be directed over the edge of an abandoned gravel quarry.” Quantitative improvements to many platforms are obfuscated in ways that make them difficult to systematically evaluate. We trust them anyway.

The outsourcing of human interaction to algorithms has other implications as well. A person of color who regularly sees 10-20 empty taxis go by before one stops for them, while white people nearby have no such problem, has at least circumstantial evidence he or she is being discriminated against in an illegal way. In contrast, a person who is waiting for an Uber never “sees” their would-be driver at all. To hear Uber tell it, the connection between drivers and passengers is created by a neutral matching algorithm. Yet one study last year found that riders who used African-American-sounding names in their Uber profiles had a far higher percentage of their rides canceled: 11.2 percent for men with African-American-sounding names versus 4.5 percent for men with white-sounding names, and 8.4 percent versus 5.4 percent for women. In this case, Uber’s algorithm for matching drivers to riders may well be neutral, but its outcomes aren’t. The very nature of the service obscures the evidence that might otherwise be used as part of a discrimination complaint against the company.
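A quick back-of-the-envelope calculation using the study’s own figures makes the disparity explicit:

```python
# Cancellation rates reported in the study cited above, in percent.
cancellation_rates = {
    "men":   {"african_american_sounding": 11.2, "white_sounding": 4.5},
    "women": {"african_american_sounding": 8.4,  "white_sounding": 5.4},
}

for group, rates in cancellation_rates.items():
    ratio = rates["african_american_sounding"] / rates["white_sounding"]
    print(f"{group}: cancellation roughly {ratio:.1f}x more likely "
          f"with an African-American-sounding name")
# men: ~2.5x, women: ~1.6x -- a disparity no individual rider can observe directly.
```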

We do not worship the Analog Father, Digital Son, and the Holy Data Ghost, or declare them to be the intertwined representation of the Sacred Ternary. We likely never will. But increasingly, and often unconsciously, we rely on algorithms and supposedly neutral computer services to shape our lives in ways both profound and inconsequential. We take it on faith that in both cases the algorithms that govern the products and services we use have at least some of our own best interests at heart. Meanwhile, the false prophets of the Church of the Neutral Algorithm live on, shaping our understanding of a foundational AI myth that is rarely questioned, but has profound effects on our understanding of the world and our place within it.


