AI is not coming for you

A joke that has been circulating around the tech world for a while now is that the way to get VCs to eat out of your hand is to slap the words “AI,” “machine learning” and/or “blockchain” on your pitch deck.

As they say, the best jokes are indistinguishable from reality.

I’ve come to the conclusion that most of the exuberance about artificial intelligence is founded on sand. I’ve gotten there both from a closer look at what amounts to “artificial intelligence” today and from a good understanding of how software itself works and, importantly, how it is developed within a corporate setting.

A very large majority of the AI hype out there today is pure attention-seeking nonsense. Generously, one might see it as yet another example of a certain genre of tech hucksterism (see: self-driving cars, internet-beaming drones, 3D printing). Less generously, however, it isn’t hard to see the lavish marketing machine around AI as a strategy to depict deliberate choices by dominant software platforms as technical inevitabilities.

“AI” is not something anyone needs to be worried about. A world mediated by unaccountable corporate software platforms is.

AI-washing

The term “AI” is increasingly stretched to apply to any new field of software – and even quite old ones as well. This is, in fact, a widely remarked-upon phenomenon in “AI” research – sometimes called the “AI effect” – in which whatever technique is used to solve a given problem becomes its own definable category, and is thus no longer “really AI.”

Translation software was once considered by serious people to be “AI” – until it became easy. Optical pattern recognition, natural language processing, auto-navigation and chess went the same way. To be sure, IBM’s Deep Blue and, later, Watson “AI” systems proved very adept at the tasks they were pointed at. But as we all later found out, those systems became significantly less impressive when directed at tasks only a few degrees removed from the ones they were specially designed for. They were simply very powerful tools, not “intelligence” in any true sense.

The problem with this trend is not that it waters down a term like “artificial intelligence,” whose definition is effectively half science fiction anyway. Rather, it recasts deliberate human choices as if they were derived dispassionately from data, thus imbuing them with some manner of impartial truth.

Amazon’s embarrassing experiment with an “AI” resume-review system that systematically downgraded female applicants is a perfect example. The system produced “correct” results in the sense that they mostly conformed to the training data. That data, however, came from the real world, and was thus ineluctably tainted by the exact biases the automated system was supposed to overcome. Thus, the results were totally useless. Garbage in, garbage out.

(It’s also worth mentioning that the approach Amazon used sounds like a fairly straightforward model-scoring analytic technique, which has been quite common in many industries for decades. Is model scoring “AI” now?)

In this way, the model could only be as good as the data used to train it. In the real world, that data is in no way impartial or free of bias. Yet casting this as a failure of “AI,” in the sense that the technology “just isn’t ready yet,” misses the real cause: that humans are affected by real-world inequities which inevitably influence human-built technologies. There is just no way to “product” our way around this.
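To make “garbage in, garbage out” concrete, here’s a toy sketch in Python (assuming numpy and scikit-learn; the data is synthetic and the feature names are invented – this is not Amazon’s actual system). Train a scoring model on biased historical hiring decisions and it dutifully learns the bias, even though gender is never an input:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

years_experience = rng.normal(5, 2, n)        # a legitimate signal
womens_org_on_resume = rng.integers(0, 2, n)  # a proxy for gender

# Historical labels: past recruiters rewarded experience but also
# systematically passed over resumes mentioning women's organizations.
hired = (years_experience - 2.0 * womens_org_on_resume
         + rng.normal(0, 1, n)) > 4.5

X = np.column_stack([years_experience, womens_org_on_resume])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the bias in its training data:
# a large negative coefficient on the proxy feature.
print(dict(zip(["experience", "womens_org"], model.coef_[0].round(2))))
```

The model here is working exactly as designed. The flaw sits in the data, and no amount of engineering on the model’s side can fix that.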

How the software sausage is made

Global, monopolistic platforms like Google, Facebook and Amazon do not pursue “AI” as a science project. Nor do hospitals, insurance companies, banks, airlines or governments. They pursue it for specific, strategic purposes, which in corporate settings are aimed at generating new revenue. Like any technology choice, these purposes can sometimes align with consumer benefit, but often do not. The determination all comes down to: it depends!

In many ways, the adoption of “AI” closely resembles that of other tech buzzword concepts like “big data” and “analytics.” Sometimes, they’re literally the same thing: where banks once used “big data” to score mortgage applicants for estimated creditworthiness, now it’s called “AI” because it uses a form of automatic adjustment to statistical models, which we now call “machine learning.” Mysteriously, this has not eliminated demonstrable racial bias in mortgage lending or auto insurance, to pick just two of many such examples. The reason is not only that race is used as a factor in assigning risk (not unlike Amazon’s system picking “female” as a proxy for “inadequate”), but also that lenders are obviously incentivized to find new ways to make money on loans and insurance rates. And it turns out that racially discriminatory lending and coverage can be quite profitable – particularly when enabled with technological precision.
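For what it’s worth, the “automatic adjustment to statistical models” in question is often nothing more exotic than the following. This is a minimal, hypothetical sketch (plain numpy; invented feature names), not any bank’s actual system – just the decades-old statistics that now gets sold as “machine learning”:

```python
import numpy as np

def fit_scorecard(X, y, lr=0.1, steps=1_000):
    """Fit P(default) = sigmoid(X @ w) by gradient descent on log-loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted default probability
        w -= lr * X.T @ (p - y) / len(y)   # the "machine learning" step
    return w

# Hypothetical columns: [income, debt ratio, ZIP-code risk score]. If the
# last column correlates with race, the fitted weights will encode that,
# and dropping an explicit "race" column does nothing to prevent it.
```

Statisticians were fitting exactly this kind of model long before anyone called it “AI.” The rebranding changes the marketing, not the math.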

From banking and insurance to YouTube’s algorithm and Facebook’s news feed, it’s become extremely popular to refer to the “AI” governing the critical decisions each company makes. One reason is that “AI” is easy to blame when things go wrong (“whoops, the system made a mistake!”), but it also conceals the deliberate human choices behind how those systems actually work. Facebook has built the “AI” behind its News Feed to maximize engagement, and YouTube has built its recommendations to (somewhat clumsily) match interests and keep you watching videos. This is why, as a 30-something-year-old white man, portals into the alt-right rathole literally follow me around these platforms. Blaming “AI” for these choices is rich theater by executives who don’t wish to be pressed on the negative externalities of their engagement-maximizing directives.
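As a hypothetical illustration (invented names and weights – not any platform’s actual code), here is roughly what the last step of a feed-ranking system reduces to. The thing to notice is that the objective – what counts as a “good” post – is a pair of constants someone chose to type in:

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    p_click: float       # an upstream model's predicted click probability
    p_watch_time: float  # an upstream model's predicted minutes watched

def rank_feed(posts: list[Post]) -> list[Post]:
    # These weights encode a product decision: engagement above all else.
    # Changing them is a human choice, not a property of "the AI."
    def score(p: Post) -> float:
        return 1.0 * p.p_click + 0.5 * p.p_watch_time
    return sorted(posts, key=score, reverse=True)
```

When an executive blames “the algorithm,” they are blaming those two numbers – numbers their own team chose and could change tomorrow.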

Engineers and product managers behind these systems, whether they sit at Facebook, Google, Bank of America or Aetna, have specific revenue and/or engagement goals to hit, and design their products accordingly. Companies often have good reason not to discuss those goals publicly, and thus deploy stories about their investment in “AI” both as cover and as positive marketing. Politicians and the public at large still eat up a lot of this messaging without much scrutiny, in part because “AI” sounds much less scary than “big data,” though they’re often effectively the same thing.

What are we even talking about?

No one has any idea what “artificial intelligence” even means.

One group still profoundly dubious about “AI” that you may not have heard much from is the people who actually study the human mind. Virtually any serious psychology or neuroscience researcher will tell you that we still know very, very little about the human brain or how it works. In most ways, we’re still in the “learning how little we know” stage. Don’t let the Medium thinkpieces by tech people who’ve read a few neuroscience books fool you. (Seriously, if you Google this topic, one of the top results is from a tech growth marketer.)

Our paltry scientific understanding aside, even philosophically there is nothing remotely resembling consensus on what intelligence even is, much less how we would artificially construct it. Psychologists, neuroscientists and philosophers going back millennia have foundered on this question, and it shouldn’t come as a surprise that Silicon Valley is nowhere near answering it. What most of us conceptualize as “artificial intelligence” still has more to do with science fiction in books, TV and movies than anything else.

Not that there’s anything wrong with that. Imaginative conceptualizing of what computers could do is critical to progress. But a dose of humility is probably due before triumphantly positing an answer to a question that has bedeviled humanity basically since we discovered fire. If you think you know what it means to be intelligent, or aware, or conscious – congratulations! You’ve stumbled on one of the greatest revelations in the history of humankind. Go write a paper about that.


A future mediated by engagement metrics

In an era when many of our political leaders, particularly in America, are 60- and 70-somethings who barely know how to use email, there is an understandable reluctance to entrust them with regulating technologically sophisticated industries. And for the most part, they don’t! Most of the political class, with some notable exceptions, is happy to let the tech lobby write its own regulatory scheme and pass it in exchange for contributions.

But we don’t need to “regulate AI.” As we’ve seen, “artificial intelligence” is mostly a constructed catch-all term for lots of different types of technology being deployed into all sorts of areas of our lives. What we need are protections focused less on the uses of specific technologies, which change rapidly, and more on their actual effects on people. Our concept of “freedom” must expand to include not only the liberty to do certain things, but also liberty from the effects of certain technologies at scale.

Along these lines, I’ve developed a deep respect for the EU Commissioner for Competition, Margrethe Vestager. Her vision for a market-led technological future, guided by strong, specific consumer protections enforced by informed regulators, sounds deeply attractive in an era when so many American regulators are effectively captured by the industries they’re supposedly watching over. If you read Vestager’s interviews with The Atlantic’s Franklin Foer and Kara Swisher, it’s really hard not to be impressed with her level of thoughtfulness and sophistication on the issue – especially as compared to that of her American counterparts.

I’ve written before about regulatory areas American policymakers should consider (“Four Boring Ways to Regulate Big Tech”). Chief among them are basic consumer protections that would give individual Americans some means of legal redress over the dominant software platforms that increasingly mediate everything about our society. We don’t yet know what the long-term effects will be of a society where every person’s every errant word and act since adolescence is stored and searchable forever, but most people find it unattractive. (I found my friend Can Duruk’s post on this, “An internet with an elephant memory,” particularly compelling.)

We don’t need to surrender to that, though, any more than we must throw up our hands at the supposedly inevitable march of “AI” to make decisions over our lives. Just remember that “AI” is not always what it seems. Behind it are always humans making specific decisions in line with their incentives. We should all get a say in what those are.
