Published by the Students of Johns Hopkins since 1896

Reflecting on the moral qualms surrounding AI

By KELVIN QIAN | February 27, 2020

PUBLIC DOMAIN: After four years at Hopkins, Qian feels disillusioned with the field of AI.

I came to Hopkins in 2016. That year, artificial intelligence (AI) and machine learning (ML) were making waves around the world. It seemed like only yesterday that machines like IBM’s Watson had triumphed over humans. Self-driving cars, AI-augmented medicine and smart cities were among the many applications promised to save millions of lives and bring prosperity to many more.

AI and ML were, like their fellow technologies blockchain and CRISPR, talked about everywhere. They were hip, they were hype and they brought promises of a better tomorrow.

When I stepped onto the Homewood campus, I sought out the AI revolution. It was one of my motivations for majoring in Computer Science (CS) in the first place. I had heard, for example, that natural language processing, the subfield of machine learning that revolves around human language and drives Google Translate and so many other apps, was among the most esteemed areas of CS at Hopkins. Studying it seemed perfect for me: I had an interest in linguistics, and here I could apply it to drive forward the AI revolution and change the world.

I remember the night I attended an MSE Symposium talk by Martine Rothblatt, the founder of Sirius XM, where she discussed her book Virtually Human. As the title suggests, the book is about that long-held sci-fi dream of creating a machine in the mold of man: an intelligence that goes beyond mere algorithms to achieve consciousness.

Four years later, however, I feel foolish.

For one, studying the nuts and bolts of ML was less interesting than I expected. I probably should’ve known when I asked a friend what courses to take when studying the subject.

His first answer: Intro Statistics. My response: Huh?

I decided to take the course. Intro Statistics, along with Natural Language Processing, Machine Learning and many other courses, served me well in understanding how AI and ML actually worked. As it turned out, our current ML systems are less like virtual humans and more like advanced statistical packages. And statistics was not exactly my favorite field of math.

Yet that was not the only reason why my AI hype had deflated, for in the past few years the dark side of AI applications has become clear.

One day we read about China’s ML-powered Orwellian surveillance state; the next day we witness the bias that ML systems perpetuate against America’s minorities. One day we celebrate the ML algorithms that Facebook, Google and Amazon develop; the next day we resist their efforts to collect our data and addle our brains with their addictive apps.

We once saw facial recognition as a great hope for security and commerce; now we wear masks to hide ourselves from these algorithms. We scarcely anticipated the effects of AI-created deepfakes and social media bubbles on our society, and we have seen a man run for president just to warn us about AI-driven unemployment.

Now, I wasn’t totally ignorant of these things back then (though I did seek out the AI hype and downplayed any bad news I heard). The Hopkins CS department wasn’t either. In my freshman year we had to read Weapons of Math Destruction by Cathy O’Neil, which warned us about that dark side of AI. Yet the book did not portend the scale of my disillusionment. As the promise of AI grew increasingly tarnished, I decided that it was time to focus on something else.

After a bit of soul searching, I decided to do research with the Programming Language Lab in the CS department. It turned out that I liked studying programming languages more than human ones, and I found the math used in this subfield far more interesting than the statistics and linear algebra used in ML. And by holing up in a mostly academic and theoretical subfield, I can, for a moment, avoid the moral questions posed by applying technology to the real world. But as a computer scientist, I must answer them someday.

Some of my friends believe that we only need to care about developing the technology; the ethics will follow later. But that is backward. Technology is but a tool, one that is nothing without religion and ideology. More than hardware prowess or software sophistication, it is the society in which the technology is built that will determine its successes and failures, its promises realized and its promises lost.

