Part 2
Over the course of my life, the line between science fact and science fiction seems to have blurred as the capabilities of artificial intelligence have grown. As I described in last week’s post, [1] our existing technology can only complete tasks (increasingly impressive tasks, to be sure) assigned to it by humans. But our fascination with intelligent machines produces fantastical stories about what they could be capable of if they matched and even surpassed human abilities. Those stories raise bizarre questions of ethics and morality – including the inherent rights of self-aware beings and the legitimacy of human relationships with them.
In some cases, these self-aware machines are benevolent and simply want love, acceptance, equal rights, and/or freedom. In other cases, they rebel against their creators with the goal of eradication (often as a result of their subjugation by humans). Either scenario creates an us vs. them (man vs. machine) narrative with a power imbalance, if not all-out war, with the edge going to the entity with the greater intelligence. While it is currently in development, Artificial General Intelligence [3] still does not exist in the world of science fact, which raises the question of why it has been so prevalent in the world of science fiction for so long.
Self Actualization or Imitation?
It is possible that the allure of this subject for me stems from my childhood fascination with (and – let’s be honest – crush on) Star Trek’s [4] Lt. Commander Data, an android with many super-human capabilities in some areas (encyclopedic knowledge, computational and problem-solving abilities) but clear limitations in others (emotions, intuition). His character arc over the course of seven seasons showcased an ever-present fascination with the human experience and an ever-growing ability to function in a more-human way. The season two episode “The Measure of a Man” came right out and asked the question “does Data have a soul?” as he participated in a trial that would determine his rights. [5] That concept has clearly stuck with me since that impressionable age, as I have difficulty consuming science fiction that involves the unethical treatment of sentient machines.
Over the past three decades I’ve seen computers beat humans at chess [6] and Jeopardy!, [7] which struck me as cool but not concerning: they have a super-human ability to crunch relevant data to accomplish those tasks. Present-day computers are creating music, [8] art, [9] and poetry, [10] which is impressive but still not surprising: given access to our global recorded history, they can generate similar and surprisingly human-seeming content when prompted to do so. Now there are instances of chatbots being trained to mimic human emotions to fill gaps in social relationships [11] and professional services (such as talk therapy), [12] which is interesting in function but disconcerting in application: less because of what the AI is capable of doing (it’s still responding to text-based prompts) and more because of how humans are relying upon it.
In none of these real-world situations do we see Artificial Intelligence undertaking an activity for the sake of doing it. Unlike Commander Data painting or writing poetry in order to express himself or better understand the human experience, even the most creative applications of AI result only in (admittedly increasingly realistic) outputs prompted by humans. And the more realistic these results seem, the more inclined we are to rely on them. Unfortunately, now that we have AI available to do research, problem solving, communication, and even art for us, we are spending less time doing those activities ourselves, and when we do, we are doing them differently.
Adapting Circuitry
We tend not to think of our brains evolving much over time, at least not after the period of massive neural plasticity in early childhood. However, even in adulthood, our brains have an incredible capacity for learning, and they even change structure at the cellular level depending on what we’re learning. If we’re learning new things and building new skills, the neurons in our brains forge new synapses to help retain those lessons; if we’re not making use of certain facts or skills, the related synapses shrink through a process called “synaptic pruning.” [14] Becoming rusty in a foreign language is a great example of what happens when those synapses go unused after you stop practicing a skill – and we can become similarly rusty in skills like concentration, deep reading, and critical thought.
One of the Facebook posts that served as a catalyst for this blog series linked to an Atlantic article from 2008. In that article – now old enough to drive – the author describes how advances in research tools (specifically internet search engines) have changed not just how we do research but also how our brains function as a result of adopting those tools. I used to spend days poring over books deep in library stacks before Google was a household name; now I “‘power browse’ horizontally through titles, contents pages, and abstracts going for quick wins” when researching new subjects for my blog. [15] (Ouch.) Around the time that article was published, I would spend hours at my computer diving down Wikipedia rabbit holes (notably before I joined social media); now I can barely make it through a single article on a subject that interests me.
To be fair, there is now far too much information in the world to sort through with a human brain – at some level we must rely on AI search functions simply to find what we’re looking for. But we must also recognize the tradeoff: relying on someone or something else to summarize or interpret information relinquishes some of our own involvement in deciding what information is critical in the first place. And that is a level of discernment that is, as far as I can tell, uniquely human. The more we rely on AI search results or AI summaries of search results, the less we challenge our brains to pay attention, be curious, be skeptical, or find insights by making connections between seemingly disparate topics. In short, while machines may not be gaining human-like critical thinking skills, we humans are at risk of actively diminishing our own capacity for those skills through lack of practice.
In focusing on mental capabilities and the human capacity for discernment, I’ve barely touched on the increasingly prevalent examples of AI-generated art, music, and poetry – things that are distinct expressions of human creativity. Despite one chat bot’s claim that “poetry helped me become myself,” [17] AI is not creating art for the sake of art: AI is creating art at the behest of humans. That fact is an important one to remember in the framing of any man vs. machine conflict in which the machines in question are still controlled by man. And that’s where we’ll pick up next week.
In the meantime, do you have an AI tool you can’t live without? What is it and why? (And what did you do before it existed?)
Thanks for reading!
[1] https://radicalmoderate.online/man-vs-machine-ai-friend-or-foe/
[2] https://www.youtube.com/watch?v=ex3C1-5Dhb8
[3] https://en.wikipedia.org/wiki/Artificial_general_intelligence
[4] https://www.imdb.com/title/tt0092455/
[5] https://www.imdb.com/title/tt0708807/?ref_=ttep_ep9
[6] https://www.chess.com/article/view/deep-blue-kasparov-chess
[7] https://www.ibm.com/history/watson-jeopardy
[8] https://radiolab.org/podcast/91515-musical-dna
[9] https://www.youtube.com/watch?v=3YNku5FKWjw
[10] https://www.goodreads.com/book/show/78530406-i-am-code
[13] https://www.startrek.com/news/ode-to-spot
[14] https://www.youtube.com/watch?v=5KLPxDtMqe8
[15] https://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/