On the Topic of AI
In the article These Women Tried to Warn Us About AI by Lorena O’Neil, the reader is presented with concerns about the media used to train large language models (LLMs), drawn from the experiences of five women: Timnit Gebru, Rumman Chowdhury, Safiya Noble, Seeta Peña Gangadharan, and Joy Buolamwini. Their concerns specifically highlight the judgments made in selecting the datasets the LLMs were trained on, as those choices often lead to biases and harmful responses. The article also shines a light on the actions Google took in terminating Gebru, who was seeking further approval for an internal investigation into the dangers of LLMs, and it details the ensuing aftermath. That termination led her to found the Distributed AI Research Institute (DAIR), where Gebru continues to conduct research on LLMs and the communities harmed by AI.
AI has been rapidly integrating into all our lives, without much concern for how the information it provides is collected or how censorship is managed. O’Neil emphasizes that these five women had been raising concerns about LLMs long before ChatGPT was open to the public. Key concerns include the societal effects of the models’ biases. For instance, O’Neil highlights a moment when a group of California scientists fed prompts into GPT-2 and were given different responses about employment depending on whether the description given was of a man, a woman, a white person, or a Black person. The “woman” and “Black man” prompts received explicit responses regarding their “employment,” since the LLM was trained on material from Wikipedia and social media. This was not something I found surprising. Back in 2016, Microsoft released an AI chatbot known as Tay that was aimed at engaging with Millennials on Twitter. Hours later, the chatbot was taken down: because of its unfiltered nature, it was bombarded with and influenced by racism, and it had even made posts denying the Holocaust and calling 9/11 an inside job. Peter Lee, then a corporate vice president at Microsoft, soon after issued an apology, stating, “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.” Peter Lee has since become the president of Microsoft Research.
Since then, one of the ways tech companies have attempted to solve the problem of unfiltered information has been by contracting content moderation services. While O’Neil describes this practice only briefly, Adrienne Williams, Milagros Miceli, and Timnit Gebru offer an in-depth look at it in the article The Exploited Labor Behind Artificial Intelligence. There, they describe Sama content moderators in Kenya, contracted through Meta, who must decide whether a video is violent within fifty seconds while being monitored through surveillance software, with no exceptions. This leads the moderators to develop fear, anxiety, and trauma, which ultimately lead to mistakes and termination. It echoes Silicon Valley’s elitist mantra, “Move fast and break things.” Wealthy CEOs of these tech companies are exploiting human labor overseas, with no regard for the moral and ethical implications of treating humans this way, all in an attempt to make AI moral and ethical.
Moving back to O’Neil’s article, the target audience consists of students in higher education and individuals working in the corporate world. In education, the quality of the work a student produces is measured in order to bring attention to the areas that require improvement. That cannot be done when AI is the main author of the assignments and, in some cases, all the work students do is change the wording to make their plagiarism less obvious. It becomes just as dangerous in the corporate world when employees rely on AI to do the majority of their work. AI has become more than just a tool to these individuals; they are exploiting this growing technology at the expense of their own presence in the process. They have given up the wheel and taken a back seat in their own education and chosen profession, stifling their growth.
Looking more closely at the implications AI may have for jobs brings job security into question. As explained in Thinking of Laying Off Staff? Here’s Why Job Cuts Might Be Bad for Your Business by Craig Kielburger, laying off staff has become a common way to protect the bottom line for short-term financial gain. The writers’ strike of 2023 is just one example of this mindset. With Hollywood production companies’ strategic move from television to streaming services, members of the Writers Guild of America saw their residual income diminish to unfair wages under this elitist system. That set off a growing, classist shift in how the importance of writers was viewed when it came to the use of AI in the workplace. CEOs believed AI was a way to cut costs, but at the expense of thousands of jobs. This left members of the WGA with no option but to strike for their livelihoods. The strike lasted 148 days, concluding with fair negotiations that included protections against AI and better wages.
On another note, if we were to consider AI our children, then this quote from Malcolm X may apply: “Only a fool would let his enemy teach his children.” It came to mind at the beginning of the article, when Gebru mentioned her interest in image analysis and how there were only four or five Black people at an academic AI conference. It brings into question the lens these LLMs use when looking at areas such as culture, race, and media. In those areas, AI can only see from the outside looking in, by way of stories that may help or hinder. This reminds me of season two, episode one of Star Trek: The Next Generation, titled The Child, in which an alien entity comes into contact with the crew of the USS Enterprise and, in order to learn their way of life, implants itself in Counselor Troi (Marina Sirtis) and becomes her rapidly aging child, Ian (Josh Bell, Zachary Benjamin, RJ Williams). In the end, Ian chooses to die, informing Counselor Troi that it wished to know what it was like to be human and live among them, and departs as a glowing light. It also reminds me of season two, episode nine, The Measure of a Man, in which the rights of the android Data are put on trial. Data refuses an order from Commander Bruce Maddox to be disassembled and studied, as Maddox sees Data only as a one-of-a-kind machine rather than a man of free will. AI is nowhere near the level of advancement depicted in these science fantasies, but it does raise the question of how much of our learning comes through our everyday human experiences, through both nature and nurture.
The only answer I have to offer is in the form of a poem from the documentary Meanwhile. It goes, “When I see what you see and you see what I see, the space between us disappears. But I am in despair, wondering if you went through what I did to see.” Lessons in life are taught early in our years, but not necessarily understood. It is not until we go through an experience that lets us relate to the message that we tend to understand the importance of those lessons. The rush to understand and improve AI has come at the cost of our humanity, hoping to see how far it can go and disregarding any toes that may be stepped on. Each toe is marked with an -ism or a bias; if we continue on this path, we will be moving toward a future, but it will not be our collective future. Despite this doomerist mindset, there is positivity to take away from O’Neil’s article as a healthy media message. It leaves the reader to consider the cost of our humanity and free will in relation to our overreliance on LLMs.
There is a moment in the article when LLMs are referred to as “spicy autocorrect,” and it reminded me of my life before autocorrect was a common tool. I think back to making the final three at spelling bees in grade school and being able to spell out words confidently. But from high school to now, I have developed an overreliance on autocorrect, and I second-guess my spelling of certain words, regardless of their simplicity. Since the debut of ChatGPT, I have seen the use of AI in a similar way. If I were to use it to write term papers, discussion posts, and replies, then what would that leave my brain to think about? I would not be capable of participating in class exercises or maintaining a conversation during lectures over material I was meant to study and write about. I would likely grow insecure and shut down over my lack of integrity, knowing that I was not holding myself accountable to the standards placed on me when I enrolled in this institution of higher learning. It reinforces my idea that overreliance on AI leads to entropy of the mind. Being unable to think for oneself leaves an individual vulnerable to manipulation. This concern was the reason I was so hesitant to try AI for the first time.
My first exposure to ChatGPT was when my younger brother used it to figure out how to change the brake drums on the pickup truck we were working on. It gave him an in-depth, bullet-pointed description. Even then, I still felt that using it would only hinder me, since time and again technology is built as a tool but quickly becomes a source of distraction that separates us from ourselves. The cellphone is one example. What used to be a way to communicate by voice over long distances has become a seldom-used app on a social media device that feeds insecurity and a need for validation from others. I have seen how this tool influenced me over the years; it used me more than I used it. Later, when I became aware of the content the algorithm was feeding me, I realized how to use it to my advantage. Since then, I have deleted my Facebook page, and my Instagram algorithm mainly shows me content and quotes from philosophers and authors such as Marcus Aurelius, Sun Tzu, Franz Kafka, Ernest Hemingway, Aldous Huxley, and James Baldwin. But this is a line I have to be aware of and maintain, for the algorithm also tends to show me quotes and clips from controversial figures in the Manosphere like Andrew Tate and Jordan Peterson. These characters play into the insecurities of young men who do not see the value within themselves and believe the solution lies in materialism and in a classist mindset that determines how to treat others based on wealth, property, and occupation. This is something bell hooks described as consumer capitalist culture.
Being aware of how AI can be abused, I have been careful in its use and have relegated it to simple tasks in my life. I have often used it for clarification on tasks like how to use certain features in editing software such as DaVinci Resolve when I did not have the patience to sit through a YouTube video or comb through forum posts. I have also used it to try to understand the process of obtaining my father’s visa, as the person who had been helping me when I first started passed away. That was a task where I could not take information at face value, and it still required me to do the research on my own, especially with this changing political climate regarding immigration. That being said, I refuse to allow it to be my voice when I have always held the pen of my fate in my own hand. That is a power that is often overlooked, and one sought after by individuals seeking to control others. But as said in my favorite childhood book by Rodman Philbrick, “You are the last book in the universe.”
O’Neil concludes the article by explaining to the reader that the future of AI will require an accountability and awareness that were not present at the beginning. AI is not something to be feared, but rather something to be understood, so that we can make meaningful progress in how it is built and what datasets are used. These Women Tried to Warn Us About AI is a powerful reminder of the importance of planning and execution when it comes to DEI, communication, and asking questions, and of how moving slowly and steadily toward progress leads to better results in the long run.