AI, Education, Software, and the Future
I have wanted to write a blog post on the impacts of AI for a while now. AI is the hot topic of the moment, and has caused what many describe as another economic bubble[^1][^2][^3][^4]. I am by no means an economist, rather a software engineer and student, so I would much rather write on the topic from the perspective I have insight into. Whether one likes it or not, AI is rapidly changing the landscape of education and work. I am not going to pretend to be unbiased, nor am I going to downplay the fact that there are already incredible uses for this technology; rather, I wish to provoke some serious thought about how people are using these tools.
Education in Education
For context, I am currently a student at the University of North Carolina at Chapel Hill, studying business at Kenan-Flagler. Despite the small "ChatGPT can make mistakes. Check important info."[^5] callout at the bottom of ChatGPT dot com, a staggering number of people I interact with are unaware that ChatGPT (and, by extension, other GenAI products) is not a search engine, and a subset are unaware that it cannot truly reason.
Last week, my product development class had a guest speaker talk about how revolutionary GenAI already is, with the distinct undertone that anyone not using AI is going to be left behind. When the speaker opened the floor, they were unable to field questions about how one can trust GenAI with mission-critical work in light of hallucinations, nor the professor's question about the problems of AI copyright infringement. After class, the person sitting directly in front of me asked their neighbor, "What is a hallucination?" The neighbor did not know.
This is not an isolated incident. I have been in many situations where, working on a group project (which KF designs to emulate a consulting environment), teammates immediately ask ChatGPT to answer the prompt verbatim. While this in and of itself is not a problem, the issue arises when group members fixate on the first answer or idea the bot comes up with, employ plenty of confirmation bias during the research phase, and often seem unwilling to consider information that conflicts with the artificial conclusion they have already espoused. Anyone who has worked on a team is familiar with this kind of anchoring, but why should it hold when the initial idea came from a known-unreliable source rather than one's own thinking?
In by far the most memorable instance of this, a group member threw a 13-page document of research we had collected over a week into ChatGPT and asked its "opinion" on our assessment that the company we were researching was failing. While I cannot find any studies on the phenomenon, my experience and conversations with heavy users suggest that ChatGPT is rarely pessimistic and almost never negative unless a general "consensus" has already been reached (e.g. Enron). When ChatGPT's response disagreed with our findings, this group member petitioned to change the angle of our entire presentation, choosing to trust the bot over the five other humans in the group.
Higher education is a massive monetary and temporal investment, and it exists to help develop one's ability to critically examine and synthesize information to solve problems. Humanity innovates and moves forward via critical thinking, and while college is by no means the definitive way to improve this skill, it is a tool that almost half of the US population entrusts with their development[^6]. Different colleges and universities are taking different approaches to the era we have entered, as they must weigh the risk of "cheating" against the risk of harming their students' future success in an AI-driven world. I am not sure these institutions are prepared to balance this complex equation, potentially leaving their constituents unequipped for the future, or overreliant on AI.
This puts the load on students to determine how much to use the technology and what to use it for, but without a true understanding of its limitations, as I have observed, students run a real risk of dependence. Incorporating AI into one's workflow and thinking process may prove to be a vital skill for the future, but one must ask themselves whether they are using the tool to enhance their critical thinking or to subvert it.
Enhancing Human Thought
One of the most common things students are told regarding GenAI is that "AI should help you think. Not think for you."[^7] While this is a great ideal, reality often disappoints in my experience. The first instinct of at least one student in every one of my group projects over the past two years has been to instantly throw the prompt into ChatGPT. Aside from the cognitive biases and anchoring I mentioned, many of my acquaintances will describe doing the exact opposite of the GenAI usage guidance.
The most common usage I hear about is what I like to call the "middle-school research project" approach, where someone asks ChatGPT for an answer to the prompt, then rewrites it in their own words. This approach, while much more palatable than the pedestrian prose GenAI tends to output, still lacks a certain depth that I would expect of aspiring professionals. My exposure to this is primarily via class discussion boards, a staple of post-Covid education, where posts will lack nuance, many will say nearly the same thing, and some will even contradict the readings or themselves in extreme cases. When asked casually, many of my peers will readily admit to having rephrased a ChatGPT output, rationalizing that less time spent on assignments leaves extra time for their More Important Thoughts™️.
GenAI can be an incredible aid to ideation and creativity, but it is still only that - an aid. The slippery slope begins when one allows AI to perform the entire thinking process, rather than forming personal opinions and preferences and maintaining healthy skepticism. In the past, one would search for information and judge for themselves whether a source was trustworthy, but recent developments have delegated this process to the search engine and, now, to AI tools. Unlike search engines, however, GenAI tends to strip away the context around the information it synthesizes, responding to queries authoritatively even when it is wrong. In this future, maintaining reservations is more important than ever, and one must intentionally choose to think critically about the information they receive. This ability will differentiate those who rely on AI tools from those who are able to truly use them.
The Copilot Glaze
I will now (un)gracefully pivot to my own experience using generative tech, from the perspective of a software engineer. On July 13th, 2021, I was given access to the GitHub Copilot Technical Preview. When I started using it, its capabilities blew me away. I used it every time I wrote code, letting it fill in the most mundane aspects of programming so that I could focus on the fun part - architecting the system's interactions.
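To make the "mundane aspects" concrete, here is a hypothetical sketch in Rust (the struct and its fields are invented for illustration, not taken from any real project): the kind of repetitive boilerplate a tool like Copilot will happily finish after the first line or two.

```rust
use std::fmt;

// An invented example struct; the interesting design work is deciding that
// this type should exist at all, not typing out the impl below.
struct ServerConfig {
    host: String,
    port: u16,
}

// The mundane part: a by-the-numbers Display impl that completion tools
// tend to autofill almost verbatim once the signature is started.
impl fmt::Display for ServerConfig {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}:{}", self.host, self.port)
    }
}

fn main() {
    let cfg = ServerConfig { host: "localhost".into(), port: 8080 };
    println!("{cfg}");
}
```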
Over the years, however, my disillusionment with the technology grew. I found that most of the catastrophic bugs being pushed to production could be traced back to the little 'roided-up autocomplete I had installed in my editor. I can blame no one but myself for these bugs slipping by; it's a pair programmer after all, not an engineer, right? Even as I used it and became more aware of its limitations, my brain still "checked out" when Copilot finished my line. Alongside the "Copilot Pause," the phenomenon where a programmer types a bit of code and then waits for Copilot to do the rest (coined by ThePrimeagen, I think), I would like to propose the "Copilot Glaze," where your brain simply glazes over the generated code and moves on to the next line. Even aware of my own tendency to do this, I still struggled to read the output character by character, and opted to trust more often than to check.
Over the past few years I have slowly come to realize that using Copilot, while improving the raw speed at which I can output lines of code, stands in direct opposition to my goal of becoming the best programmer I possibly can. In my two primary languages, TypeScript and Rust, Copilot usage became a dependence, reducing the creativity of my solutions and rotting away my memorization of the core libraries as I simply leaned on the tool to complete the syntax. Turning Copilot off in my editor has forced me to rememorize syntax that had faded, and has led me to come up with more creative solutions to problems I had leaned on Copilot to complete over the years.
In the future, AI tools may become proficient enough at outputting the hundreds of programming languages in use that memorization need not apply, but the way Copilot and other tools currently operate is by completing solutions, not just syntax. The field of computer science moves forward through truly creative solutions to problems, something I was not getting nearly as much practice with while Copilot was enabled. At the moment, using these tools is akin to guiding a junior engineer through implementing code, robbing one of the chance to try new methods when a solution Copilot has decided is "best" would also fit. The current generation of AI tools is not able to create anything truly novel in the way an expert can; they merely create "unique" mashups of the information they were trained on. Whether the technology will ever be able to generate truly novel ideas remains to be seen, but human ingenuity is currently what drives fields forward, and it should be trained and practiced in the same way companies work to improve AI models.
Using a GenAI tool like Copilot for programming and other work is not a binary question of "good" or "bad," but rather a question of what one intends to get out of the tasks they complete. Jumping into an open source Ruby project to quickly fix a bug led me to temporarily re-enable the tool, and it saved me a decent chunk of time when I was simply trying to get the app working. My expertise in programming lies elsewhere, and in that moment a solution was more important to me than learning Ruby syntax just to update 10 lines. When working within my core competency, however, usage of GenAI is antithetical to my improvement, and I have bettered my skills through its absence.
I would like to ask the reader to think deeply about why they are using GenAI. Is your goal to use the technology to broaden your capabilities across a wide range of subject matter, or are you working to become an expert in your specific field? Can you balance the efficiency gains you might receive against your learning and improvement, or will you become dependent on the tool, unable to perform without it?
The Future
To put it lightly, I am concerned about a future where many people don't engage deeply with content, opting to let GenAI do the "thinking" for them, whether in education, code, or the workplace. Technological advancement throughout history has come about through the human capacity for critical thinking, and overreliance on AI threatens to inhibit future creativity. Analytical reasoning is a practiced trait, and those aspiring to success must reconcile the potential gains of AI with their own need to keep developing their thinking.
One must seriously consider the question of who they are when they use these tools, and what they desire to get out of the task they are trying to complete.
Footnotes
[^1]: https://www.nytimes.com/2024/09/23/technology/ai-jim-covello-goldman-sachs.html
[^2]: https://www.washingtonpost.com/technology/2024/07/24/ai-bubble-big-tech-stocks-goldman-sachs/
[^3]: https://www.bloomberg.com/news/articles/2024-07-18/goldman-s-top-stock-analyst-is-waiting-for-ai-bubble-to-burst
[^4]: https://edition.cnn.com/2024/08/02/tech/wall-street-asks-big-tech-will-ai-ever-make-money/index.html
[^6]: https://www.census.gov/data/tables/2018/demo/education-attainment/cps-detailed-tables.html
[^7]: https://provost.unc.edu/student-generative-ai-usage-guidance/