ChatGPT Did My Homework: A Parent’s Guide to AI, Cheating, and Character
I’m a humanities teacher, but for some reason in our house it’s always me who ends up helping the kids with their math homework. That was just fine for a while. I was good at math in school. But now I have a senior, a sophomore, and a seventh grader, and it’s a whole different ball game than when it was fourth grade math. Half the time I don’t know the answer anymore, and a few weeks ago I was stuck and said, “Gimme a sec.”
And then just typed the question, verbatim, into ChatGPT: What is the answer to this math problem and why? Show your work.
It was magical. “Okay, I’ve got it.”
Explanation done. Problem solved. Dad’s a genius.
Now imagine one of my high schoolers in her room at her desk with her laptop struggling through Geometry. Doesn’t get the problem.
If I ask dad, he’ll probably explain it in the harshest way conceivable. If I call my friends, I’m the “dumb one.” I can’t wait until tomorrow, because I have math second period! Wait. What if I just type the question into ChatGPT?
Problem solved. Work shown.
And what if I just type the next problem into ChatGPT? Done!
I could finish my math homework in ten minutes! Done.
Imagine facing this temptation as a kid. Could you have beaten it as a sixteen-year-old?
Educational institutions have taken a one-two punch in the last five years when it comes to academic integrity.
COVID was the first punch, breaking down all sorts of walls of conscience in American students. Fifth graders with laptops were asked to take online tests with every answer immediately available an ALT+TAB away and zero accountability to tell them no. Students were having open conversations on FaceTime with their friends during class while on mute. Students were saying “Oh, my video doesn’t seem to be working” during a Zoom class and then going outside to play with their friends. The fabric of the American academic conscience became threadbare during COVID.
Then, about a year later, those same kids with seared consciences were handed ChatGPT to do all their work for them.
So, how do we parent in this new world of AI and with this generation of kids? It’s not 1997 anymore, folks. Here are five tips as you navigate this new landscape.
Tip #1: Talk about it.
Often we parents throw up our hands and say we don’t get it. It’s a new generation. “Kids these days.” “Screenagers.”
Let’s be real: this cop-out is not good parenting.
Every generation of parents ever has been able to say this about their kids. Part of our job as parents is to ask. To learn. To figure it out. And to guide them, very clearly, in what is right and what is wrong. If you ask enough questions, you will see how they process things like you did as a kid, just in a different context. But you have to have the conversation.
So, have you talked with your kids about where the lines are when it comes to cheating?
Here are a few questions to consider talking through.
For your Lower Schooler (probably grades 3-5):
- Do you know what cheating is? What might that look like at school if someone cheated on something like a project or on homework?
- Why do you think cheating is against the rules?
- Do you think the Bible talks about cheating? (Ex. 20:16, Prov. 11:1, Gal. 6:7, etc.)
For your Middle or Upper Schooler:
- Have you ever been tempted to cheat? How? Why?
- Why do you think cheating in school is against the rules? (Ex. 20:16, Prov. 11:1, Gal. 6:7, etc.)
- What are some rationalizations you’ve heard for cheating? Are any of them valid?
- What are some different ways you’ve seen cheating happen or some ways you’ve been tempted to cheat?
- Have you ever used ChatGPT? How could ChatGPT be used to help you learn? How could it be used to cheat or cross the line – or just feed you answers and shortcut learning?
- What if ChatGPT could just write your English essay for you? If it could, would that be cheating? What if it could just solve your math problem for you – and show the work? If it could, would that be cheating?
We are having these conversations at school. But it’s always best when the moral foundation starts at home. So please, partner with us in wrestling with this very real and very important moral issue for this generation.
Tip #2: Discuss the WHY.
Tip #1 encourages the start of a conversation. Lower School parents, that might be all that’s needed with your kids – for now. These next four tips are primarily for parents of teens, but they’re worth considering even if yours are younger; those years will be here in the blink of an eye. Parents of teens, listen up.
Your kids are using AI. It’s happening. They are making goofy image modifications. They are using it to find quick answers about sports stats and which ice rinks have open skate this weekend. They are also using it to take shortcuts. To build a works cited page of fictitious sources they haven’t read. To do their math for them. To make a poster for them so they don’t have to do the reading. It’s happening, and you need to talk about it at home while we discuss it at school.
But here’s the rub. We can’t answer any of these questions unless we think about the WHY. And it’s the same with our kids. They are completely powerless to navigate this (or any!) ethical minefield unless they, too, are asking WHY.
Let’s start with the positive. AI, like ChatGPT for example, is awesome. WHY?
AI is incredibly efficient.
Our school recently wrapped up TRAINucopia. Wouldn’t it be fun to have images of our own teachers in inflatable costumes so students can get excited for the competition? Two years ago we would have had to grind these out, very slowly, in Photoshop. Instead, it took us about 20 seconds to get Mr. Chavez in a lobster costume. And it even did a weird thing with his tie! Awesome. Time saved; kids inspired.
I tend to be too wordy in my emails (and my blogs…sorry…). Sometimes I ask ChatGPT to cut my emails by 50% to save the recipient time. Twenty seconds of proofreading and done. Awesome.
AI does wonders for analysis.
My oldest has committed to playing volleyball next year in college at Bethel University in St. Paul. Part of our struggle in the athletics recruitment process was finding schools that might be interested in a kid with a solid vertical who is only 5’7”. At first we were scouring the internet. Then I asked ChatGPT for the ten best Division III or NAIA colleges in the Midwest with a strong nursing program and a player 5’7” or shorter on their active varsity volleyball roster. Tada!
We (and you, probably) use it to complete even more complex tasks at work. At CHA we use it for marketing analytics. For large-scale curriculum review. For survey synthesis. It’s incredible. For all you know, AI may have written this article! (Just kidding; this is all me…so far!)
AI can be a great tutor.
More and more tools are coming out that can be incredibly helpful for learning. Imagine an AI tool, for example, that can listen to a child reading out loud and then correct them if they miss or mispronounce a word. According to The Economist, Google’s Read Along has shown promise in India, and CoolE Bot has shown promise in Taiwan.
Using AI gives us more time to focus on the important stuff.
As we are more efficient with the mundane and the slog through data crunching and mind-numbing revisions, we can focus on why things are the way they are. On what needs to be tweaked or changed. On how to define and solve problems.
AI is a real blessing, and it’s worth acknowledging that with our kids. They can see that clearly, and we should, too. A Luddite reaction to the entire topic is very likely going to shut down the whole conversation.
Tip #3: Explore the WHY NOT.
Starting with awareness of the WHY helps us effectively navigate the thing that we often want to talk about first: the WHY NOT. Where does AI mess us up? Where can it be used to cross lines that just shouldn’t be crossed? How can we steward this new tool well and wisely?
The process matters.
Google Maps has completely destroyed our ability to know where on earth (literally) we are. My kids – and many adults I know – will drive an hour to get somewhere in Chicagoland and have no idea where they are. Is this bad? I would argue that it certainly is, especially if the next generation doesn’t understand the Rand McNally foundation like the rest of us. If you can’t read maps, whether digital or physical, you can’t fully understand the physical world. The same goes for writing. The same goes for art. The same goes for music.
Challenge: dare your junior or senior driver to not use Google Maps for three weekends in a row.
Smart people think all the way down.
Something I used to always say to my students is “Good reading is un-writing.” If you are going to fully understand what I’m saying in this article, for example, you need to sort out my outline. The thesis of the whole thing. What point I’m on. What counter-point I’m arguing against. Where I might be going next.
In effect, you must be un-writing as you read in order to be smart about all this. But you can’t un-write very well if you’ve never written very well. Good readers think all the way down to the outline because they already know how to write all the way up from one. Good musicians know their scales and tempos. Good artists know the color mix and the medium and the brush stroke. Good cooks know the ingredients. Being excellent takes about 10,000 hours of work, and AI poses a huge temptation to cut corners. In a recent survey in China, 21% of elementary and secondary students alike said “they would rather rely on AI than think independently.” If we just accept AI-produced work – like AI-produced math homework – it’s a guarantee that we will be dumb. This is a fair question to ask your high schooler: Do you think using AI keeps you from getting smarter?
AI will cause a separation between those who are diligently clever and those who are cutting corners.
In several studies, researchers have found that AI is often only effective when the users are strategic, equipped with useful experience and strong judgment. In light of this, we will also likely see a growing gap with AI use: the smart will get smarter as they use it strategically, and those who see it as a shortcut rather than a tool will receive no benefit or, worse, be replaced. Put another way, for a conscientious and responsible student, AI may help; for a student with a procrastination problem or an addiction to video games, it may well compound an aversion to the hard work of learning. How students use and think about AI now matters for their future.
We must recognize that being bored is incredibly valuable.
Often the urge that compels us to pull out our phones in line or while waiting at a light or (let’s be honest) going to the bathroom is fueled by our hatred of being bored. In a Dan Gilbert study (summarized here), participants were asked to sit in a room with nothing to do – except press a button that would give them an electric shock. Incredibly, a majority of participants chose the pain of being shocked over being bored. But boredom actually gives us more meaning and makes us less depressed. More than that, it is often the best means to a new idea, to creativity, to insight. Do your kids (or you, for that matter) no longer see the value of the “gaps” in our lives – of the time to think and to meditate? Is AI killing our boredom – and thus our genius?
Finally, we can’t let AI unintentionally shape our identity.
“Who am I?” is one of the most important questions we can ask. In fact, Descartes would argue it’s the only one we can ask with total integrity! In the last decade, we’ve seen social media eat away at the fabric of our identity. It has been well-documented that the dopamine hits we receive from likes and hearts on social media vie to replace real social interaction in a way that is often negative. I would argue that AI will do the same if we let it replace our thinking. If I’m asking ChatGPT “Why are my friends so mean to me?” or “Why is the Bible true?” or “What is the perfect date?” – without critical thinking – then where do I go? Where are my opinions? And if I am no longer doing the work to have them, then who am I? If I allow AI to do menial tasks and clear the clutter, it’s one thing – that might actually help with my identity formation. But if I’m letting AI do my thinking for me, especially at a young age, I’m missing out on the core of my personal and spiritual formation. A discussion about maintaining the discipline of forming and arguing for opinions – even ones that are the opposite of what ChatGPT has to say – is an important conversation worth having with our kids.
Tip #4: Discuss Some Case Studies.
Sometimes it’s not the best to just jump into the why and the why not. Sometimes it’s easier to just talk about some case studies. If that’s your vibe, try out one or all below:
Case Study #1: Essay Helper
A student uses AI to brainstorm and outline an essay but writes the final draft independently. Is this smart support or academic dishonesty?
Case Study #2: AI Tutor Gap
One student uses an AI tutor nightly; another lacks access at home. Does AI improve learning—or widen equity gaps?
Case Study #3: Hallway Surveillance
A school uses AI cameras to detect fights and vaping. Safety improves, but students feel constantly monitored. Is the tradeoff worth it?
Case Study #4: Teacher’s Shortcut
A teacher uses AI to grade short answers faster. Feedback is quicker, but sometimes inaccurate. Should efficiency outweigh precision?
Case Study #5: A Head of School Who Loves Irony
A Head of School uses AI to generate four case studies for his blog. Is his use of AI in this way hilariously ironic, or is it not practicing what he’s preaching?
(By the way, I did use ChatGPT for those, verbatim. I hope you enjoyed that.)
Tip #5: Same Ethics, Different Temptations.
This final point is an important one. While AI may at times feel like the wild, wild West in terms of moral boundaries, in many ways we are still dealing with a lot of the same issues that Gen Xers, geriatric Millennials, and actual Millennials dealt with in school.
Your work is still your work; others’ work still needs to be cited. I used AI above for my case studies. But I gave AI credit, just like I gave the hyperlinked news articles I used above credit. AI is a source. Use it, but cite it.
Just ask the teacher. For decades, students have been asking, “Can I work with a partner on this assignment?” and “Is it okay if we look up the odd # answers in the back of the book?” Now, students can ask, “Is it okay if we use AI for this assignment?” And it fully depends on…you guessed it…the WHY. The answer might be “Yes.” The answer might be “No.” But it’s still a question that can (and should) be asked.
Cheating is still cheating. If I use AI to write an essay, when the purpose of the assignment is for me to write my essay, I’m cheating. In 1994, if I convinced my classmate to write my essay for me? Also cheating. Finding one on Google in 2008? Still cheating. It’s the same thing. The line is the same, and just because it now involves Artificial Intelligence doesn’t make that line terribly mysterious.
Conclusion: Keep the Conversation Going
Artificial Intelligence is now a part of our world whether we like it or not. Our kids – especially our Lower School kids – are going to grow up not remembering a world in which AI didn’t exist. So it’s time to start (and probably never finish!) the conversation as we seek to train up our children in the way they should go (Prov 22:6).
—J.T.
Joe Torgerson (M.Ed., University of Missouri, B.A., Bethel University) serves as Head of School at CHA. Drawing on nine years of high school humanities teaching in addition to nearly a decade of U.S. and international administrative leadership, Joe guides the school with a global perspective rooted in a lifelong commitment to Christian ministry. He is a husband, father of three, and an avid enthusiast of athletics, board games, music, and dim sum.


