Artificially Intelligent | Part IV
If the obstacle is the way, what happens to humanity when AI removes the obstacle?
This is Part IV of this series. You can find Part I here, Part II here, and Part III here.
“You have died of dysentery.”
The most formative digital experience of an 80s kid’s life was the once-a-week trip — if you were lucky — to the computer lab. I literally cannot remember a thing we did there except learning to type and playing The Oregon Trail. The teacher would always let us after we’d futzed around with the typing program, almost certainly under the rationalization that we’d learn from the game… which required no typing. I didn’t learn a thing about the actual Oregon Trail from the game other than what I already knew from reading books about Westward expansion like those written by Laura Ingalls Wilder and Mark Twain.
The most formative digital experience of an 80s kid’s life was the once-a-week trip — if you were lucky — to the computer lab. I literally cannot remember a thing we did there except learning to type and playing The Oregon Trail. The teacher would always let us play after we’d futzed around with the typing program, almost certainly under the rationalization that we’d learn from the game… which required no typing. I didn’t learn a thing about the actual Oregon Trail from the game other than what I already knew from reading books about Westward expansion like those written by Laura Ingalls Wilder and Mark Twain.
Thus began my first observations around the use of technology in schools. Things haven’t changed much from my elementary school days. Whatever tech schools adopt rarely gets used productively. Instead, it almost always ends up becoming a distraction from learning.
In high school, I had multiple computer classes in which I wasn’t required to do much. I mainly used those periods to type my papers, so at least they freed up time I could use to read for pleasure. As an adult, I’ve come to know several people who teach graphic arts and digital technology. Before widespread 1:1 device adoption in Districts, their classes became de facto arcades. Kids would go in, address the task the teacher assigned in 10-15 minutes, and then surf the internet. Some kids never did the work; they’d only surf. Everybody passed. It’s an elective, after all, and teachers with a gig that sweet need to keep their enrollment up.
There is, however, one powerful example of technology’s ability to facilitate learning in adolescents. Unfortunately, it’s a class swiftly disappearing from the American high school landscape because of social media.
I’m talking about Yearbook.
As a new teacher RIFed from my first position, I took an English job that came with Yearbook (the red-headed stepchild usually dumped on the language arts department) and kept it for four years. Nobody wants to teach yearbook. It’s the only class with widely seen, outward-facing evidence of what a school can produce. It entails a huge workload that rarely earns kudos but does generate plenty of angry emails at year end.
Here’s the thing: if the kids have to do something with technology— in the case of yearbook, it’s a LOT of doing—they learn. Boy, did we ever learn. I was right there slogging through it with them. While I was exacting about my staff’s reporting and writing, my photography skills were nonexistent. Still, we’d all seen enough pictures to know what a good one looked like, so we knew enough to craft questions whose answers would help us take better ones. We’d level up from there. In the process, my staff learned how to set their cameras up for different scenarios, and learned to prompt Google with solid queries when their setup didn’t result in great shots. On the rare occasion my editors didn’t have a ready answer for a photography or design question, they’d hop online, carefully query, then judge the most relevant results and quickly skim them for the right content. We didn’t have time to get lost in a photography rabbithole—we were always bumping up against deadlines— but because we used the answers we found, those answers stuck.
We understood our mission as memorializing a year of good experiences for 2200 kids. The weight of that was real. I had a 3-year veteran editor-in-chief. She had a well-trained editorial staff, all with at least two years of photography, reporting, writing, designing, and organizing spreads under their belts. I had counseling drop kids from the class if they didn’t do their jobs or made my editors pick up more work than their already monstrous share. We had a clear command structure; everybody knew their direct report, knew their responsibilities, knew the rules around their job, and knew who to go to for help. I was in the weeds with every student, pushing, teaching, cajoling, and threatening. I’ll never forget telling Matt—the best photographer but worst procrastinator on staff—that if he didn’t revise the girls JV Softball page with its sloppy layout and lazy, generic, saccharine copy, I was going to put his name and cell phone number in huge block lettering with a note that this was where the team’s complaints should be directed. (He fixed it.)
A number of those kids went on to publishing careers, most notably my editor-in-chief, a first-generation American. She got a full ride to a university renowned for its journalism program and, immediately upon graduation, a full-time job at a women’s magazine, where she rose quickly to the editorial staff.
My Editor-in-Chief did the work. She could’ve gotten the yearbook done without me being much help, no doubt, as my first yearbook staff had, but it wouldn’t have been great. It was great because she and I had both done the thing. Three years of teaching yearbook for me and one year of staff-level work plus two of editorship for her were our trial by fire. By year four, B and I both knew what we were doing, and when we weren’t sure, we knew enough to ask the right questions and, from there, work out a solution we could teach the team. Our knowledge also meant we could hold staff strictly accountable for doing the thing well too.
For me, yearbook remains the most powerful example of what doing, aided by technology, can mean for kids in school.
Artificial Intelligence undercuts that. It will steal the doing and, thus, the learning from kids, creating a form of intellectual dependence that should make any parent nervous.
I cannot emphasize this enough: if we allow the existing anti-success structures present in K12 schools to be further weaponized against learning by the inclusion of AI in some misguided educrat attempt at “digital citizenship” and/or “21st century learning” (whatever those ambiguous catch-phrases are supposed to mean), we will cede our liberties to the megalomaniacs who think of our children as cattle at best, and useless eaters at worst.
If we let AI dominate public school classrooms, such a classification of our children will become painfully accurate in short order.
The more of this series I write, the clearer it is that AI is destined to replace teachers. Maybe we deserve it for obediently executing educrat-mandated, pants-on-head stupid policy that made an absolute dog’s dinner of our schools, schools that once produced literate, knowledgeable citizens. I think our biggest failure was the complacence with which we accepted 1:1 devices in our classrooms. Damn near every teacher I talked to suspected putting an iPad or laptop in the hands of every kid in school would have devastating consequences, but no teacher refused to use them. Throw in Covid, et voilà, we made ourselves redundant with G Suite for Education and online learning modules like the ones produced by Edmentum and Florida Virtual School.
1:1 devices were the camel’s nose under the tent. As District Superintendents began begging local citizens to voluntarily raise property taxes for bond measures to buy iPads and Chromebooks and Surface Pros, student performance dropped. I won’t claim that digitizing the American classroom is entirely responsible for the drops in performance over the last dozen or so years, but to me, it feels like our compliance with digital “learning” initiatives was a willing march up the scaffold where AI waits to deliver the coup de grâce.
The promise of AI in schools is that it will unshackle students from the limitations of learning at scale, and that promise is twofold. First, the speed of instruction can be individualized: if a student is bright and/or knowledgeable in a subject, she can move along faster; if she’s less skilled and/or knowledgeable, she can move forward at a pace that allows her to succeed, the AI patiently tutoring her in areas of weakness. Second, AI purports to let a student indulge her curiosity, wandering down a self-determined path, but with an LLM shepherd who will supposedly offer only answers that aggregate the best possible information from the vast internet.
Oh, how beautiful that would be!
Unfortunately, anyone who’s worked in the system knows that’s not how AI is going to be deployed in modern K12 institutions. I know the school system. I understand its incentives. I have laid out for all of you how it works in numerous articles here on Substack. I understand, as most of you do, that the purpose of a system is what it produces. To put it mildly, there is no universe where we take our current system, which produces kids who don’t read, can’t write well, and know precious little about history or science or literature, and turn them into wunderkind LLM prompt engineers.
My yearbook students were able to learn with tech because they had so much existing knowledge that they knew what they didn’t know and could quickly refine a question to get the information they needed.
What AI will be used for in schools is not a self-paced, self-directed stroll toward the wondrous questions like “What does it mean to be a Man?” or “What is beauty?” or “What is a Good life?” Instead, it will be a shortcut to answers on a digital worksheet where there are definite right answers, and those right answers are programmed in advance by a group of people parents don’t know, can’t talk to, and, thus, cannot hold accountable.
AI-lite has already been adopted in some schools. It’s called Adaptive Learning and it works like this: the program presents a topic, then tests the student’s knowledge/skill. If a student can reproduce what he learned accurately, the program moves him along to the next module. If the student struggles, it will patiently reteach the topic until he gives the “right” answer consistently. Sounds very vanilla and acceptable, of course. And, I think, in a math or engineering course, it absolutely can be.
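If that description sounds abstract, the underlying logic is almost trivially simple. Here is a minimal Python sketch of the adaptive-learning loop as described above; the mastery threshold and the helper functions are illustrative assumptions, not any vendor’s actual product:

```python
# Minimal sketch of the adaptive-learning loop described above.
# The mastery threshold and helper functions are illustrative
# assumptions, not any specific vendor's implementation.

MASTERY = 0.8  # advance only after, say, 80% of answers are "right"

def run_adaptive_course(modules, present, quiz, reteach):
    """Present each module, reteaching until the student hits mastery."""
    for module in modules:
        present(module)
        score = quiz(module)
        while score < MASTERY:    # student struggles? loop back
            reteach(module)       # patiently re-present the material
            score = quiz(module)  # ...until the "right" answers come
        # mastery reached: the student moves on to the next module
```

Note what the loop optimizes for: consistent reproduction of pre-programmed right answers, not understanding. Everything that follows turns on who gets to define those answers.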
While some schools are much closer to full implementation of 1:1 AI tutoring, most are using something akin to it, though in a kludgier, old-school form: online credit recovery. If a student fails a class required for graduation, the school will enroll him in an online credit recovery class, which is faster and cheaper than summer school. The software runs the student through multiple videos and texts and then tests him on them. Once he clicks through the module and passes the tests, the new grade earned in credit recovery replaces the original grade he earned in the class.
School Districts monitor credit recovery loosely, if at all. Students cheat. They click through the modules and videos as quickly as possible to access the tests. They copy-paste test questions into another browser tab, or use image search to take a picture of the question and let their phone provide the right answer. The laziest and/or least savvy kids simply retake the test as many times as they must to pass, retakes being unlimited. Teachers have been complaining about credit recovery for years, but Districts don’t listen, because online credit recovery is a cheap way to raise graduation rates.
It’s also an easy way for a kid to get straight As in core classes. My favorite credit recovery story happened during COVID. A very wealthy family I was acquainted with sheepishly admitted that their son had to take credit recovery for his junior-year US History course because he’d earned a D in semester 1 and a C in semester 2, which just won’t do when your family expects you to enter a prestigious university straight out of high school. The kid enrolled in online credit recovery. He finished both semesters in one three-hour session on a Saturday afternoon, replacing both grades with As.
In credit recovery, learning never happens, the process faked, the doing left undone. Of course, the one thing that matters to school leadership — academic data — looks good.
In yearbook, you can’t fake anything because everyone is watching. Your work affects them. What you know and can do makes a difference to everyone on campus. My learners were tested by the doing, their results tangible. Staff left yearbook with enormous confidence because they knew so much, including how to use technology to increase their competence. It was the real-world practice, the doing, that made the learning stick.
Perversely, what matters for modern K12 isn’t learning, it’s checking boxes. And while the ostensible purpose of AI is to unlock our species’ potential by alleviating mundane tasks, what AI in schools will actually do is help the adults check an increasing number of boxes, without said adults putting in much additional effort.
Teachers, who can’t mount a defense against cell phones in schools, are not going to be able to get the ship of learning back on course, and certainly won’t if AI is widely accepted in public schools. Administrators have zero reason to fight AI because their job is based on scoring political points with the higher-ups in the District office and nothing is better for an administrator than spending a ton of money on a program whose efficacy is hard to pin down but that results in better data.
In fact, the more I talk about AI, the more often teachers tell me they’re using it. They use it to plan their lessons. They use it to write their tests. They use it to make worksheets and practice assignments. Most infuriating (and please congratulate me for having thus far been able to control my face in reaction to hearing this), teachers use it to skip studying the material they’re supposed to teach. AI tells them what Animal Farm and Hamlet and To Kill a Mockingbird are about, the teachers themselves rarely consulting Orwell or Shakespeare or Lee directly more than once, if that. AI writes lecture outlines for history teachers. AI creates slide decks for biology classrooms. Teachers outsource “the deep work” to ChatGPT and let it craft the formative assessments they’ll use to hold students accountable for doing the work to understand the things the teachers didn’t do the work to understand. Hilariously, the AI teachers lean on to create assignments will access and aggregate the same source material as the AI students use to cheat those assignments.
And round and round we go, slouching toward Gomorrah.
In destroying teaching, AI has the potential to completely throw away the time we, as a society, have bought and paid for with our labor so our children spend their days in school instead of at work. (Of course, you’ve gotta hand it to the public schools for doing most of the heavy lifting on that already.) Teaching well at scale is way more involved than what AI can do. Taking a set of impossible-to-achieve standards and the garbage textbooks the schools provide to teach them all, expanding on what was best, breaking it down into chunks, scaling it in support of differing student achievement levels, and finding ways to constantly assess then re-teach it all is what made me an excellent teacher. It’s also what allows me to laugh when I do play with LLMs and realize what absolute garbage takes the models regurgitate based on keyword skims of published materials.
The push to integrate AI into a K12 school system concerned only with the appearance of learning, measured largely by metrics leadership can game, will make teachers with content expertise seem irrelevant. The consequences of that for students have yet to be seen, though we can get a glimpse based on outcomes in classrooms headed by teachers not up to this increasingly difficult job in the modern, digital school. Such teachers often shift the work of teaching to the learner under the auspices of “discovery” or “skills-based” or “mastery” learning.
Across the country, students are handed a device and told to “discover ______” on their own. Students who attempt work like this without the helping hand of AI rarely do it well, frankly. As you may be aware, the overwhelming majority of people who search Google never look past the first page of results. Students are no exception. They take what Google gives them, dutifully assemble a slide deck or some other “evidence of learning” that rarely shows much depth of understanding, and learn very little in the process. Equitable grading policies mean there’s no real need to produce any evidence that the learning was meaningful; something just needs to be turned in that can then be handicapped for trauma, socioeconomic status, race, sexuality, disability, whatever, and given a passing grade. Still, this was once a time-consuming process, even when rushed and last-minute. Kids had to do some reading and a bit of actual work to create a coherent presentation of their findings.
Keep in mind that most teachers are just starting to be encouraged to integrate AI into their lessons. Students, on the other hand, have already embraced it. It’s a huge time-saver. Students can just put their teacher’s prompt into the AI command box and get a complete project produced at a particular grade level (to prevent suspicions of cheating, of course) in less than five seconds. The only thing a kid needs is a few minutes to type or voice-to-text a prompt. After that, they can get back to whatever they actually want to do on their school-provided devices, which, if my old students are any indication, is near-mindless consumption of dopamine-delivering short-form content for hours and hours on end.
Now, perhaps an ethical, forward-thinking teacher, reasoning that prompt engineering is the most important skill a human brings to AI, would require students to produce and turn in transcripts of their AI conversations as evidence of learning. Adorable, really. But the K12 system will give that teacher 150+ if not 200+ students. And those transcripts, if AI is being used daily, will need to be read. How in the world will a teacher do that? And how in the world would a teacher know if a very savvy kid just opened two different LLMs and prompted one platform with the information the other LLM produced, asking the other AI to pose probing questions at the ____ grade level to continue the conversation, all while browsing YouTube Shorts on his personal cell phone? There’s no way a human teacher will be able to accurately assess what her students are learning, and she certainly won’t be able to gauge how much they’re cheating the process, at least given the current constraints of the factory schooling model still in play everywhere in the United States.
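To make the point concrete: the cheat described above is a few lines of code, not a feat of hacking. Here is a hedged Python sketch; the `ask()` helper stands in for whatever chat API each LLM exposes and is an assumption for illustration, not a real library call:

```python
# Hypothetical sketch of the two-LLM transcript cheat described above.
# `student_llm` and `tutor_llm` are assumed to be callables that take
# a prompt and return text; ask() is a stand-in, not a real API.

def ask(model, prompt):
    """Stand-in for whichever chat-completion call `model` exposes."""
    return model(prompt)

def fake_transcript(student_llm, tutor_llm, topic, grade, turns=6):
    """Have one LLM interrogate the other and log a fake 'learning' chat."""
    transcript = []
    question = ask(tutor_llm, f"Ask a probing question about {topic} "
                              f"at a grade-{grade} reading level.")
    for _ in range(turns):
        answer = ask(student_llm, question)       # the "student" replies
        transcript.append((question, answer))
        question = ask(tutor_llm,                 # the "tutor" follows up
                       f"Given this answer, ask a follow-up question "
                       f"at a grade-{grade} level: {answer}")
    return transcript  # hand this in, then back to the Shorts feed
```

A kid who writes (or copies) that loop once can reuse it all year, which is exactly why transcripts are no safeguard.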
That means that if you want to roll out AI, you almost certainly have to adopt the 1:1 AI tutoring model, which will meet the learner where she is and build her capacity over time, in direct response to her input. For math, okay, fine; I’ve seen some great adaptive learning programs. I can get behind that. But for literature? History? Government? Economics? Composition? How would a teacher police what an individualized, 1:1 AI tutor is teaching her students? Schools are already opaque to most parents, so in a 1:1 AI tutoring K12 scenario, how in the Sam Hill would a parent know what their kid was doing?
The technology Huxley and Orwell and Postman feared most is old hat to our kids. If these men were alive to see our brave new world, they’d probably laugh sardonically and be sorely tempted to emulate Hemingway’s end. You’d better believe AI will increasingly be used as a tool in classrooms, and will likely become more invasive over time, all in the name of accountability. China is already experimenting with facial scanning and heart rate monitoring of students, collecting not just student answers but the biometric data students produce as they are exposed to new lessons and information. Though I have clearly read too much dystopian literature, it’s not a stretch to imagine that an LLM could be programmed to ensure students respond correctly to questions like “Why is socialism the most humane economic system?” and “Why is President X the greatest American statesman?” and then not allow a student to move on in her studies until she regurgitates the approved answer. We won’t know exactly what the AI will be programmed to do until enough kids with enough parents who are paying close attention go through it. That could take a really, really long time if 1:1 AI tutoring doesn’t leave a record that can be accessed by you, the parent. Due to increasingly parent-excluding interpretations of FERPA, you may not be able to access any such transcripts. And even if you could, would you have the time to review them daily?
I’d be remiss if I didn’t also mention this: it’s highly likely that while competing LLMs for private customers of differing religious affiliations, ideologies, worldviews, etc., will be available, the vast majority of students will be exposed to the single LLM the government decides is best, as that one will enjoy monopoly status in the public schools. (For those in the know, adoption of an LLM will be like textbook adoption, and all that implies.) A state-subsidized monopoly like that will crowd out smaller, leaner competitors with narrower target markets.
Just like LLMs built to produce results that resonate with the beliefs of a particular market, the LLM adopted by the state for school use will also be customizable. The in-school AI will be a hard-to-monitor moving target that a central authority, either at the Federal or State level, could change whenever the prevailing political winds change. Damnably, this also means the government could use the state-subsidized LLM to train particular beliefs into the up-and-coming generation through 1:1 AI tutoring in its free public schools, thus using “free” public education to change the political winds, normalizing and advancing the state’s agenda through total control of what students are exposed to in the classroom.
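Mechanically, that kind of customization is no great feat. Most chat-style LLMs take a hidden system prompt that steers every answer, and that prompt can be changed centrally at any time. Here is a minimal, hypothetical Python sketch; `fetch_policy()` and the `chat` callable are assumptions for illustration, not any real vendor’s API:

```python
# Hypothetical sketch of centrally controlled LLM customization.
# fetch_policy() and the `chat` callable are illustrative assumptions,
# not any real vendor's or government's API.

def fetch_policy():
    """Pretend this pulls today's approved framing from a central server."""
    return ("You are a tutor. Frame every answer so it affirms "
            "the current curriculum guidance.")

def tutor_answer(chat, student_question):
    # The student sees only the answer. The system prompt that shaped
    # it can change overnight and never appears in any transcript.
    return chat(system=fetch_policy(), user=student_question)
```

Swap one string on one server and every classroom in the state teaches the new line the next morning; there is nothing for a parent to audit.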
There ain’t no such thing as a free lunch, parents. The vast American middle class has long been the great bulwark of liberty, but the vast American middle class is also the most likely to entrust the education of their children to the state-run and taxpayer-funded public schools.
Imagine your child being referred to counseling when the weekly AI student data report pings the counselor to check into his high biometric stress indicators around topics like gender, racism, and bias, and during socioemotional learning lessons. Imagine his grades going down because he won’t just give the “right” answer, no matter how many times the LLM prompts him, no matter how many different ways it tries to reteach him. Imagine how bad his grades might be. Imagine what that might mean for his options after high school.
Worse, though, is this: what if your kid plays along, year after year, because he knows grades are important? It’s easier to go along with a lie than to make himself a target by fighting the system. It’s not a big deal, right? He doesn’t need to learn all this stuff anyway. He may never open a book or even write a paper, instead using the AI to produce, in mere seconds, work that would have taken him hours.
The AI will do its job and only prompt him with the right answers, whatever those may be. His GPA will even indicate high intelligence, just not a human one.
If you appreciate this and believe that my essays, podcasts and lesson plans will be useful to American families recovering control of their child’s education (even if they can’t fully control their schooling) please consider subscribing to support my work or buying me a coffee and contributing what you can. If you can’t afford to help, know that I intend to provide the most important posts to support you in teaching your own children free of charge.