TLDR: Most of her students have delegated doing their homework to ChatGPT.
Well I agree with her: fuck that. She makes a good point that writing is not some busywork consisting of transcribing thoughts, it is thinking. I can certainly understand the frustration of correcting LLM slop for days. If the student can’t be bothered to write it, why would the teacher be bothered with correcting it? Just ask ChatGPT to correct "your" homework and put "AI Prompter" in your resume. Apparently it pays really well.
…OK I’m not being nice, I’ll step back a little. The article touches on an interesting concept:
Using ChatGPT to complete assignments is like bringing a forklift into the weight room
Why are students bringing the forklift into the weight room though? Is it because they don’t give a shit? Or are the stakes too high and they don’t trust their own abilities? Do they have the time to even try between their work shifts?
Is it because they don’t give a shit? Or are the stakes too high and they don’t trust their own abilities? Do they have the time to even try between their work shifts?
Likely a mix of all these factors and more. I think the author fails to critically examine how much skill is necessary for the average person, and sets a bar of mastery which many of her students are clearly uninterested in clearing.
While I don't say this as a criticism of the author, it is worth pointing out that she's also failed to adapt to the new technologies. She talks about how teachers will need to adapt to the new tools but ultimately places the blame on the students rather than reconsidering who her audience is. I'm guessing these are not individuals who are honestly pursuing a career in writing, as those individuals would likely be much more engaged with the subject and willing to grow their skills (unless it's purely a means to an end: the acquisition of any degree). Using a tool which obscures stylistic choices may be "good enough" for these individuals, and accommodating this tool effectively would require a shift in teaching style, one that gets them asking questions of the output. She recognizes this, but rather than questioning her teaching style, it's written off as a failure of the students' ability to withstand the 'temporary discomfort of not knowing'.
I think you have some good points, except that these are not average people but grad students being taught writing for PhD programs. I think that level of study necessitates mastery by its nature: you have to write original work for (most) doctorate degrees.
While I think there may be more to pull apart here, I think we're missing the necessary context to weigh in any deeper: how many assignments there are, what the assignments look like, whether they feel like busy work, how much else is going on in the students' lives, etc. It would be telling (albeit not all that surprising, as some are still just looking for a degree at that level) if they were using ChatGPT on their doctorate, but even in that case I would perhaps argue that learning to use ChatGPT tactfully, in ways which aren't the direct writing, might be a useful skill to have for future employment.
While I don’t say this as a criticism of the author, it is worth pointing out that she’s also failed to adapt to the new technologies. She talks about how teachers will need to adapt to the new tools but ultimately places the blame on the students rather than reconsidering who her audience is.
How would you propose adapting to this? Do you believe it's the teacher's responsibility to enact this change rather than (for example) a principal or board of directors?
The average teacher does not have the luxury of choosing their audience. Ideally you'd only teach students who want to learn, but in reality teachers are given a class of students and ordered to teach them. If enough students fail their exams, or if the teacher gives up on the ones who don't care, the teacher is assumed to be at fault and gets fired.
You can theoretically change your exams so that chatbot-dependent students will fail, or lower your bar because chatbots are "good enough" for everyday life. But thanks to standardized testing, most teachers do not have the power to change their success metrics in either direction.
This article is about PhD students coasting through their technical writing courses using chatbots. This is an environment/application where the product (writing a paper) is secondary to the process (critical analysis), so being able to use a chatbot is missing the point. Even if it weren't, cancelling your technical writing class to replace it with an AI-wrangling class is not a curriculum modification but an abdication. Doing that can get your program canceled, and could even get a tenured professor fired.
The author was really stuck between a rock and a hard place. Re-evaluating the systemic circumstances that incentivize cheating is crucially important -- on that we absolutely agree -- but it's a responsibility that should be directed at those with actual power over that system.
[Edit: taking the tone down a notch.]
How would you propose adapting to this? Do you believe it’s the teacher’s responsibility to enact this change rather than (for example) a principal or board of directors?
To be clear, I'm not blaming anyone here. I think it's a tough problem and frankly, I'm not a professional educator. I don't think it's the teacher's responsibility and I don't blame them for a second for deciding that nah, this isn't worth my time.
This article is about PhD students coasting through their technical writing courses using chatbots. This is an environment/application where the product (writing a paper) is secondary to the process (critical analysis), so being able to use a chatbot is missing the point.
Completely agreed here. I would have just failed the students for cheating if it were me. But to be clear, I was talking in more the abstract, since the article is written more about the conundrum and the pattern than it is about a solution. The author decided to quit, not to tackle the problem, and I was interested in hearing them follow that thread a bit further as they're the real expert here.
There's a part of me that thinks some of the blame should go to outdated pedagogical methods. Designing learning experiences and testing modalities is hard, and most higher education for educators doesn't place enough emphasis on meeting the challenges of the modern classroom.
I'm not an educator, but the best teachers and professors I've had came up with ways to check for understanding, not just retention.
In the case of ChatGPT, maybe we have to admit writing papers is not as effective a teaching tool as we've given it credit for?
In the weightlifting analogy maybe it means understanding the reason one goes to the weight room. If it is to lose weight, for example, there are other ways. If it is to be able to lift heavy things maybe go lift some heavy things. If you don't care about any of those things then you bring a forklift.
Now, in defense of teachers like the one mentioned in the article, the entrenched administration and bureaucratic systems are likely the largest barriers to this sort of innovation.
I supervise exactly one person, and it's a ton of cognitive effort to analyse his level of competence, determine what skills, paradigms, and knowledge he should gain, and then create tasks that expose him to those things, help him internalize them, and then reinforce them.
He's a great colleague.
Makes sense to me. For teachers that's their whole job (mostly... research professors and the like have other responsibilities, of course). And multiplied by a hundred or more.
Imagine something comes along that invalidates all of the tricks of the trade that have helped make it manageable. You're back to square one. I don't envy the position educators are in.
I assure you that I do not give a shit about writing a memo discussing the ethics of the Challenger disaster. It's an assignment just to write something.
I don't care, my teacher doesn't care, the world doesn't care. It's a grade.
This is 95% of writing assignments in my experience.
Good for her to quit; if she can't make her lessons worth doing, she shouldn't be teaching.
I thought this was going to be about middle and high school students, not PhD students 😬
I wonder if a way to deal with this is to just accept that students are going to use LLMs and have them bring in both their prompts and what output they think looks good. That way you can at least bypass the lie that they’re not using ChatGPT, try to see what they’re going for and what they’re not confident about or don’t understand, and have some honest discussions about that. Trying to instill the idea that using an LLM should be close to the beginning of the process and not the end would be nice, but my hopes aren’t high on that front.
I think you might be onto something. A research paper or thesis, when boiled down, is just a product. How the product is made is difficult to determine, and there's an inherent incentive to make it the best product by any and all means. But if it were instead a process that was facilitated and had to be done in person, that could be controlled more tightly.
Glad she quit; if she can't make her lessons worth following, then she shouldn't be teaching.
The lesson was that students shouldn't take the easy way and use AI. I'm sure this was communicated to some degree, not that it needed to be.
Her opinion was clear, yes. Regardless, I still think this is a clear sign of a bad teacher.
Is it possible that the curriculum is too high-level for the students enrolled in it, but they are being made to enroll in it without a path through a lower-level course first?
I don't really have a great understanding of university course structure for reference.
Edit: Read some more comments below. Got my fill of theorizing about why students might be trying to take the easy way or why teachers might be struggling to educate effectively. Feel free to ignore me lol.
I teach as a tutor myself occasionally. You need to be able to understand and meet your students where they're at. Sometimes universities definitely do give teachers quite a challenging situation. But when you can't get your students to give a shit, you've failed as a teacher
I wonder if future generations will have to write everything by hand to prove it's not ChatGPT.
In cursive.
I guess it depends on the subject matter, but in my college courses I remember writing many timed in-class essays (for midterms, exams, etc.).
However, I studied mostly humanities (English, history, philosophy, etc.) in university, so that might be why. But with the rise of LLMs I find it strange this has not become more common, given the situation.
I realize the article is about PhD students so this isn't exactly on point, but still.
That'd be stupid
It also wouldn't prove anything useful, since students could simply use AI to author it and then just write it out. On top of that, writing it with a pen would make it harder to run automated AI detection.
Something tells me they didn't think this all the way through.
Automated AI-detection doesn't work. That's discussed in the article. Even OpenAI deprecated their detection tool.
I mean, detection is and will continue to be an arms race. I've had some success in tests, especially with smaller or older models (see the sketch below). Of course OpenAI will not want to pay to develop the detection tools, given the misaligned incentives there. It would be silly to expect them to do so without a government mandate, or something of the sort.
But all that is tangential to my point, since hand-writing would still only change how convenient it is to cheat.
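For anyone curious what that testing looks like in practice, here's a minimal sketch of the perplexity heuristic many automated detectors are built on. This is not any specific product's method; it assumes the Hugging Face transformers library with the small "gpt2" checkpoint, and the cutoff value is invented for illustration. The idea is that generated text tends to look "too predictable" to a language model, which is also exactly why it's an arms race: better generators produce text whose scores overlap more and more with human writing.

```python
# Sketch of a perplexity-based AI-text heuristic (illustrative only).
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the scoring model; lower = more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing the input as labels makes the model return the mean
        # cross-entropy loss over the sequence.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

essay = "The Challenger disaster raises enduring questions about engineering ethics."
# Hypothetical cutoff: machine text often scores lower than human text, but
# the two distributions overlap badly, which is why these tools misfire so often.
if perplexity(essay) < 30.0:
    print("possibly machine-generated")
else:
    print("probably human (or a newer, better model)")
```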
students could simply use AI to author it and then just write it out.
Many companies voluntarily offer solutions to problems they themselves created, to try and prevent government regulation. This isn't a new thing; the MPAA, now over 100 years old, is a perfect example.
K
In maths, we have calculators, including ones that break down every step for you. Things like Wolfram Alpha have been around for well over a decade now.
How did we solve the problem for maths?
We didn't. Students cheat on their homework and do poorly on tests and beg and whine for ways to make up the grade with other cheatable assignments.
Source: taught math in universities for 6 years
But isn't the problem solved, then? They can't pass the exam if they don't study; I don't see why ChatGPT changes this.
You might think most people who pass math classes learned what the class is about, but that's not correct. People pass by learning slightly less than the bare minimum and cheating.
That is a very "one size fits all" opinion. I am very gifted in math at a baseline but failed to fully master the fundamentals, and I have no ability to memorize a formula I would just look up in the real world. I only ever cheated by adding the formulas to a program in my graphing calculator.
However, even with these flaws, I graduated high school having taken only geometry and algebra, not trig. I tested past all the pre-trig classes by figuring out roughly what sin, cos, and the rest are from context clues.
I then failed hard, as you can imagine: though I fully understood the concepts, I would make small errors in computations, and I wouldn't write out everything I did step by step because I was so used to doing things in my head before the teacher was done with the question.
Everyone is different. Signed, someone with zero interest in learning math who probably would have made some breakthrough with proper mentorship.
I was also never ever taken aside by a teacher who showed any interest at all in my math abilities. Still bitter about it.
Math is one of those subjects where you need both muscle memory and full conceptual understanding.
Unfortunately there is no other skill that translates well into giving you that, the way geography would for history, or English for another language, etc.
It requires very deliberate practice, which is painful and effortful, so I understand how hard an obstacle it is for students out there.
Well, it was a generalization after all. The point is that cheating is easy and indeed people do it. Not that nobody ever failed because they weren't given enough of a chance.
Is it because they don’t give a shit? Or are the stakes too high and they don’t trust their own abilities?
Or is it because kids are dumb?
Nah, even the smart ones use ChatGPT: https://www.smithsonianmag.com/smart-news/this-award-winning-japanese-novel-was-written-partly-by-chatgpt-180983641/