ablerism

Hitting a new low of discouragement about the widespread use of ChatGPT among students, so if you've got links to frameworks, insights, or even-more-creative assignment design, do please share.

jonah

@ablerism I substitute taught at a Christian jr. high and all of the students wrote with AI -- it's treated as normal. Their textbooks even included fiction written in online messaging-speak. I don't know what to do with all that.

lukemperez

@ablerism I've settled on the notion that generative AI is a calculator for words. It makes as much sense to ban or discourage its use in writing assignments as it does to ban calculators in an engineering class. But just as a science class may sometimes choose or need to evaluate students without calculators, so too may a humanities or social science class need to require an evaluation without AI. Everything below is a work in progress; some of it works well at my university with my students, and some of it needs more refinement.

Lower-division classes, like the freshman humanities class I'm teaching this fall, get the most oversight and the most changes. I introduced these one class or one term at a time so I could work with a single new pedagogical variable at a time.

  1. No more papers. We spend 5-10 min of every class writing by hand. I give a prompt and score it on a simple 5-3-1 scale: 5 for a perfect answer; 1 for anything at all, even "I have a purple dog" or (as one student wrote) "shit, sorry. I didn't read"; 3 for anything in between.
  2. Exams with multiple choice are back. This term I scaffolded three exams: the first was strictly MC, the second was MC plus short answer, and the final will be short answer plus an essay in a blue book. The short-answer questions on the second exam test cumulatively across units 1 and 2, while its MC questions cover only the unit 2 readings. On the third, the short-answer questions test only unit 3, but the essay questions evaluate across all three units. The idea is to scaffold up from "can students remember basic black-letter-law questions about the text" on the first exam to making interpretive comparisons across the whole semester at the end.
  3. The final will be open book, closed notes. That won't help students who haven't prepared and read: if you write enough short-answer and essay questions, they won't have time to check everything. But it lets them look up page numbers, and you can expect greater depth and accuracy because of it.
  4. I have also experimented with reading quizzes on Canvas. I learned to use r-exams and so can write quizzes that shuffle both the questions and the wrong answers (see the sketch after this list). There's a bit of a learning curve, but the payoff is that even if two or three students take a quiz together, they can't guarantee they'll be able to coordinate. I don't use any lockdown browser; I loathe surveillance academe. Instead I aim for about 6-7 minutes for ten questions. I've found that if I write the questions well, students have enough time to look up maybe one or two of them. And most of the time, the higher the grade, the faster the student took the quiz.
  5. The main thing is to shift the weights around. I think my in-class reflections are worth 50 percent, and I drop two, but we have nearly 30 of them this semester, so it rewards students for keeping up with their work. I might shift the weights when I teach this class again.
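
Roughly, the shuffling in #4 works like the toy Python sketch below. This is just an illustration of the mechanic, not r-exams itself (which is an R package that generates the randomized versions for import into an LMS like Canvas); the questions and names are made up.

```python
import random

# Toy question bank. In r-exams these would be exercise templates.
QUESTIONS = [
    {
        "prompt": "Which chamber of Congress ratifies treaties?",
        "correct": "The Senate",
        "distractors": ["The House", "The Supreme Court", "The Cabinet"],
    },
    {
        "prompt": "Who wrote Federalist No. 10?",
        "correct": "James Madison",
        "distractors": ["Alexander Hamilton", "John Jay", "Thomas Jefferson"],
    },
]

def build_quiz(questions, seed):
    """Build one student's quiz: shuffled question order, shuffled choices."""
    rng = random.Random(seed)  # per-student seed keeps each quiz reproducible
    quiz = []
    for q in rng.sample(questions, len(questions)):  # shuffle question order
        choices = [q["correct"], *q["distractors"]]
        rng.shuffle(choices)                         # shuffle answer order
        quiz.append({
            "prompt": q["prompt"],
            "choices": choices,
            "answer_index": choices.index(q["correct"]),
        })
    return quiz

# Two students taking the "same" quiz see different orderings, so sitting
# together doesn't let them coordinate answers by position.
for student in ("student-01", "student-02"):
    print(student)
    for item in build_quiz(QUESTIONS, seed=student):
        print("  ", item["prompt"], item["choices"])
```

The per-student seed matters: any individual quiz can be regenerated exactly for regrading, even though no two students see the same ordering.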

Next spring I'm teaching a 200-level lecture class on American political institutions. There's no way the reflections can work for 100-120 students, so I'll need to tweak things. TBD.

For my senior-level seminar on American Strategic Thought, also in the spring, my approach is much simpler. All assignments are written, and I don't want to change that. So a raw ChatGPT or Gemini output submitted for an assignment = 70 percent: passing, but barely. I also have participation points, so someone who phones it in with nothing but AI won't pass. The trade-off, of course, is that what it takes to earn 90 percent or higher has gone up. I am up front with my students that this reflects what most of their first post-college jobs will look like.

Fwiw, a colleague of mine at Michigan State does everything in Canvas with no lockdown software. She tweaks prompt difficulty, question difficulty, and time limits so that if students are using AI, it's like using notes or books: they have enough time to draft the start of an answer but not enough to do all the questions on any assignment or exam. I think she got to a similar place as I have with much less front-loading.

ablerism

@lukemperez Thank you so much for spelling all this out! Useful. I have done some versions of these but not to the extent you have.

lukemperez

@ablerism no problem. It all comes down to evaluation and whether AI would distort it. If it would, then we need another evaluation tool; if not, then don't worry about it. My prior is that I need less AI with freshmen so I can see what they're learning, but for seniors, who have a lot of knowledge behind them, it might not matter as much.

KyleEssary

@lukemperez Luke, thank you so much for all of these tips. I find this enormously helpful.

KyleEssary

@ablerism My context includes seminary grad students. Many come from non-English-speaking backgrounds. Some have educational backgrounds that discourage critical thinking and independent thought. Others studied at top global universities in the UK, Australia, Singapore, and China. It's quite the mix.

  1. Like Luke, I use multiple choice, and I also give short-answer essays and insist on handwritten work done under my supervision or a proctor's.
  2. In smaller classes, I have used viva-style dialogues with students instead of short-answer essays.
  3. Students hate this one, but I keep a fairly standard list of take-home written assignments for my courses. I make sure our library has as good a set of resources on those topics as we can afford, and then I insist that students cite only resources from our library or from a set list of academic journals. I also insist that a major portion of their writing consist of critical interaction with these sources. This constrains the boundaries of their research, but it also minimises their ability to develop arguments directly from generative AI apart from their own input and critical thinking (see the sketch below).
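
Mechanically, that constraint amounts to something like the toy check below. It's purely illustrative: the journal list and the bibliography format are made up, and the real check happens while grading, not by script.

```python
import re

# Hypothetical allowed-source list: library holdings plus a set list of
# academic journals. In practice this would come from the library catalogue.
ALLOWED_SOURCES = {
    "Journal of Theological Studies",
    "Novum Testamentum",
    "Vetus Testamentum",
}

def flag_outside_sources(bibliography_text):
    """Return entries whose venue isn't on the allowed list.

    Assumes one entry per line with the venue set off in *asterisks*, e.g.:
    Smith, J. "Title." *Journal of Theological Studies* 70 (2019).
    """
    flagged = []
    for entry in bibliography_text.strip().splitlines():
        match = re.search(r"\*([^*]+)\*", entry)  # venue between asterisks
        venue = match.group(1).strip() if match else None
        if venue not in ALLOWED_SOURCES:
            flagged.append(entry)
    return flagged

sample = '''
Smith, J. "The Binding of Isaac." *Journal of Theological Studies* 70 (2019).
Doe, A. "Gen 22 Reconsidered." *An Unvetted Venue* 3 (2024).
'''
for entry in flag_outside_sources(sample):
    print("Outside the source list:", entry)
```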

jonah

@KyleEssary this is really good! Limiting students to specific sources is a great way to encourage engagement.

lukemperez

@KyleEssary use it, make it your own.

ablerism

@KyleEssary Thank you, Kyle! Can you say more about #3 — what is preventing them from finding digital sources of the same library materials online and using AI from there?

KyleEssary

@ablerism Because I'm so familiar with the set list of sources, I can discern whether the critical interaction in their paper is human or AI. It also frustrates their ability to type "Outline a theology paper on Gen 22" into a generative AI and have it produce what I'm requesting.

ablerism

@KyleEssary ah, I see — thank you!
