Develop your policy

Funnel down the big ideas

As you craft your policy, it can be helpful to review those of your colleagues across institutions. You will also need to adapt and personalize your own policy based on your institution’s guidelines, your student body, the subset of your student body that you teach, the kinds of classes you teach, and numerous other factors. In short, it is time for some rhetorical analysis.

Analyze your context

Here are some questions to help you get started with this analysis:

1. What is my mental model of ChatGPT or other conversational agents?

What metaphors do I use to describe ChatGPT? For instance, do I think of ChatGPT more as a graduate research assistant? My student’s smart roommate who helps them with homework? The school delinquent who supplies lazy students with pre-fabricated homework answers? Don’t judge yet whether this model is correct or helpful; just bring it to the surface.

2. Based on what I already know of the students who take my classes, what do I expect their mental models for ChatGPT to be?

Do students think of ChatGPT as a teaching assistant who is awake and responsive at all hours? A peer reviewer or pair programming partner who is talented but needs to be trained? A glorified search engine? An essay mill? If you are unsure, you might consider circulating a survey at the beginning of the semester to gather more information on student attitudes toward generative AI assistance.

3. What are my institution’s priorities for student learning?

Even if you think you know, it can be helpful to take a few minutes to review your institution’s mission statement, strategic plan, and demographic information. Search for terms like “enrollment and demographic reports” along with your university’s name to see what information is available on your student body. Take some time to think about the goals your students have for themselves in your class and how these goals relate to those of the institution, your department, and your individual courses.

4. How is my own work-life balance faring at this point?

Am I overwhelmed? Busy but satisfied? A little bit bored and interested in experimenting with my curriculum? Do I anticipate a major life event such as surgery, pet adoption, having a child, moving, etc. in the near future? Think through whether the upcoming semester or quarter is the right time for you to experiment with a full overhaul of your curriculum or whether it is a time to dip your toe in with a single project or assignment. 

5. What part of my job gives me the most satisfaction?

Do I thrive most when I am:

a. Meeting individually with students

b. Working with small groups

c. Lecturing

d. Giving feedback

e. Designing lectures, workshops, or other curricula

To whatever extent possible, gear your revisions toward the parts of your job that you already enjoy. For example, if you like one-on-one student meetings, changing your assessment criteria to incorporate an in-person conversation on each student’s topic could also help you and your students explore the opportunities and limitations of generative AI assistance (and so be reflected in your guidelines). If this is not an aspect of your job that you enjoy, then sitting in on small-group student project meetings might be a better way to assess your students’ progress toward their learning goals and the thinking they do unaided by ChatGPT.

Compare your initial thoughts with the guidelines of your colleagues

The following example AI guidelines were posted to Lance Eaton’s resource on “Classroom Policies for AI Generative Tools” during the summer of 2023. The authors, like you, may have since changed their minds about some aspects of generative AI. These are useful examples, but the following discussion should not be taken as a criticism or endorsement of any particular author or policy.

Example 1: Dan Copulsky, Writing 2: Rhetoric and Composition, UC Santa Cruz

All assignments should be your own original work, created for this class. We will discuss what constitutes plagiarism, cheating, or academic dishonesty more in class. […] You must do your own work. You cannot reuse work written for another class. You should not use paraphrasing software (“spinbots”) or AI writing software (like ChatGPT).

Discussion

What is Dan’s rhetorical context? How do you think he envisions his audience? Is it important that this is a lower-division writing class as opposed to an upper-division major-specific class?

Tradeoff analysis

On the one hand, this policy’s simplicity is a real strength. Students are more likely to read it and to get a clear sense of what they should not be doing. It leaves a good deal open to interpretation, in the form of in-class discussion. This implies that Dan is fairly confident that his students are able to attend most classes and participate fully in such discussions. This may be a fair assumption at his institution, UCSC, where first-year students are required to live on campus and transfer students, who are more likely to live off-campus, have typically already satisfied the requirements for Writing 2 and so do not take this class.

The flexibility to adapt AI policy as the course evolves comes with a cost, however. Will it disadvantage students with health or family issues that impact participation? What about students for whom reading is the most accessible way to receive course information? Are there students in Dan’s class who rely upon being able to do assignments in advance, such as athletes or people with less flexible schedules for other reasons? How will waiting until an assignment is discussed in class impact their pre-writing processes?

On a more conceptual level, is it fair to characterize using ChatGPT as not “doing your own work”? Or, is there a way in which one’s own work can be combined with ChatGPT in an educationally rigorous manner? What does Dan lose by starting with the assumption that one’s own work and the work of ChatGPT are mutually exclusive? Does framing his discussion immediately after he talks about academic dishonesty encourage his students to see ChatGPT usage as morally suspect in a way that will inhibit their use of the technology beyond the limits Dan has in mind?

Example 2: David Joyner, CS6750: Human-Computer Interaction and CS7637: Knowledge-Based AI, Georgia Institute of Technology.

We treat AI-based assistance, such as ChatGPT and GitHub Copilot, the same way we treat collaboration with other people: you are welcome to talk about your ideas and work with other people, both inside and outside the class, as well as with AI-based assistants. However, all work you submit must be your own. You should never include in your assignment anything that was not written directly by you without proper citation (including quotation marks and in-line citation for direct quotes). Including anything you did not write in your assignment without proper citation will be treated as an academic misconduct case.

If you are unsure where the line is between collaborating with AI and copying from AI, we recommend the following heuristics:

  • Never hit “Copy” within your conversation with an AI assistant. You can copy your own work into your conversation, but do not copy anything from the conversation back into your assignment. Instead, use your interaction with the AI assistant as a learning experience, then let your assignment reflect your improved understanding.
  • Do not have your assignment and the AI agent itself open on your device at the same time. Similar to above, use your conversation with the AI as a learning experience, then close the interaction down, open your assignment, and let your assignment reflect your revised knowledge. This heuristic includes avoiding using AI assistants that are directly integrated into your composition environment: just as you should not let a classmate write content or code directly into your submission, so also you should avoid using tools that directly add content to your submission.

Deviating from these heuristics does not automatically qualify as academic misconduct; however, following these heuristics essentially guarantees your collaboration will not cross the line into misconduct.

Discussion

What is David’s rhetorical context? How might the learning outcomes of these graduate level classes differ from those of undergraduate level classes? How has David analyzed his audience and in what ways is he responding to them? How does his audience differ from yours?

Tradeoff Analysis

These AI guidelines take a different approach than most. Rather than enumerating instances that are prohibited or allowed, David offers two general heuristics (rules of thumb) to follow during the process of writing and creation. A strength of this approach is that it is easy to remember and to follow. It reinforces an atmosphere of trust and encourages students to explore AI technology for what it can do for them as a learning tool. He seems to be working with the understanding that his students are self-motivated to learn and recognize that they need certain skills to do well in their professions, so they presumably will not evade gaining and practicing these skills.

Long term, asking all students to actively engage with AI technology may work to increase equity (more research is needed on this topic). In the short term, however, students without reliable Internet access and students with fewer waking hours to devote to assignments (e.g., students with families or full-time jobs) may feel more pressure to use AI technology to provide quick answers rather than to deepen learning, because they will be assessed in comparison to peers who are using generative AI.

Example 3: Center for Teaching and Assessment of Learning, University of Delaware, examples of syllabus language #2: “Use only with prior permission” (not specific to course)

Students are allowed to use advanced automated tools (artificial intelligence or machine learning tools such as ChatGPT or Dall-E 2) on assignments in this course if instructor permission is obtained in advance. Unless given permission to use those tools, each student is expected to complete each assignment without substantive assistance from others, including automated tools.

If permission is granted to use advanced automated tools (artificial intelligence or machine learning tools such as ChatGPT or Dall-E 2), they must be properly documented and credited. Text generated using ChatGPT-3 should include a citation such as: “Chat-GPT-3. (YYYY, Month DD of query). “Text of your query.” Generated using OpenAI. https://chat.openai.com/” Material generated using other tools should follow a similar citation convention.

If a tool is used in an assignment, students must also include a brief (2-3 sentences) description of how they used the tool.

Discussion

What kind of class do you think the author of this policy had in mind? It is written to be fairly general, but can you detect assumptions about audience? What learning outcomes might an instructor who uses this policy be emphasizing? Does it make a difference that it is not targeted to a discipline or topic?

Tradeoff analysis

This policy casts ChatGPT as simply another influence, like a roommate or friend, who might help with an assignment. However, it includes more structured guidelines for documenting the part generative AI assistance played in the final product than one typically finds for, say, how much a roommate can help with an assignment. Reflecting on the tool itself is part of any assignment using AI tools. Does this approach discourage students from experimenting with generative AI? Is it clear about what constitutes appropriate or inappropriate usage?

Example 4: Rebecca Weaver, English 1101: Composition 1. Georgia State University, Perimeter College Clarkston.

How to use AI (such as ChatGPT, Wordtool, Quillbot, Spellcheck, etc) for this class*

  •  Each of your professors and major programs will have different policies about whether or not and to what extent they’ll allow you to use any kind of AI tool. You are responsible for understanding the policies in all of your classes. You need to know that some of your professors will see ANY AI use as “unauthorized assistance”—one of the violations of our academic integrity & honesty policy.

You can expect me to be transparent about what you need to learn in each assignment, and when it’s effective / ethical to use certain tools and when it’s not. In this class, some AI tools are ethical and effective and some are not. If you have a question about a specific tool, please ask me!

In the assignments where I specify that you can use specific AI tools, I’ll specify how much/what aspects fall under this “ok use” policy. You will be required to state what you used AI for in the assignment reflections that you submit with each project.

The most ethical and effective help you can get with your writing is by bringing drafts or even just your ideas to me, your SI, and our school writing tutors.

Ethical (does not violate principles of academic honesty or integrity)

  • Grammarly is more ethical on the spectrum, because it suggests grammar for you and doesn’t write for you (the same goes for MSWord and Google Docs grammar checkers).
  • Spellcheck, within MSWord, is ethical, as is the spelling check in Google Docs.
  • Easy bib, Zotero, and other tools that create citations for you: ethical (but see note below)
  • Paraphrasing bots: these are NOT ethical because they’re doing work that *you* should be doing, and some of them steal your data!
  • ChatGPT is NOT very ethical as a whole: it steals stuff that other people wrote and doesn’t give credit to those people (called “scraping”), is a terrible and stupidly disrespectful nerd robot, makes information and sources up, hallucinates, and steals your data. You will likely have to pay for it within the near future—either through a subscription or by giving it your data.
    • It can be used in some ethical ways, such as: creating meeting minutes, video transcriptions, meeting agendas, lists, brainstorming and idea generation (pre-writing), outlining, quiz writing, and trip planning.
    • You should NOT use AI for more than pre-writing and brainstorming—it’s not ethical in this class to have it write actual papers or projects for you or otherwise perform tasks that you are expected to do or learn how to do yourself. In short, ChatGPT isn’t taking this course; you are. You are here to learn how to write. You won’t learn how to write if you turn in a ChatGPT / AI paper. If you don’t want to learn how to write right now, you don’t have to take this class.

Effectiveness (helps you do your work better and saves you time)

  • Grammarly: You can learn a lot from it and other AI grammar/spelling tools such as those within MSWord and Google docs.
  • Spellcheck, within MSWord, is a VERY useful tool, and you should just default to using it for anything you write.
  • Easy bib, Zotero, and other citation tools that create citations for you: these are pretty effective, BUT, you need to know the basic parts of citations and what they mean (we’ll go over this in class), and: double-check their work—they will sometimes capitalize the wrong thing or miss words.
  • Paraphrasing bots: most of these aren’t worth the time it takes to click on the URL. They tend to get A LOT wrong. They’re pretty much garbage. Plus, using them doesn’t help you learn how to read a thing and write about that thing you read. This is one of the most important skills you’ll ever need in college.
  • ChatGPT: because of its ethical issues (see above) and limitations in its coding, it tends to get a lot wrong, so if you use it to write a paper, you’ll probably be spending twice as much time correcting errors and bad info than you would just writing your paper. In class, we’ll look at examples of AI-generated papers and how they often don’t pass this class.

Discussion

Rebecca teaches at a two-year college which transfers into a large public university. Her students come from a variety of backgrounds and many have time-consuming commitments outside of the classroom. Rebecca uses an “ungrading” approach to emphasize process and growth in response to feedback. This approach already requires her to meet frequently with students and engage individually with them. How might ChatGPT interact with these pedagogical decisions? In what ways are the challenges and limitations of ChatGPT in the classroom highlighted differently in Rebecca’s context as opposed to a more traditional grading context? 

Tradeoff analysis

Rebecca uses ethics and effectiveness as two guiding principles for student choice. This policy is strong in how it situates ChatGPT in relationship to other writing and instructional technology, such as Grammarly. Students will have a clear sense of how ChatGPT relates to existing technologies that they may already use. The level of detail means that the policy is quite long – about a page in a 13-page syllabus. This may discourage some students from a careful reading. The “effectiveness” portion of the policy may quickly become dated as ChatGPT becomes more adept and AI-generated papers more difficult to spot. In-person meetings with students will help Rebecca to monitor progress toward learning goals, but in a class that does not require individual meetings it may be more difficult to determine if a student or an AI has written a paper, particularly as ChatGPT improves with training. 

Write your policy

  1. If you teach more than one subject, decide if you will make a different policy for each class. 
  2. Decide the extent to which you will allow human-AI collaboration in your writing assignments. As a rule of thumb, stricter guidelines mean more work up front in reworking your assessment plan. More open guidelines mean more work during the semester to introduce prompt engineering and the limitations and capabilities of conversational agents. Be sure that whatever you decide does not contradict the position of your Office of Student Integrity, your program, your department, or your institution. 
  3. Explain in simple, second-person language to your students what you will allow and what you will not allow. Give reasons, but keep the statement as concise as possible while still covering expectations in detail. Cover the consequences students will face if they are found in violation of your guidelines and then stick to that policy. 
  4. Get your policy in front of users. Ask for feedback and integrate recommendations. To supplement human users, consider using ChatGPT or another conversational agent to reveal confusing or vague areas. For example,

Query 1: Act as a staff member for the office of student integrity. Suggest improvements to my guidelines for student use of AI [paste guidelines].

Query 2: Rewrite that, but in more concise language and as an instructor teaching a writing class.

Query 3: Act as a student and respond in an email to the instructor with questions about specific situations. 
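If you would rather script this check than paste queries into a chat window, the three queries above can be sent as successive turns of a single conversation, with each reply appended before the next query. The sketch below is illustrative, not part of any cited policy: the `run_policy_review` function and `send` callback are hypothetical names, and `send` is left for you to wire to whatever chat model or API you use (a stub is included so the flow can be tried offline).

```python
def run_policy_review(guidelines, send):
    """Run the three policy-review queries as one multi-turn conversation.

    `send` takes the message history (a list of {"role", "content"} dicts)
    and returns the assistant's reply as a string -- e.g. a thin wrapper
    around a chat API, or the offline stub below.
    """
    queries = [
        ("Act as a staff member for the office of student integrity. "
         "Suggest improvements to my guidelines for student use of AI:\n\n"
         + guidelines),
        ("Rewrite that, but in more concise language and as an instructor "
         "teaching a writing class."),
        ("Act as a student and respond in an email to the instructor with "
         "questions about specific situations."),
    ]
    messages, replies = [], []
    for query in queries:
        messages.append({"role": "user", "content": query})
        reply = send(messages)                       # model sees full history
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

# Offline stub standing in for a real model call:
def echo_send(messages):
    return f"[reply to turn {len(messages)}]"

drafts = run_policy_review("Students may use AI for brainstorming only.", echo_send)
```

Because each turn carries the full history, Query 2’s “Rewrite that” and Query 3’s role-play both refer back to earlier replies, just as they would in the chat interface.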

Once you have your draft and your user feedback, rewrite your policy with any necessary clarifications and add it to your syllabus. Consider incorporating a more detailed version on the syllabus and shorter, more targeted versions on each major assignment.