AI in the Syllabus
Brief Overview
As AI tools become more commonplace, it is increasingly important to consider how to address their use in academic settings. Maintaining academic integrity while balancing the opportunities and challenges inherent in AI use is both essential and difficult. Because academic disciplines are developing a variety of responses to AI, students who take more than one course in a semester are likely to encounter different AI policies. By producing a clear AI statement and discussing it with students on the first day of class, or in the introductory module of an asynchronous course, instructors can ensure that students have the information they need to meet expectations.
The Challenge
One of the biggest difficulties in setting AI classroom policies is the speed at which AI tools like chatbots are gaining wide-ranging capabilities. These tools are dramatically expanding the information students have access to and supporting them in producing content, while simultaneously reducing educators' ability to evaluate the origin of that content. Given this, it may be tempting to adopt a prohibitive classroom policy against the use of AI tools, but that stance is problematic (and potentially unethical).
First, it is nearly impossible to detect the use of AI, and therefore to enforce a prohibition, with any consistency or validity. Rigorous testing has shown that AI detectors are unreliable in most practical scenarios (Sadasivan et al., 2023). AI detectors have also been shown to be biased against English language learners (Liang et al., 2023). Another reason prohibitive policies may be problematic is more nuanced: the restrictions you impose may hinder the development of the skills and behaviors students need to engage thoughtfully in an AI-integrated world.
Rather than prohibiting AI use outright, one approach educators are using to address questions of information origin is to require students who use AI to reflect on and cite their use of AI tools (e.g., sharing their process and citing the prompts used to generate content). While this may be a useful way of integrating AI right now, it may already be somewhat outdated, because some AI tools, like Google's "Help me write" button and Microsoft's Copilot, require no prompts at all. These so-called "buttons" pose an interesting challenge, as discussed in a recent essay by Ethan Mollick (2023):
So why do I think this is a big deal? Because, when faced with the tyranny of the blank page, people are going to push The Button. It is so much easier to start with something than nothing. Students are going to use it to start essays. Managers are going to use it to start emails, or reports, or documents. Teachers will use it when providing feedback. Scientists will use it to write grants. And, just as we are seeing with Adobe incorporating AI into Photoshop, when AI gets integrated into a familiar tool, adoption becomes simple. Everyone is going to use The Button. ...With everyone pushing The Button for most emails, documents, and even (soon!) spreadsheets and presentations, what documents mean is going to change fundamentally, and that is going to spill over to our work.
A Flexible Solution
Considering the unreliability of AI detectors, the growing capability of AI outputs, and the ubiquity of AI integration, AI syllabus policies should be clear, flexible, learner-centered, and process-focused. Academic integrity remains essential, but how, when, and why AI may be used in any given discipline may need to be an ongoing conversation among institutions, educators, and students. Following are several examples of what policies that support that ongoing conversation might look like. These examples are based on contributions to a crowdsourced document (Eaton, 2023) that educators from around the world are using to share ideas about AI policies.
One final point: some educators are choosing to have students collaborate on the development of AI policies for their courses. This practice gives the instructor the opportunity to learn what their students think about AI and how they may be using it in a variety of contexts, including classes in other subjects, internships, and jobs. An added benefit is that when students have a voice in the development of a policy, they may be more likely to adhere to it.
Example 1
The following example reminds learners of the university's ethical standards, outlines requirements for referencing the use of AI, and engages learners in the co-creation of an equitable AI policy for the course:
We will use generative AI tools, including ChatGPT-4o and DALL-E 2, to enhance our learning in this course. Our use of AI will allow us to develop our understanding of this technology and examine the complex challenges and opportunities it offers to us, both as students and future professionals. The conversations we will have around AI, the potential implications of its use, and our need to be thoughtful in our approach will be essential to our ability to adapt, and to support others in adapting, to ever-evolving technologies.
In accordance with university policies around academic integrity in the Student Code of Conduct, we will be transparent in our use of AI in the completion of any classroom tasks. And, although there are questions around how and whether to cite AI, we will reference our use in the following way.
AI Citation Format: Title of AI tool. Prompt used, or a brief description of the topic searched, depending on the tool. Date of creation.
Because policies around the use of large language model tools will vary among courses, we will spend some time early in the semester co-creating a class agreement on the use of AI tools. The goal is to ensure that everyone has access to the tools and understands how to use them; recognizes the benefits and limitations of their use, now and going forward; acknowledges and respects differing perspectives on the use of AI; and feels confident about the options and requirements for using the tools in this class, what we are trying to accomplish with their use, and why it is or isn't appropriate to use the tools for certain course tasks. We will revisit the agreement, as needed, throughout the semester.
Example 2
The following example supports open use of available AI tools, encouraging students to thoughtfully consider and reflect on their use of AI through a required "AI Acknowledgement" for all assignments:
This course encourages and embraces the ethical use of Artificial Intelligence (AI).
As a student in this course, you will sometimes be required to incorporate AI tools in your work. You are also encouraged, when it makes sense to you to do so, to use AI in the completion of assignments. It is your responsibility, however, to be transparent in that use. It is also your responsibility to examine the inherent biases and limitations of the AI tools you use, while also considering the implications of their use for your individual learning, your work in this course, and the future expectations of your work.
To that end, you are required to thoroughly read and edit your assignment submissions, particularly any items created using AI. You are also required, for every assignment in this course, to submit an "AI Acknowledgement."
AI Acknowledgement: For every assignment submission, you will include a 150-300 word acknowledgement. Your acknowledgement should include: (1) the identification of the tool(s) you used; (2) an explanation of why you decided to use the tool(s); (3) a description of how you used the tool(s) to manage assignment requirements; and (4) a reflection on your experience using the tool(s), exploring what worked or didn't and acknowledging the limitations of the tool(s) for this assignment, potential biases, etc. If you opt not to use AI tools when they are not required, please use the "AI Acknowledgement" to highlight your non-AI approach and/or your reasons for deciding not to use certain tools.
The approach to AI use in this course and the inclusion of this acknowledgement in our work will help us to develop our skills for using AI while maintaining our academic integrity.
Example 3
The following example clearly establishes parameters for use and explains why the instructor feels that extensive use of AI might interfere with learners' abilities to develop foundational skills:
Large language model and other generative AI tools are not to be used to complete work for this course unless you are specifically directed to do so.
Your work in this course should demonstrate your learning and your ability to apply that learning through critical thinking and the development of your own ideas. Although many tools (technological and human) exist to help you apply your learning more efficiently, to use those tools effectively you need to be skilled in your own right. The development and demonstration of those foundational skills is the aim of this class, and it is why I ask you to embrace the challenge of thinking through and developing your responses to assignments without the support of AI.
If we do use AI in our work, however, information about accessing and using specific tools for specific assignment components will be clearly outlined for you.
Please let me know at any time if you have questions or concerns about the use of AI in this course.
References
- Eaton, Lance. "Syllabi Policies for AI Generative Tools." Crowdsourced Google Doc. Last updated 8 March 2023.
- Liang, Weixin, Mert Yuksekgonul, Yining Mao, Eric Wu, and James Zou. "GPT Detectors Are Biased Against Non-Native English Writers." arXiv, 18 April 2023.
- Mollick, Ethan. "Setting Time on Fire and the Temptation of The Button." One Useful Thing (oneusefulthing.substack.com), 2 June 2023.
- Sadasivan, Vinu Sankar, et al. "Can AI-Generated Text Be Reliably Detected?" arXiv preprint (Computation and Language), 17 March 2023.
- Wilhelm, Ian. The Chronicle of Higher Education, 12 June 2023.