
Introduction

Generative Artificial Intelligence (hereafter referred to as GenAI) broadly refers to a branch of computer science that uses machine learning and other advanced algorithms to generate content de novo. This content includes text, images, code, audio, and video, and is created from prompts or other user inputs. Notably, the output produced by GenAI systems such as ChatGPT is often indistinguishable from that of a human. This development has created a need for establishing guidelines in the university setting that are concrete enough to provide immediate direction on best practices, including syllabus development, but remain malleable as we continue to research this topic and gain a better understanding of its role in university pedagogy.

Based on work from the Centre for Teaching and Learning (CTL) and other units at Concordia University, in addition to current literature on the topic, we are aware that there are diverse teaching approaches and learning outcomes across campus that necessitate tailored, context-specific guidance for GenAI and not blanket, university-wide policies. For this reason, the CTL has generated the following guidelines while further consideration is being given to university policy.

Given the diversity of pedagogical approaches on our campus and the unreliable nature of GenAI detection tools, Concordia’s optimal strategy is to assist educators in understanding the quickly evolving nature of GenAI so they can make informed decisions about its use in their classes. Above all else, it is essential that professors dedicate time to articulating their positions on GenAI clearly within the classroom, on Moodle sites, and, as will be discussed, in syllabi.

Academic Integrity Perspectives on GenAI

One specific issue consistently raised by faculty regarding GenAI is academic misconduct. This concern is consistent with research currently underway that investigates how emerging GenAI technologies like ChatGPT interact with established academic integrity principles and standards.

One central theme focuses on GenAI's implications for accuracy and misinformation. Currie (2023) critically examined GenAI's propensities towards factual inaccuracy, error, and fabrication of citations or other content details. This research documents concerns about the potential for GenAI to generate and perpetuate misinformation, which can undermine the integrity of academic work. Currie (2023) also discusses the importance of transparency when using GenAI in research, including explaining the role the technology played in producing the work. Accordingly, faculty should similarly define how GenAI may be used in class, which is the goal of the transparency and syllabus statements that follow.

Another key area of focus is the re-evaluation of notions of integrity, quality, authorship, and plagiarism norms in light of GenAI's capabilities. Currie (2023) and Eaton (2023) argue that historical definitions of plagiarism require reassessment to account for AI co-creation and ubiquitous hybrid human-AI writing. The resulting ambiguities around ethical authorship and quality standards underscore calls by these authors for developing updated ethical frameworks and supportive policies to guide appropriate assimilation—the goal of these guidelines.

Finally, an emerging perspective introduced by Eaton (2023) is the notion of a "post-plagiarism" era. As human-AI co-creation of content becomes normalized, Eaton argues that historical definitions of plagiarism may no longer apply. Wholly new ethical frameworks around integrity, authorship norms, and technology's role in teaching and learning would need to emerge. This post-plagiarism lens underscores the pressing need to reimagine policies, standards, and assumptions to account for emerging tech like GenAI. As technologies enabling seamless hybrid writing and creative collaboration advance, maintaining academic integrity is dependent upon reassessing ideas of quality, effort, and plagiarism liability in light of GenAI's capabilities. Rather than policing ever-advancing technologies, Eaton (2023) advocates focused research efforts to outline new integrity guardrails befitting a future of ubiquitous GenAI integration into core academic functions.

The themes and perspectives emerging from current scholarship on GenAI and academic integrity highlight the complex challenges and opportunities presented by these technologies. As we navigate this rapidly evolving landscape, it is essential to engage in ongoing research and dialogue to ensure that our academic integrity frameworks remain relevant and effective in the age of GenAI.

Faculty Perceptions of GenAI at Concordia

A recent survey conducted by the CTL at Concordia University has shed light on faculty perceptions regarding GenAI integration. The survey, based on a well-established model (the Technology Acceptance Model; Davis & Venkatesh, 2000), examined which factors influence faculty members' intentions to use GenAI in their teaching. The results indicate that educators who believed GenAI would be useful for teaching and learning were more likely to plan on adopting it. However, concerns about academic integrity played a key role, with results from a hierarchical multiple regression revealing that faculty were less likely to use GenAI if they believed it would enable cheating or plagiarism.

The survey revealed diverse faculty concerns regarding the pedagogical use of GenAI, including academic misconduct, diminished critical thinking, copyright infringement, data privacy, labor issues, environmental impact, and bias perpetuation. These wide-ranging issues highlight the complexity of integrating GenAI into education and emphasize the need for a comprehensive approach that transcends technical training, addressing both the pedagogical benefits and ethical challenges.

Bridging the Gap Between Pedagogy and Ethics

The faculty survey insights underscore the complex landscape of GenAI integration in higher education, with perspectives ranging from maximizing pedagogical potential to prioritizing ethical considerations like data privacy and bias mitigation. Navigating this multifaceted terrain requires a balanced approach that addresses both pedagogical and ethical aspects without compromising either.

The Generative AI Ethical Foundation Principles in Teacher Education (GENAIEF-TE) framework, proposed by Radwan and McGinty (2024), provides a comprehensive solution. By incorporating five key principles, the framework empowers faculty to harness the pedagogical benefits of GenAI while addressing ethical concerns, ensuring a holistic approach to integration. Please note that each principle of the framework below contains links to resources for more information:

1. Transparent Accountability: Ensuring clarity and openness in the decision-making processes involving GenAI applications, and holding entities accountable for the outcomes of these applications (note that academic integrity discussions can be embedded here).

2. Privacy and Secure Data Management: Safeguarding the personal and sensitive information of educators and learners, ensuring transparent and consensual data collection and processing, and prioritizing privacy in the creation and use of AI-generated instructional content.

3. Transparent Data and Algorithmic Literacy: Promoting stakeholders' understanding of data-related concepts, transparency in data practices, and the development of algorithmic literacy to enable informed decision-making and critical engagement with GenAI technologies.

4. Culturally Sensitive and Inclusive Fairness: Ensuring that GenAI applications are fair and equitable to all users regardless of their cultural or social background, mitigating biases and ensuring inclusivity.

5. Pedagogy-Centered Design: Enhancing the learning experience by aligning GenAI applications with educational objectives and pedagogical practices.

By adhering to this framework, the CTL can focus on each principle individually while maintaining a holistic view of GenAI integration. This approach allows for targeted pedagogical efforts in specific areas without losing sight of the other essential ethical components, aligning with both Radwan and McGinty’s (2024) suggestions and the concerns expressed by Concordia faculty. A key starting point is for faculty to clearly define how GenAI is to be used in their courses, which is the goal of the transparency and syllabus statements that follow.

Transparency Statements

In line with the GENAIEF-TE framework’s focus on transparent accountability, an important direction is to promote the use of educator and learner transparency statements. These statements are designed to foster open communication and set clear expectations regarding the use of GenAI in the classroom, including space for reflection and change as the semester progresses. To date, much of the discussion about transparency has been learner-facing; this one-directional framing highlights the need to create a dialogue centered on mutual responsibility in the use of GenAI.

Educator Transparency Statement: An educator transparency statement, as suggested by Radwan and McGinty (2024), is a detailed explanation provided by the educator to learners outlining how GenAI tools will be used in the course. This may include information on how GenAI will be employed to develop curriculum, create assessments, and provide feedback. The statement should also address any potential limitations or risks associated with the use of GenAI in the course.

Learner Transparency Statement: A learner transparency statement, conversely, is a detailed explanation provided by the learner to the educator outlining how they plan to use GenAI tools to complete assignments and engage with course material. This statement should include information on the specific tools being used, the purpose of their use, and how the learner intends to integrate the output generated by GenAI into their work. The statement should also acknowledge any potential limitations or risks associated with the use of GenAI in their learning process.

The use of educator and learner transparency statements aligns with the broader goal of fostering a culture of transparency and accountability in the use of GenAI in higher education. By openly communicating their intentions and expectations regarding the use of GenAI, both educators and learners can work together to ensure that these tools are being used in an ethical and responsible manner.

It is important to note that these transparency statements are not intended to be binding contracts, but rather serve as a starting point for ongoing dialogue and collaboration between educators and learners. As our understanding of the role of GenAI in education continues to evolve, it is essential that we remain open to adapting and refining these statements to meet the changing needs of our educational community.

Syllabus Statements

Faculty will still likely have questions about what they should state in their syllabus about GenAI use. On this basis, this document provides sample syllabus statements on GenAI use for professors to adapt. Customizing the samples can promote responsible use of GenAI by learners based on specific course learning outcomes and the demands of specific assignments or activities. Transparency about expectations and limitations is key to leveraging GenAI tools successfully. Clear syllabus statements are crucial for setting consistent standards, mitigating misuse, and fostering the ethical adoption of GenAI.

The sample syllabus statements below provide guidance on the following topics:

  • General Statements
  • Misuse
  • Constraints
  • Acknowledging GenAI
  • Prohibiting GenAI Use
  • GenAI Detection

General Statements

If you are allowing GenAI use in your course, here are some ideas for opening general statements about the use of AI tools (adapted from the University of Toronto):

  • Learners are encouraged to make use of technology, including generative artificial intelligence tools, to contribute to their understanding of course materials, under the circumstances outlined below.
  • To achieve favorable results with generative AI, it is essential to invest time in building knowledge of the target subject and refining prompts, as this enables learners to generate more useful output and to validate its accuracy and relevance to the topic at hand.

Misuse 

You may also consider adding statements about misuse, such as the following:

  • Material drawn from ChatGPT or other AI tools must be acknowledged; representing as one’s own an idea, or expression of an idea, that was AI-generated will be considered an academic offense (on acknowledgement, see below).
  • Only some uses of ChatGPT or other AI tools are permitted. Prohibited uses and/or not sufficiently acknowledging use will be deemed misconduct under Concordia’s Academic Code of Conduct. Learners who engage in these behaviours may be charged under Articles 18 (general cheating/plagiarism/dishonest behavior) and 19a (plagiarism) of the Code.
  • In this class, submitting assignment work or GenAI outputs that contain incorrect information related to class concepts, inappropriate responses to assignment prompts, or details that you are unable to explain or discuss in detail is considered a misuse of GenAI.

Constraints

Clearly identify constraints in your statements, as in the following examples based on the University of Toronto syllabus language document (adapted and expanded):

  • Learners may use artificial intelligence tools for generating ideas, creating an outline for an assignment, or polishing language, but the final submitted assignment must be the learner’s own work.
  • Learners may not use artificial intelligence tools for completing exams, writing research papers, or completing other course assignments, including creative assignments, posts to discussion forums, or smaller writing assignments. However, these tools may be useful when gathering information from across sources, assimilating it for understanding, improving writing, and even in the early stages of developing what they will produce for the assignment.
  • For multimedia projects, learners may use AI to generate initial drafts or components of their work. However, the final submission must include significant original input and transformation by the learner, demonstrating their personal creativity and understanding.
  • Learners may not use artificial intelligence tools for major assignments in this course, but learners may use generative AI tools for smaller assignments.
  • Learners may use the following, and only these, generative artificial intelligence tools in completing their assignments for this course: .... No other generative AI technologies are allowed to be used for assessments in this course. If you have any question about the use of AI applications for course work, please speak with the educator.

Acknowledging GenAI 

If GenAI use is allowed, learners must be instructed on proper acknowledgement. The following are helpful additional statements from the University of Toronto:

  • Learners must submit, as an appendix with their assignments, any content produced by an artificial intelligence tool, and the prompt used to generate the content.
  • Learners may choose to use generative AI tools as they work through the assignments in this course; this use must be documented in an appendix for each assignment. The documentation should include what tool(s) were used, how they were used, and how the results from the AI were incorporated into the submitted work.
  • Any content produced by an artificial intelligence tool must be cited appropriately. The MLA and APA are now providing information on citing generative AI.
  • Learners must provide detailed documentation of any AI-generated multimedia content used in their assignments. This includes specifying the tools used, the extent of AI involvement, and how the content was modified or integrated into their final submission. All AI-generated components must be clearly cited and acknowledged.
  • Generative AI may be used to draft some of the writing/phrasing, but cannot simply be cut-and-pasted, and can constitute no more than 25% of the text. You must also specify what work is your own and what comes from AI (e.g., by color-coding AI produced text or ideas).
  • Learners must acknowledge how GenAI-assisted content was created by documenting their workflow as well as the prompts used.

Prohibiting GenAI Use

While some educators may opt to prohibit GenAI entirely, this approach can be difficult to enforce in practice. As an alternative, we recommend focusing policies on setting clear thresholds and expectations around responsible use. However, for educators who still find a need to prohibit GenAI use (e.g., for placement exams), here are some sample syllabus statements:

  • The use of generative AI tools is prohibited for all assignments in this course/exam. Their use in this course will constitute a violation of the Academic Code of Conduct.
  • The use of AI tools like ChatGPT is prohibited for all items on this placement course/exam.
  • Prohibited uses of ChatGPT or other AI tools will be deemed misconduct under Concordia’s Academic Code of Conduct. Learners who engage in these behaviours may be charged under Articles 18 (general cheating/plagiarism/dishonest behavior) and 19a (plagiarism) of the Code.

Again, restrictive policies are very challenging to monitor and enforce effectively. They also limit opportunities to actively engage learners in learning about responsible and ethical AI use. As detailed below, we also strongly discourage the use of GenAI detectors, given their unreliability. We instead encourage focusing on transparent syllabus statements that empower learners to harness these technologies as learning aids while upholding academic integrity.

GenAI Detection

One of the major concerns of faculty continues to be the misuse of GenAI, whereby learners use the tool to do their work and consequently do not engage in the required learning. One response may be to inquire about the possibility of AI detectors, but it should be noted that online detectors, like GPTZero, are known to be unreliable, commonly producing both false positive and false negative results. Furthermore, as informed by Concordia Legal, there are serious privacy concerns surrounding the use of such detectors. At the moment, Concordia has not approved or acquired any online AI detectors, and their use by staff or faculty is therefore not permitted due to privacy laws and regulations (more about the University’s obligations concerning Privacy & Protection of Personal Information). To learn more about the University’s obligations when acquiring or using new software that could capture and/or share personal information contained in learner work, you may consult the Privacy Impact Assessment, and for information about available resources, please consult the IITS Service Catalogue.

GenAI Misuse and Exemplary Use Checklists

As discussed above, identifying and evaluating potential misuse of GenAI requires a balanced approach. To this end, the following checklist (see Table 1) aims to provide instructors with a three-level scoring system for detecting potential misuse. The levels range from potential evidence to strong evidence, with each factor weighted by point values. While not definitive, the checklist, based on the factors discussed above, can aid in identifying assignments that merit further investigation or discussion with students. It is recommended that instructors review this list with students at the beginning of the term so students can better understand the boundaries for GenAI use in their coursework.

Table 1: Checklist for Detecting Potential Misuse of GenAI

Level | Factor | Points | Check
1 - Potential evidence | Writing is overly broad or generic | 1 |
1 - Potential evidence | Departure from student's usual style | 1 |
1 - Potential evidence | Lacks specificity related to class content | 1 |
2 - Moderate evidence | Incorrect information related to class concepts | 2 |
2 - Moderate evidence | Inappropriate responses to assignment prompts | 2 |
2 - Moderate evidence | Overly polished writing beyond student's abilities | 2 |
3 - Strong evidence | Student unable to explain or discuss work in detail | 3 |
3 - Strong evidence | Text that includes fabricated references | 3 |
3 - Strong evidence | Student admits to GenAI misuse | 3 |

Note. Please consider the following ranges: Minimal evidence (0-2 points), moderate evidence (3-5 points), and strong evidence (6+ points).

Exemplary Use Checklist

In addition to mitigating misuse, it is essential to define and encourage exemplary use of GenAI. Accordingly, the following checklist (see Table 2) ranges from developing skills to mastery, with each factor weighted by point values. This checklist can aid in identifying student work that demonstrates proficient to advanced integration of AI as a resource. It is similarly recommended that instructors review this rubric with students at the beginning of the term so that students understand expectations for GenAI use in their coursework. The rubric provides a balanced method for recognizing students who use GenAI to enhance their learning and original thought.

Table 2: Checklist for Detecting Exemplary Use of GenAI

Level | Factor | Points | Check
1 - Developing | Writing reflects student's voice and style | 1 |
1 - Developing | Specific details related to class concepts | 1 |
1 - Developing | Student can explain in own words | 1 |
2 - Competent | Accurate information related to course content | 2 |
2 - Competent | Appropriate responses to prompts | 2 |
2 - Competent | Writing quality matches student's abilities | 2 |
3 - Mastery | Student cites AI as resource appropriately | 3 |
3 - Mastery | Student understands and can discuss work | 3 |
3 - Mastery | Writing shows original thought and effort | 3 |

Note. Please consider the following ranges: Developing (0-3 points), Competent (4-7 points), and Mastery (8+ points).
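For instructors or CTL staff who wish to tally these checklists programmatically (for example, when reviewing several submissions), the short Python sketch below illustrates the arithmetic behind Table 1: each checked factor contributes its point value, and the total is mapped to the ranges given in the note above. The factor weights and ranges come directly from the table; the function and variable names are illustrative only, and the same pattern applies to the exemplary use checklist in Table 2 with its Developing (0-3), Competent (4-7), and Mastery (8+) ranges.

```python
# Illustrative sketch of the checklist scoring arithmetic (Table 1).
# Factor weights and score ranges are taken from the table and note above;
# function and variable names are hypothetical.

MISUSE_FACTORS = {
    "Writing is overly broad or generic": 1,
    "Departure from student's usual style": 1,
    "Lacks specificity related to class content": 1,
    "Incorrect information related to class concepts": 2,
    "Inappropriate responses to assignment prompts": 2,
    "Overly polished writing beyond student's abilities": 2,
    "Student unable to explain or discuss work in detail": 3,
    "Text that includes fabricated references": 3,
    "Student admits to GenAI misuse": 3,
}

def score_misuse(checked_factors):
    """Sum the points for the checked factors and map the total to a range."""
    total = sum(MISUSE_FACTORS[f] for f in checked_factors)
    if total <= 2:
        label = "Minimal evidence"
    elif total <= 5:
        label = "Moderate evidence"
    else:
        label = "Strong evidence"
    return total, label

# Example: two level-1 factors and one level-2 factor give 4 points,
# which falls in the "Moderate evidence" (3-5 points) range.
print(score_misuse([
    "Writing is overly broad or generic",
    "Lacks specificity related to class content",
    "Incorrect information related to class concepts",
]))
```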

Best Practices

While integrating AI tools into the classroom presents opportunities, it requires thoughtful guidance to uphold ethics and academic integrity. These best practices provide concrete recommendations across key areas including faculty support, student training, and promoting ethical AI use. The best practices aim to foster responsible AI adoption that enhances pedagogy without compromising rigor.

Faculty Support

Provide comprehensive support for faculty members about the integration of GenAI tools in their courses. Address the benefits, challenges, and ethical considerations associated with AI use. Offer workshops, resources, and articles to help instructors make informed decisions on how to effectively incorporate AI while maintaining academic integrity. 

Student Support

Faculty need to equip students with the knowledge and skills needed to responsibly engage with generative AI tools. Develop modules that emphasize the ethical use of AI, including proper citation of AI-generated content. Incorporate interactive sessions where students can practice using AI tools effectively, acknowledging their usage, engaging in best practices, and avoiding potential pitfalls related to academic misconduct.

Promote Ethical GenAI Use

Foster discussions on ethical AI use within the student community. Engage them in conversations about potential biases in AI tools, the importance of acknowledging AI assistance, and the implications of misusing AI-generated content. Encourage students to reflect on the ethical considerations of relying on AI and to make responsible choices.

Promote Reflective Practices

Incorporate a preliminary summary of intended AI tool use before assignments and a reflective piece afterward. Encourage students to assess the role and impact of AI tools on their learning process and assignment outcomes. This time also allows students to have discussions about prompts and other productive uses of GenAI in their work. This reflective practice fosters a deeper understanding of AI's influence on their educational journey.

Concluding Statement

As our understanding of AI develops along with new tools and techniques, we recognize the need to continuously reassess best practices for responsible classroom integration. These guidelines will therefore be reviewed at least once per academic term to ensure they remain current amidst this rapidly evolving situation. We aim to provide guidance that balances GenAI's potential and risks in order to enhance pedagogy while promoting ethical use and academic integrity at Concordia University.

Dr. Mike Barcomb, Centre for Teaching and Learning, is leading the initiative to provide ongoing support to the Concordia community, including the development of these guidelines. Faculty are encouraged to contact him directly if they have questions about the pedagogical application of GenAI in teaching and learning. The sample syllabus statements provided above were, by and large, originally developed by Dr. Naftali Cohn, Chair of the Department of Religions and Cultures at Concordia University. We thank Professor Cohn for contributing his perspectives on integrating GenAI technologies into university classrooms.

FAQ for Faculty

Should I simply ban the use of GenAI in my courses?

Given that GenAI can perform functions that would affect the outcome of almost any university assignment, one response is to consider banning its use. While this may be a reasonable course of action in certain circumstances, such as establishing a standardized score for a specific skill, it should not be viewed as a comprehensive response to GenAI in higher education. Instead, we recommend determining how to include AI in the classroom and establishing best practices for its use.

Can I use GenAI detection tools to check student work?

The CTL does not currently recommend the use of GenAI detectors due to their tendency to produce unreliable results. An additional concern is that detectors may unfairly flag the writing of second-language speakers as being produced by GenAI. For these reasons and more, we do not recommend or support the use of GenAI plagiarism detectors. We note, however, that this is a quickly evolving field and that this may change in time.

Does Concordia have an official policy on GenAI?

The university does not currently have an official policy regarding the use of GenAI. One reason is that GenAI has not officially been adopted by the university, meaning that its use cannot be required in coursework. A second, more crucial reason is that researchers and practitioners alike are still trying to determine the best way to integrate AI in the university classroom. To learn more about how to navigate this evolving development while maintaining academic integrity, please refer to the International Center for Academic Integrity or the European Network for Academic Integrity.

How can I adapt my assessments in light of GenAI?

Given that GenAI can perform a range of functions that would affect the outcome of almost any university assignment, reconsidering assessments is an important direction. These efforts should not only focus on grading but also actively involve students in work that facilitates assessment and neutralizes the potential for the misuse of GenAI:

  • Process-oriented approaches: Students document the differences across drafts of a paper, highlighting what they did and the effect it had on their work.
  • Pre- and post-assignment reflections: Students make statements about GenAI use in an assignment and then follow up, which can also serve as a basis for discussions about best practices, prompts, etc.
  • Spontaneous, live discussions: Students have opportunities to competently present, pose, and answer questions in a live setting, where using GenAI is not possible.

Professors play a key role in guiding students on how to use GenAI tools ethically and effectively within their specific field or context.
