
Generative AI in Higher Ed

Ethical Considerations

If you choose to use GenAI tools or other AI technology, it's important for you to be aware of related ethical concerns regarding privacy, intellectual property, academic honesty, bias, human labor, and environmental issues. 

Academic Honesty

Some of the ethical considerations around GenAI and academic honesty are highlighted below:

  • Can I use a GenAI tool for class assignments? CSB and SJU do not have a "one-size-fits-all" institutional policy regarding GenAI use. What is allowed or required in one class may be prohibited in another, and some instructors will consider any unauthorized/undisclosed use of GenAI tools as potential academic misconduct. Look for statements on GenAI use in your course syllabi, and if you have any questions, check with each of your instructors about their expectations regarding student use of GenAI tools. (Note that an instructor's stance might vary depending on the goals for a particular course or even specific assignments - so it's good to check!) 
  • Should I cite a GenAI tool? If your instructor allows you to use GenAI tools for an assignment, ask for instructions on how to cite or provide attribution for any quotes, paraphrasing, and/or ideas you get from them. Some instructors might expect you to provide the prompts you used, explain when, where, and how you used a GenAI tool in your assignment, and/or turn in the full response(s) the GenAI tool generated.  
  • Can I trust GenAI content? GenAI tools are frequently trained to sound confident - even when their output might be inaccurate or factually wrong! They can also make up credible-sounding citations for sources - sometimes incorporating actual researchers' names or a real journal title - for journal articles or books that have never been written or published. This phenomenon is called "hallucinating" or "fabricating." If a GenAI tool's output includes references to sources that you're planning to use for an assignment, make sure to track down the actual citation first. If you aren't sure how to locate the full text for any of these sources, ask a librarian for help.
  • What's the "source" of my information? Most big-name GenAI tools are trained on a "black box" of data, which means we don't know which information sources were used in their training, or whether those sources were generally reliable. Note that GenAI tools also aren't "intelligent" - even though it sounds like they're having a conversation with you, they use algorithms to predict word choice. Do some lateral reading (checking to see if GenAI output matches what other, trusted sources are saying) and compare the quality and accuracy of the GenAI tool's output against other more reliable, transparent, or trusted sources before "taking it at its word."  

Environmental and Sustainability Concerns

Some of the ethical considerations around GenAI and the environment are highlighted below:

  • Energy consumption: Both training GenAI models and using them in our day-to-day lives currently takes a lot of computational power. The data centers needed to train and support GenAI tools require huge amounts of electricity.
  • Water use: GenAI data centers also require a lot of water to sufficiently cool hardware.
  • Emissions: Extracting the materials used to make hardware components and transporting these materials/products can involve environmentally destructive mining, the use of toxic chemicals, and a large carbon footprint. 


Further Reading:

The Ethics of AI

Codes of ethics for AI frequently highlight issues of accountability, trust, transparency, fairness, and agency.
The video "AI Ethics" outlines five key principles for ethical artificial intelligence (including GenAI):

  • Beneficence: AI should be developed and applied to improve the well-being of our planet and its people.
  • Nonmaleficence (or "do no harm"): AI should avoid harming privacy, autonomy, or employability.
  • Autonomy: Our ability to act freely and independently must be preserved and promoted, while the autonomy of machines must be restricted.
  • Justice: AI must be developed, designed, and deployed in ways that promote justice, fairness, equity, and other related values.
  • Explicability: To promote the other principles, we need to know the "how" and "why" of AI systems and products; this allows us to hold the correct groups responsible for the positive and negative impacts of AI. Accountability and intelligibility are key.