PHYS 280 :: Physics Illinois :: University of Illinois at Urbana-Champaign
Section 4: AI Policy & Augmented Authorship
PHYS 280 adopts an "Augmented Authorship" protocol. Generative AI tools (e.g., ChatGPT, Claude, Gemini) represent a transformative technology for technical writing. When used ethically and transparently, these tools can increase your creativity and productivity by helping you identify overlooked issues, generate alternative approaches, and refine your prose. However, AI cannot replace the critical thinking, domain expertise, and professional judgment required in nuclear policy analysis. This policy is designed to help you learn how to use AI as a tool that augments rather than replaces your authorship, preparing you for careers where AI literacy and critical thinking are both essential skills.
We view Generative AI as a calculator for thought: it can perform operations (grammar checks, summarizing, code debugging), but you must set up the problem and interpret the results. AI cannot be an author, and you remain 100% responsible for every word submitted.
Best Practices: Do's and Don'ts
| DO'S (Encouraged Best Practices) | DON'TS (Risky Actions) |
| --- | --- |
| Revising & Editing: Using AI to improve sentence structure, fix grammar, or suggest concise phrasing for your own draft. | Drafting from Scratch: Asking AI to write the essay, introduction, or entire paragraphs for you. The core analysis must be yours. |
| Formatting Citations: Providing the AI with your raw references and asking it to format them in a specific style (e.g., APA, Chicago). | Generating Citations: Asking AI to find sources or create a bibliography from scratch (high risk of hallucinated/fake papers). |
| Brainstorming: Generating outlines, counter-arguments, or exploring topic angles to overcome writer's block. | Unverified Content: Submitting AI-generated facts, dates, or numbers without verifying them in primary sources. |
| Summarization: Using AI to summarize long texts to aid your understanding of complex material. | Visuals: Using AI to generate scientific figures or charts. |
| Feedback: Asking AI to critique your draft against specific criteria or rubrics. | Co-Authorship: Listing AI as an author or failing to disclose its use. |
Required Best Practices for AI Use in PHYS 280
When using AI tools, you must adhere to the following three rules to ensure academic integrity and professional development:
- Disclosure Statement: Transparency is key. You must include a standard disclosure statement at the end of your assignment (e.g., "During the preparation of this work, I used [Tool Name] for [Purpose]...").
- Process Documentation (Writer's Memo & Logs): You must document your workflow. As part of your submission (e.g., in the Writer's Memo), describe how you used the tool. Additionally, you must save the entire conversation log (e.g., use the "Print to PDF" function in your browser to save the full chat history) and be prepared to submit it as an appendix. This validates your "Augmented Authorship" process.
- Fact-Checking (Zero Tolerance for Hallucinations): Generative AI is known to fabricate facts, data, and texts. It may return incorrect figures for uranium properties just as easily as it may invent a non-existent article of the Non-Proliferation Treaty (NPT). In this course, citing a hallucinated treaty clause or incorrect physical data is a major academic and professional failure. You remain 100% responsible for the accuracy of every word you submit.
Academic Integrity Violations
Undisclosed Use: Using AI without the required Disclosure Statement violates this course's AI policy and constitutes academic dishonesty under the university's general academic integrity standards (Article 1, Part 4 of the Student Code).
Hallucinations (Fabricated Information): Generative AI is known to fabricate information across many domains. For example, it might invent a non-existent article of the Non-Proliferation Treaty (NPT), cite a fake study on uranium enrichment, or provide incorrect physical properties of nuclear materials. The hallucination problem is particularly severe in specialized technical fields where AI lacks deep expertise. In this course, citing hallucinated treaty clauses, fabricated scientific studies, or incorrect technical data represents a major academic and professional failure. Such errors will result in a deduction of 15 points per occurrence, and egregious cases may be reported as academic misconduct.
Loss of Voice: Submitting work that is entirely homogenized by AI, lacking your unique analytical voice and perspective, may be returned for revision or penalized.
Why These Rules Matter
- The "Author" Problem: AI cannot be an author because it cannot be held legally or ethically responsible. If your policy recommendation violates international law because of an AI error, you are 100% responsible. For instance, if your memo mischaracterizes a key provision of the Chemical Weapons Convention due to AI-generated misinformation, you bear full accountability for that error. "The AI told me so" is never a valid excuse in professional or academic contexts.
- De-Skilling: If you use AI to summarize every reading or draft every paragraph, you lose the ability to critically analyze dense nuclear texts yourself. This leads to a superficial understanding of a field that demands deep expertise.
