Self-Regulated Personal Contracts as a Harm Reduction Approach to Generative AI in Undergraduate Programming Education
Aadarsh Padiyath, Jessica Shen, Barbara Ericson
Abstract
Students learning programming exercise agency in deciding when and how to use GenAI tools like ChatGPT. However, this agency is often implicit, shaped by deadline pressure and peer behavior rather than by explicit, conscious learning goals. We designed a GenAI Contract grounded in harm reduction and self-regulated learning theory to scaffold intentional decision-making: students articulated personal learning goals, created usage guidelines, and reflected on their alignment with those guidelines at strategic points across an eleven-week semester. The contract was non-binding and graded only for completion, emphasizing self-awareness over enforcement. We implemented it with N=217 students in an intermediate Python course. For students still forming their relationship with GenAI, the contract worked: 58% of students reported that the intervention changed their thinking and created helpful accountability structures. However, awareness did not always translate into sustained behavior change. Some students who valued their guidelines still abandoned them under pressure. Maintaining guidelines required constant self-control across hundreds of decisions, while using GenAI freely required none; many students could not sustain this burden despite their self-awareness. We discuss supporting student agency when GenAI tools and learning goals create tension.
