In Teaching in the Age of AI, we argued that meaningful integration of generative AI must begin with pedagogy rather than technology. That principle becomes especially important when we turn to assessment. Assessment shapes what students prioritize, how they allocate effort, and how they define success. As generative AI changes how students draft, revise, and generate work, long-standing assessment practices are being tested in new ways. Rather than respond with restriction or uncertainty, educators can use this moment to revisit the purpose of assessment itself and to consider how both formative and summative practices can remain meaningful, aligned with learning goals, and guided by pedagogy when AI is part of the teaching and learning environment.
Why Assessment First?
Tensions around generative AI often surface first in assessment. Questions about academic integrity, originality, and fairness tend to arise in graded work, even as AI influences how students prepare, practice, and learn. Because assessment signals what is valued in a course, it plays a powerful role in shaping student behavior. Beginning this series with assessment allows educators to examine how grading, feedback, and evaluation can better reflect what we truly want students to learn: critical thinking, application, reflection, and growth, even as we acknowledge the realities of AI-supported learning.
Revisiting Formative and Summative Assessment
To understand how assessment can evolve in the age of AI, it is helpful to revisit the distinction between formative and summative assessment. While both play important roles, they serve different purposes in the learning process.
Formative assessment is designed to support learning as it unfolds. It includes activities such as drafts, practice problems, reflections, and low-stakes checks for understanding. In the context of generative AI, formative assessment offers opportunities to help students practice skills, receive feedback, and reflect on their thinking without the pressure of grades. When aligned with clear learning goals, formative assessment can encourage students to use AI as a learning support rather than a shortcut.
Summative assessment, by contrast, is intended to evaluate learning at a particular point in time. Exams, final projects, presentations, and major papers often carry higher stakes and greater consequences. As generative AI becomes more capable, summative assessments are where many educators feel the greatest uncertainty. This uncertainty invites a deeper question: not simply how to prevent inappropriate AI use, but how to design summative assessments that emphasize application, reasoning, and decision-making, outcomes that remain closely tied to human judgment.
Formative Assessment in the Age of AI
Formative assessment is often described as assessment for learning rather than assessment of learning. Its purpose is to support students as they develop understanding, practice skills, and make sense of new ideas. For these reasons, formative assessment is a natural place for thoughtful integration of generative AI. Because formative activities are typically low stakes, they offer space for experimentation, feedback, and reflection: key elements of learning that AI can strengthen rather than undermine.
In practice, AI can assist students in generating practice questions, revising drafts, or testing their understanding of concepts before submitting work for evaluation. The emphasis remains on the learning process: making thinking visible, identifying misconceptions, and refining ideas over time.
Examples of formative assessment with AI include:
- Drafting and Revision: Students may use AI to receive feedback on early drafts of writing assignments. Rather than submitting AI-generated text as final work, students can ask for suggestions related to clarity, organization, or argument development, then decide which revisions to apply. This positions AI as a feedback partner while keeping students responsible for their thinking and choices.
- Reflection and Metacognition: AI can support reflective prompts that encourage students to explain their reasoning, compare approaches, or articulate what they found challenging. For instance, students might be asked to reflect on how AI-assisted feedback influenced their revision process or what they learned by comparing their initial response with an AI-generated one.
In each of these examples, the value of AI lies not in automating learning but in extending opportunities for practice and reflection. The educator’s role remains central: setting expectations, framing appropriate use, and helping students interpret feedback meaningfully. When formative assessment is designed with intention, generative AI can support deeper engagement with learning goals while reinforcing the principle that pedagogy, and not technology, guides assessment decisions.
Summative Assessment in the Age of AI
Summative assessment serves a different purpose than formative assessment. Rather than supporting learning as it unfolds, summative assessment is designed to evaluate learning at a particular point in time. Final exams, major projects, presentations, and portfolios typically carry higher stakes and have a significant impact on course outcomes.
In the age of AI, the challenge of summative assessment is not simply determining whether AI was used but ensuring that assessments are aligned with what educators truly want students to demonstrate. When summative tasks focus primarily on producing information, AI assistance can blur the line between student understanding and generated output. Educators can respond by revisiting the design of summative assessments and considering how they can emphasize application, reasoning, decision-making, and reflection, outcomes that remain closely tied to human judgment.
Examples of summative assessment approaches in the age of AI include:
- Applied and Contextualized Tasks: Rather than asking students to reproduce content, summative assessments can require them to apply concepts to specific contexts, cases, or problems. Students might analyze a scenario, make decisions based on evidence, or justify choices using course concepts. These tasks focus on thinking and reasoning, even if AI is used as a support during the process.
- Process-Oriented Projects: Summative assessments can incorporate checkpoints or reflective components that ask students to document how their work developed over time. Students may be asked to explain their approach, describe challenges they encountered, or reflect on how feedback (human or AI-assisted) influenced their final product. This shifts attention from the final artifact alone to the learning process behind it.
Across these examples, the goal is not to eliminate AI from summative assessment, but to design assessments that make learning visible. Clear expectations about appropriate AI use, paired with tasks that emphasize judgment and interpretation, help ensure that summative assessment remains meaningful and fair. When summative assessments are thoughtfully designed, generative AI becomes a consideration in the learning environment rather than a threat to assessment integrity.
Assessment, Pedagogy, and Purpose
Whether formative or summative, assessment sends powerful signals about what is valued in learning. In the age of generative AI, those signals matter more than ever. When assessment is guided by pedagogy, educators can design experiences that emphasize thinking, reflection, and application rather than production alone. Formative assessment offers space for practice and feedback, while summative assessment provides opportunities for students to demonstrate understanding in meaningful and authentic ways.
Revisiting assessment through a pedagogy-first lens does not require abandoning long-standing practices, but it does invite intentional design. As generative AI becomes part of the learning environment, assessment choices should reflect what educators want students to learn, how they want them to learn it, and how understanding can be made visible. In this way, assessment remains a tool for learning rather than a barrier, reinforcing the central message of this series: pedagogy comes first, and technology should serve to extend, not replace, good teaching.
Looking Ahead
Assessment and feedback are closely connected. While assessment signals what is valued in learning, feedback shapes how students respond to that signal as they revise, reflect, and move forward. As generative AI becomes part of the assessment landscape, questions about feedback, including its timing, purpose, and impact, become increasingly important. In the next article, Feedback in the Age of AI, we will explore how educators can design feedback practices that remain meaningful, personal, and supportive of learning, even when AI tools are involved.
__________________________________________________________________________
Washington, G. (2025, December 31). Assessment in the Age of AI [Blog post]. Retrieved from https://pedagogybeforetechnology.blogspot.com/
