The USM Kirwan Center for Academic Innovation invites faculty, staff, and faculty/staff/student teams from accredited Maryland institutions of higher education to learn about innovative approaches to incorporating Generative AI into teaching and learning practices at our Spring 2026 Generative AI Virtual Showcase. This statewide event will provide a platform to share promising practices, critical insights, and lessons learned as we continue to navigate the evolving landscape of AI in higher education.
Location
All event activities will take place via Zoom.
Date
The Spring 2026 Generative AI Virtual Showcase will take place on Friday, April 24, 2026, with limited-enrollment, virtual pre-conference workshops taking place on Thursday, April 23, 2026.
Cost
The showcase and pre-conference workshops are free. The events are open to faculty and staff from accredited Maryland higher education institutions.
Thursday, April 23, 2026
- 1:00-3:00 pm ET: Pre-conference workshops (virtual; enrollment limited to 30 participants each; workshops are now full and the waitlist is closed)
Friday, April 24, 2026
- 9:00-9:15 am ET: Welcome and Opening Remarks
- 9:20-10:20 am ET: Concurrent Session 1
- 10:30-11:30 am ET: Concurrent Session 2
- 11:40 am-12:40 pm ET: Keynote Address: Carter Moulton, PhD
- 12:55-1:55 pm ET: Concurrent Session 3
- 2:05-3:05 pm ET: Concurrent Session 4
- 3:15-3:30 pm ET: Closing Remarks and Reflection
Welcome to the Spring 2026 Generative AI Virtual Showcase
This virtual event features a keynote presentation and four concurrent session blocks with 42 presentations exploring innovative approaches to teaching and learning with generative AI. Participants must register in advance to receive the Zoom link. Sessions include live auto-captions and accessible materials.
Concurrent Session 1 | 9:20 AM – 10:20 AM
Session 1A: Teaching AI Literacy and Responsible Use across the Curriculum
These presentations explore how AI literacy is being embedded into courses through structured frameworks that build critical thinking, ethical awareness, and workforce readiness.
Introducing AI to the First‑Year Curriculum
Emily Bailey, Associate Professor and Director of the Towson Seminar (TSEM), Department of Philosophy & Religious Studies, Towson University
As generative AI becomes ubiquitous, first-year students benefit from structured, proactive guidance. This session describes a comprehensive approach to embedding AI literacy into first-year curricula through ethics modules, applied assignments, and common rubrics. The presenter will share lessons from piloting institution-wide AI instruction in the first-year seminar, including strategies for faculty alignment, student engagement, and assessment. Participants will leave with practical ideas for introducing AI early in the curriculum in ways that build ethical awareness, critical thinking, and responsible use.
Using Generative AI to Explore Journalism Archives: Opportunities, Limits, and Verification
Derek Willis, Lecturer, Data and Computational Journalism, Philip Merrill College of Journalism, University of Maryland, College Park
This session examines how journalism students used generative AI to surface insights from large archives of local reporting while documenting AI limitations such as hallucinations and classification errors. By treating AI output as a starting point rather than an authority, students learned about transparency, verification, and responsible use. Participants will gain transferable approaches for teaching AI‑assisted analysis, documentation, and ethical disclosure in research‑based courses.
Decoding AI: Integrating Industry Microcredentials in a First‑Year Seminar
Collin Sullivan, Program Director for Digital Credential Innovation and Josh Abrams, Instructional Design Specialist, University of Maryland, Baltimore County
This session explores a first‑year seminar that treats AI as both subject and tool, combining foundational literacy, ethical analysis, and industry microcredentials. Participants will learn how early, structured exposure to AI can build confidence, critical thinking, and workforce awareness—offering a replicable model for integrating AI literacy into general education.
Session 1B: Writing, Feedback, and Peer Review
This session examines how AI can enhance feedback processes across writing, speaking, and clinical reasoning while preserving student voice and critical evaluation.
Beginner's Guide to Using AI for Composition Peer Review
Naomi Gades, Assistant Professor and Kristin Shimmin, Associate Professor, Department of English, Frostburg State University
This session explores two approaches to using generative AI as a peer‑review partner in writing courses. By structuring prompts and requiring disclosure, students learned to critique AI feedback rather than accept it uncritically. Participants will gain practical strategies for integrating AI into revision processes while preserving student voice and authorship.
The Study Buddy Bot
Diane Alonso, Teaching Professor, Department of Psychology, University of Maryland, Baltimore County-Shady Grove
Personalization can enhance learning without offloading cognition. This session explores a custom "Study Buddy Bot" designed to tailor practice questions to students' interests while preserving disciplinary rigor. Grounded in learning science, the bot prompts reflection, feedback, and self‑assessment rather than answers. Participants will learn design principles for building AI tools that support engagement and metacognition while respecting student choice and ethical boundaries.
Using AI-Powered Learning Tools to Support Online Public Speaking
Zach Runge, Assistant Professor, Department of Communication Studies, Harford Community College
Asynchronous students often lack access to timely performance feedback. This session examines how an AI‑enabled speech analysis tool was integrated into public speaking courses to support practice, reflection, and revision. The presenter will share lessons on using automated feedback to supplement—not replace—human evaluation. Participants will gain insights into accessible, scalable ways to support skill development in performance‑based courses.
Session 1C: Clinical Simulations
These presentations demonstrate how AI can scaffold professional judgment and reflective practice across clinical psychology, social work, and nursing.
Using Generative AI to Teach Case Conceptualization Skills in Clinical Psychology
Amanda Draheim, Assistant Professor, Department of Psychology, Goucher College
Case conceptualization requires synthesis, judgment, and feedback—skills that are difficult to scale. This session explores how generative AI was used to scaffold readings, simulate role‑plays, and provide structured feedback while maintaining ethical boundaries. Participants will learn adaptable strategies for positioning AI as a developmental support tool that strengthens reasoning, reflection, and professional identity across applied disciplines.
Using Simulation in Social Work Classes
Rachel Buchanan, Associate Professor and Becky Anthony, Associate Professor, School of Social Work, Salisbury University
This session explores the use of AI‑driven client simulations to help students practice interviewing and engagement skills in a low‑risk environment. Participants will learn how simulation supports confidence, ethical discussion, and reflective practice—offering adaptable strategies for other practice‑based disciplines.
AI in the Maternal Newborn Classroom
Lauren Pelesky, Assistant Professor, Department of Nursing, Frostburg State University
When clinical exposure is limited, simulation can bridge the gap. This session describes an AI‑supported patient simulation that allows nursing students to practice communication, assessment, and reflection in a safe environment. Participants will learn how AI‑generated feedback can support experiential learning while preserving instructor oversight and professional judgment.
Concurrent Session 2 | 10:30 AM – 11:30 AM
Session 2A: Course Design and Faculty Development
This session explores structured frameworks and workflows that help faculty use AI to design pedagogically sound courses while maintaining instructional voice and rigor.
The IDEA Framework for AI‑Assisted Course Design
Cynthia McGinnis, Professor of Mathematics, University of Maryland Global Campus
Faculty often use AI for course design without pedagogical guardrails. This session introduces a structured framework that embeds adult learning theory directly into AI‑assisted lesson planning. Participants will learn how structured prompts and verification checklists produce more aligned, inclusive, and usable instructional materials without requiring faculty to become instructional design experts.
Using Generative AI to Support Your Course Design: A Faculty Guide
John G. Schumacher, Professor, Sociology, Anthropology, and Public Health; Gerontology, University of Maryland, Baltimore County
This session provides a practical demonstration of how generative AI can support course design when embedded within a backwards‑design workflow. Rather than relying on isolated prompting, faculty will learn to use AI iteratively to align objectives, assessments, and course activities. Participants will gain repeatable, model‑agnostic workflows for using generative AI to augment design while preserving faculty voice and pedagogical intent.
Increasing Research Productivity with AI Tools and Techniques
Eric Stokan, Associate Professor, Department of Political Science and Director, Center for Social Science Scholarship, University of Maryland, Baltimore County
This session explores how faculty and students learned to integrate generative AI into research workflows at scale. Participants will gain insights into tool selection, scripting, and scaffolding that make AI use more efficient and reproducible—while avoiding overreliance and maintaining analytical judgment.
Session 2B: Students in the Loop: AI-Integrated Learning
These presentations reframe AI as a tool for deepening critical thinking, reflection, and professional readiness.
Same Class, New Approach: Reimagining GenAI in Writing Assessments
Jason R. Baron, Professor of the Practice and Elizabeth A. Pineo, PhD Student, College of Information (iSchool), University of Maryland, College Park
When students critique AI output instead of submitting it, learning improves. This session examines a redesigned writing protocol where students evaluate, challenge, and extend AI‑generated responses rather than replacing their own work. Participants will learn how process‑focused assessment design can reduce misuse, deepen analysis, and build critical AI literacy across writing‑intensive courses.
The Story of My Name: GenAI and Identity in Early Childhood Literacy
Shuling Yang, Assistant Professor, Department of Education, University of Maryland, Baltimore County
Generative AI can support identity‑centered learning when used intentionally and critically. This session highlights a digital storytelling assignment where preservice teachers use multimodal AI tools to explore personal narratives connected to their names. The presenter will share how structured reflection and ethical framing helped students maintain voice, examine bias, and build AI literacy. Participants will gain adaptable strategies for using AI to support inclusive, reflective, and creative learning across education contexts.
Beyond AI‑Proofing: Designing Rigorous, Workforce‑Ready Learning
Sharon Jumper, Professor, Cybersecurity, University of Maryland Global Campus
Attempts to "AI‑proof" coursework often undermine learning and ignore workforce realities. This session reframes generative AI as a catalyst for redesigning assessments that increase rigor rather than reduce it. The presenter will share cross‑disciplinary strategies for integrating AI into formative and summative work while emphasizing transparency, metacognition, and human judgment. Participants will gain concrete examples and design principles for aligning AI‑integrated learning with higher‑order thinking and professional readiness.
Session 2C: AI-Powered Feedback and Differentiated Learning
This session examines how AI can provide structured feedback and differentiated learning experiences that strengthen critical thinking, AI literacy, and reflective practice.
AI as a Design Thinking Partner
Andrew Mangle, Associate Professor, Management Information Systems, Bowie State University
Students often miss opportunities for deeper inquiry during interviews. This session explores a course‑specific AI tool that provides structured feedback on student interviews, highlighting assumptions and missed follow‑ups. Participants will learn how AI can function as a feedback partner to strengthen empathy, listening, and reflective practice across project‑based courses.
Design Considerations for Teaching with AI‑Hallucination‑Inducing Prompts
Randall Groth, Professor and Graduate Program Co-Director, Mathematics Education, Salisbury University
AI hallucinations are often treated as a risk to avoid—but they can also be powerful teaching tools. This session presents an iterative assignment design that uses hallucination‑inducing prompts to help students analyze accuracy, sources, and reasoning in AI output. Participants will learn adaptable design principles and rubrics that support critical thinking, verification habits, and AI literacy across disciplines.
The AI‑Augmented Instructor
Justin Bucelato, Adjunct Faculty, Cybersecurity, University of Maryland Global Campus
Faculty time pressures often limit student engagement. This session presents a practical framework for using generative AI to reclaim instructional time while embedding academic integrity safeguards. Participants will learn how structural design—not policy alone—can reduce misuse and support individualized learning across preparation levels.
Session 2D: Critical AI Literacy and Rhetorical Approaches
These presentations position AI as both subject and tool for developing critical literacy, rhetorical awareness, and intercultural competence.
Rise of the Machines(?): Human–Machine Boundaries in a Literature Course
Seth Forrest, Assistant Professor, Department of Humanities and Bianca Lawson-Johnson, Student, Coppin State University
What happens when AI is both the subject of study and a collaborator in course design? This session explores a literature course that uses generative AI to interrogate questions of consciousness, ethics, and authorship while positioning students as critical investigators. Through human–AI dialogue, students test assumptions about intelligence, empathy, and agency. Participants will learn a transferable framework for engaging AI as a thinking partner rather than an answer generator, with implications for ethical inquiry, literacy, and student agency across disciplines.
LLMs in English Departments: Teaching Rhetoric as AI Literacy
Tanya Olson, Lecturer, Department of English, University of Maryland, Baltimore County
This session reframes interactions with generative AI as a rhetorical act within language-driven systems. In UMBC’s ENGL 211, students apply audience, purpose, tone, and revision principles to AI interactions, learning to treat models as thinking partners rather than text generators. Participants will examine course-based assignments that position AI as a process-oriented tool, helping students critically shape and evaluate AI output across disciplines.
Enhancing Intercultural Learning Through AI: A Case Study with Riffbot
Alexander Breitling, Global Learning Advisor, Maryland Global / Education Abroad, University of Maryland, College Park
Discussing identity and difference can be challenging for students, particularly across cultures and languages. This session examines how a conversational AI tool was used as a rehearsal and reflection partner in an intercultural dialogue course. By scaffolding language and perspective‑taking, AI supported confidence without replacing human interaction. Participants will learn transferable strategies for using AI to enhance reflection, equity, and participation in dialogue‑based learning environments.
Keynote Address | 11:40 AM – 12:40 PM
Redesigning Our Relationships: Human-Centered Classrooms in the Age of AI
Carter Moulton, PhD, Faculty Developer, Colorado School of Mines
With the widespread impacts of generative AI tools in the classroom, educators are beginning to redesign their course policies, learning outcomes, and assessments. This interactive talk argues that there is another, often-overlooked aspect of our pedagogy that we must also reimagine to meet this moment: our relationships—those peer connections, classroom communities, and student-instructor interactions that have always anchored the work of teaching and learning. Drawing on Analog Inspiration, a card deck project featuring over 80 concepts ranging from accessibility to wonder, we'll explore what we mean by a "human-centered" AI pedagogy that prioritizes human values, concerns, and skills. Through this human-centered lens, the session will outline practical strategies for cultivating trust with students, increasing productive friction, fostering intrinsic motivation, structuring co-creation, and strengthening our human relationships. Participants will also have a chance to form new relationships of their own through a discussion of unique digital starter packs that will be delivered prior to the workshop.
Concurrent Session 3 | 12:55 PM – 1:55 PM
Session 3A: AI Virtual Teaching Assistants and Course Support
This session showcases AI teaching assistants designed to provide 24/7 support, extend instructor presence, and improve efficiency while preserving pedagogical judgment.
From Pilot to Pattern: A Generative AI Virtual Teaching Assistant MVP
Cory Stephens, Assistant Professor of Organizational Systems and Adult Health and Charlotte Seckman, Associate Professor of Organizational Systems and Adult Health, School of Nursing, University of Maryland, Baltimore
This project showcases JAIMIE, a course-embedded generative AI virtual teaching assistant (VTA) designed as a no-code MVP for graduate nursing informatics education. Using retrieval-augmented generation (RAG), JAIMIE provides 24/7 coaching grounded in specific course rubrics, APA guidelines, and practicum requirements. Pilot results (n=23) indicate high off-hours use, suggesting a just-in-time scaffolding function that supported student autonomy and reduced repetitive faculty inquiries. We present core design principles (domain specificity, transparent guardrails, and faculty modeling) to help others integrate VTAs into their courses.
Kinder, Better, Faster, Stronger: An AI Teaching Assistant
David Leasure, Portfolio Director, First‑Year Experience, University of Maryland Global Campus
This session explores an AI teaching assistant designed to support faculty communication, reflection, and course management—without grading or replacing instructor judgment. Participants will learn how structured prompts, guardrails, and optional adoption improved clarity and efficiency while preserving trust and instructional autonomy.
Scaling the "Master Teacher"
Shizuka Nakamura, Adjunct Assistant Professor, Japanese Language, Mathematics, University of Maryland Global Campus
This session introduces an AI architecture that separates content generation from pedagogical expertise, allowing faculty knowledge to be encoded and reused. Participants will learn how constraint‑based design can reduce hallucinations, ensure consistency, and democratize access to expert‑level instructional materials across courses.
Session 3B: Student Voice, Institutional Policy, and Academic Support
These presentations explore how student voice, inclusive policy, and AI-enabled support structures are shaping institutional approaches to generative AI.
Participatory, Not Punitive: Student‑Driven AI Policy Recommendations
Manisha Vijay, Student, Kaoru Seki, Student, and Yasmine Kotturi, Assistant Professor of Human-Centered Computing, Department of Information Systems, University of Maryland, Baltimore County
Through participatory design activities leading to the publication of a zine, students critically examined generative AI's role in their coursework and future professions, then translated those insights into concrete policy recommendations. Rather than treating policy as something written behind closed doors, students co-designed recommendations based on their own experiences using generative AI in class. Their zine offers practical guidance for instructors and peers and serves as a reminder that meaningful AI policy emerges from dialogue, not directives—when those most affected have a hand in shaping the rules. Participants will consider how student‑led policy design can reduce fear, clarify expectations, and promote responsible, intentional AI use across disciplines.
Travelling the Road to Developing a Comprehensive AI Policy at Coppin State University
Jeronda T. Burley, Associate Professor, Department of Social Work and Denyce Watties‑Daniels, Associate Professor and Director of Simulation and Learning Resource Centers, College of Health Professions, Coppin State University
Developing AI policy requires more than compliance—it requires trust, dialogue, and shared governance. This session examines a faculty‑led, multidisciplinary process for creating a campus‑wide generative AI policy that balances academic freedom, equity, and integrity. The presenters will share lessons learned from navigating disciplinary differences, generational perspectives, and ethical tensions. Participants will leave with actionable insights and policy elements that can inform inclusive, non‑punitive AI governance efforts at other institutions.
Next‑Gen Student Success Strategy
Lucy W. Gichaga, Retention Coordinator, Bowie State University
This session explores how generative AI supported scalable student success programming across diverse student populations. By accelerating content creation while preserving human coaching, AI extended reach and consistency. Participants will gain transferable ideas for using AI to support advising, reflection, and student success initiatives.
Session 3C: When AI Gets It Wrong: Teaching Judgment, Oversight, and Responsibility
This session explores how exposing generative AI’s errors, limits, and overconfidence across clinical, safety, and teacher-education contexts can help learners develop critical judgment, ethical awareness, and meaningful human oversight.
Diagnostic Reasoning Errors of Large Language Models in Endodontic Decision Making
Nileshkumar Dubey, Clinical Assistant Professor, School of Dentistry Division of Cariology and Operative Dentistry, University of Maryland, Baltimore
Generative AI is increasingly used for diagnostic support, yet its reasoning failures are often hidden behind fluent explanations. This session examines how large language models reason through complex, ambiguous cases and where they predictably fail. Using simulated clinical scenarios, the presenters analyzed patterns of error, overconfidence, and misclassification to illustrate why accuracy alone is insufficient. Participants will learn how failure analysis can be used pedagogically to teach critical evaluation, responsible AI use, and human oversight across professional and educational contexts.
Safety Is Not a Game—Except When It Is
Robin Shusko, Chief of Campus Police & Director of Public Safety, Frederick Community College
Scenario‑based learning helps students move from awareness to action, but designing realistic cases is time‑intensive. This session explores how generative AI can accelerate the creation of safety and emergency scenarios while preserving instructor oversight. The presenter will share design principles for using AI as a drafting partner rather than an authority. Participants will leave with adaptable ideas for applying AI‑supported scenario design in experiential and decision‑focused learning.
Thinking With, About, and Beyond AI
Shannon M. Kane, Assistant Clinical Professor and Loren Jones, Associate Clinical Professor, Department of Teaching and Learning, Policy and Leadership, University of Maryland, College Park
Students often arrive with strong opinions—but weak understanding—of generative AI. This session shares lessons from teacher preparation courses that embedded AI across multiple assignments to build critical AI literacy. Through modeling, structured reflection, and evaluation of AI outputs, students learned to question bias, accuracy, and authority. Participants will gain a practical framework for helping learners move from tool use to thoughtful, ethical engagement applicable across disciplines.
Session 3D: Improving Quality and Accessibility with AI
This session highlights how educators are using generative AI to improve the quality, accessibility, and instructional value of content.
Fixing Fails and Salvaging Slop
Toni McLaughlan, Collegiate Associate Professor, Speech, Writing, & Research Methods, University of Maryland Global Campus
This session shares practical lessons from extensive experimentation with AI‑generated visuals, slides, and data graphics. Participants will learn comparative strengths, limitations, and settings across platforms, with tips that reduce trial‑and‑error and improve instructional quality regardless of discipline or technical background.
Accelerating Accessible STEM Visuals with Generative AI
Shannon Tucker, Assistant Dean of Instructional Design and Technology and Affiliate Assistant Professor and Chad Johnson, Assistant Professor of Pharmaceutical Sciences and Director, MS in Medical Cannabis Science and Therapeutics, School of Pharmacy, University of Maryland, Baltimore
STEM visuals are often inaccessible to learners using assistive technologies. This session presents an AI‑assisted workflow for generating first‑draft alt text and long descriptions for data‑dense figures, aligned with accessibility standards. Participants will learn how AI can accelerate remediation while preserving human review, accuracy, and instructional intent—offering a scalable approach to inclusive course design across disciplines.
Beyond the Prompt: Building a Grounded AI Accessibility Assistant for Faculty and Staff
Rita Thomas, Manager, Instructional Design & Technology, Frostburg State University
Institutions need scalable ways to support faculty and staff in creating accessible digital content. This session explores the development of a custom generative AI assistant that provides 24/7, just-in-time guidance. We will walk through how the agent was created, how it was grounded with targeted knowledge sources, and how it connects to Universal Design for Learning principles and Quality Matters accessibility standards. The tool supports inclusive course design without replacing human expertise. Participants will leave with a practical model for building their own grounded AI agents to support compliance, faculty development, and inclusive course design.
Concurrent Session 4 | 2:05 PM – 3:05 PM
Session 4A: AI-Assisted Course Design and Curriculum Innovation
This session examines how AI can accelerate course design, curriculum evaluation, and pedagogical scaling while preserving instructor expertise and rigor.
AI Readiness at an HBCU
Darilyn Mercadel, Program Coordinator, Elementary Education, Gaye Acikdilli, Associate Professor, Department of Management, Marketing, & Public Administration, and Rand Obeidat, Associate Professor, Department of Management Information Systems, Bowie State University
This session examines an experiential learning model that helps future educators move from AI anxiety to pedagogical competence. Through hands‑on creation, reflection, and ethical analysis, students learned to design AI‑supported learning environments. Participants will gain insights into using experiential approaches to build AI literacy, confidence, and equity‑focused teaching practices.
AI‑Based Curriculum Evaluation in Higher Education
Fabio Chacon, Director of Academic Computing, Bowie State University
This session explores using generative AI to compare higher education curricula with AI-generated models, identifying alignment gaps, sequencing issues, and how well assessments map to skills. It introduces the "AI Curriculum Evaluation Protocol," based on four queries that define professions, describe programs, generate course lists, and create sample syllabi. The protocol compares goals, courses, and syllabi with AI outputs, incorporating faculty feedback and student outcomes to support evidence-informed improvement while requiring human judgment.
AI-Assisted Course Design: Improving Efficiency and Student-Centered Content Creation
Md Kamruzzaman Sarker, Assistant Professor, Department of Computer Science, and Fahina Salma, Student, Bowie State University
This session examines how generative AI was used to draft assignments, labs, and projects more efficiently while preserving instructor review and rigor. Participants will learn how AI can accelerate content creation, support accessibility, and free faculty time for higher value instructional work.
Session 4B: Scaffolding Exploration, Judgment, and Reflection
These presentations explore AI-driven simulations and chatbots that build clinical confidence, professional judgment, and research skills.
The AI‑Assisted Birthing Room
Wendy Post, Assistant Professor, Department of Nursing, Bowie State University
This session explores how generative AI was used to create rewindable maternal health simulations grounded in complex clinical ambiguity and social determinants of health. Learners encounter realistic patient narratives where nothing is “technically wrong,” yet decisions carry risk. By replaying pivotal moments, students examine hesitation, communication, and escalation choices and how these shape outcomes. Participants will gain transferable strategies for using AI-supported simulation to strengthen clinical judgment, reflection, and psychological safety across experiential learning environments.
AI‑Driven Simulated Learning in Medical Speech‑Language Pathology
Jennifer Rae Myers, Clinical Assistant Professor and Kristin Slawson, Clinical Associate Professor, Department of Hearing and Speech Sciences, University of Maryland, College Park
This session introduces an AI‑driven patient simulation designed to build clinical confidence and reasoning where placement opportunities are limited. Participants will learn how standardized prompts, debriefing, and reflection convert AI interaction into meaningful professional learning across clinical education contexts.
AI, Give Me a Topic: Implementing a Chatbot in HONR 111
Janice Orcutt, Instructional Designer, Viktoria Basham, Lecturer, Clarke Honors College, and Katie Delezenski, Research and Instruction Librarian, Salisbury University
Choosing a viable research topic is a common barrier for first‑year students. This session describes the use of a constrained chatbot to support topic exploration while strengthening critical evaluation. Through iterative redesign, the chatbot shifted from idea generator to critical coach. Participants will learn how prompt design, librarian collaboration, and intentional constraints can help AI scaffold inquiry without narrowing student thinking.
Session 4C: Data, Feedback, and Technical Skills
This session demonstrates how AI can support data literacy, technical skill development, and equitable access in STEM and cybersecurity education.
AI‑Supported Feedback for Data Visualization Sense‑Making
Karen Chen, Assistant Professor, Supakit Boonsongprasert, PhD Student, Chris Song, Teaching Fellow, and Sachin Pathak, Graduate Student, Department of Information Systems, University of Maryland, Baltimore County
This session explores an AI‑supported feedback system designed to strengthen students' ability to interpret and reason about data visualizations. Rather than evaluating correctness alone, the tool provides targeted prompts that encourage explanation, comparison, and sense‑making. Participants will learn how structured AI feedback can make evaluation criteria explicit, support metacognition, and help students develop stronger analytical habits in statistics, data science, and other data‑rich disciplines.
Teaching Ethical Hacking through Generative AI Bot Development: A Case Study
Wendy Xu, Professor, Department of Computer Science and Information Technologies, and Sydney Phillips, Student, Frostburg State University
How can generative AI deepen technical competency? This session explores an innovative Ethical Hacking project in which students transition from AI users to AI developers. By building security-focused bots, students engage with the intersection of Large Language Model (LLM) capabilities and Red Team/Blue Team cybersecurity strategies. In this joint faculty-student presentation, we showcase the development process, technical challenges, and learning outcomes, and examine how the model strengthens critical thinking, technical judgment, and responsible AI use. Attendees will leave with insights into how this scalable pedagogical approach can be adapted to their own classrooms.
Enhancing Access and Assessment in Introductory Chemistry with AI
Sarah Bass, Associate Teaching Professor and Tara Carpenter, Teaching Professor, Department of Chemistry & Biochemistry, University of Maryland, Baltimore County
Large‑enrollment courses often limit students' access to timely help. This session examines a course‑specific AI assistant trained on instructor‑provided materials to support reasoning, study habits, and conceptual understanding. Rather than providing answers, the tool prompts explanation and reflection. Participants will gain transferable strategies for using AI to extend instructor presence and support equitable access in high‑enrollment courses.
Special thanks to the Showcase planning committee: Mary Crowley-Farrell, University of Maryland Global Campus and USM Council of University System Faculty; Amanda Draheim, Goucher College; Wendy Gilbert, MarylandOnline; Nancy O’Neill, University System of Maryland; Julie Porosky Hamlin, MarylandOnline; and Jennifer Potter, University System of Maryland.
Keynote Address: "Redesigning Our Relationships: Human-Centered Classrooms in the Age of AI" by Carter Moulton, PhD, Faculty Developer, Colorado School of Mines

Carter Moulton is an educational developer, facilitator, and media researcher. He works as a faculty developer at Colorado School of Mines and holds a PhD from Northwestern University, where he was a Graduate Teaching Fellow at the Searle Center for Advancing Teaching and Learning. He is the creator of Analog Inspiration, an educational card deck designed to help faculty discuss and reflect on how critical human values, skills, and concerns are being impacted by the age of generative AI. Since its launch in June 2025, the card deck has been adopted by educators at hundreds of universities worldwide and has been featured by Teaching in Higher Ed, Inside Higher Ed, and OneHE, among others. A Returned Peace Corps Volunteer, Carter has designed teacher training programs in Thailand, created peer observation programs at Northwestern, and facilitated faculty development workshops in India. His research has been published in a wide range of venues, including Teaching and Learning Inquiry, the American Society for Engineering Education, and the International Journal of Cultural Studies.
Register
Gen AI Virtual Showcase: Friday, April 24, 2026, 9:00 am-3:30 pm ET, via Zoom
Please complete the Virtual Showcase registration if you plan to attend any part of the Showcase on Friday.
The Spring 2026 Gen AI Virtual Showcase is open to faculty, staff, and faculty/staff/student teams from accredited Maryland higher education institutions who are interested in learning about innovative approaches to incorporating Generative AI into teaching and learning practices. We ask that registrants use their institutional email addresses when registering.
Note: As of April 6th, all three Thursday Pre-Conference Workshops are full and the waitlist has closed.
Questions about registration or the event may be directed to the Kirwan Center at cai@usmd.edu.