Demo — Admin / Principal View

School-Level Dashboard

Curriculum health, equity signals, observed learning milestones, and replicable-model evidence — designed for principals, academic leadership, and grant review.

Illustrative Data — For Demonstration Purposes
Course KPIs — Spring 2026 (Complete)
- 24 Sessions Completed: 100% of planned curriculum
- 6/6 Student Retention: no attrition
- 20.2 Avg Project Score (out of 25): range 17–23
- 3.1 Avg Session Quality: Phase 1 (S1–S8)
- 4.2 Avg Session Quality: Phase 3 (S17–S24), +35% improvement
- 2 Working Websites Shipped by Students: production deployments
Observed Learning Milestones — Session-Referenced
S3 (Theology): Imago Dei Framework Established. The Pastoral Associate delivered the theological foundation; students connected human dignity to AI design in the course’s first substantive cross-disciplinary discussion. The framework became the course’s evaluative anchor.

S11 (AI Literacy): Student-Led Technical Explanation. One student gave an accurate, detailed explanation of noise-canceling headphone physics (DSP, anti-phase cancellation, latency constraints) without reference material, demonstrating AI-assisted research applied to real comprehension. The strongest single-student knowledge demonstration of the course.

S15 (AI-Enhanced Feedback): AI Evaluates Student Work in Real Time. The teacher presented AI-generated five-pillar project scores to students individually during class. Students received and engaged with AI feedback on their own work; the teacher identified this as the most effective pedagogical moment of the course. The student response was curiosity and constructive challenge, not disengagement.

S20 (Ethics Reasoning): Proactive Ethical Edge Case Identified. One student, unprompted, described a scenario in which a user with an eating disorder might harm themselves using a meal-planning app, then immediately began working through modified response logic. The highest-quality unprompted ethical reasoning in the course; it arose from an AI product-mentoring session, not teacher direction.

S22 (Product Build): First Student Websites Deployed. Two students shipped working web applications to production hosting during class time; both included functional AI integrations. Teacher noted: “Y’all have real websites. Two of you. That’s amazing.”

S23 (Technical Demo): Environment-Scan Demo Breakthrough. A student’s environment-scanning game generator produced a live, working demonstration that drew a genuine reaction from teacher and peers; the teacher described it as “incredible” twice, unprompted. This demonstrates technical capability and AI-assisted development beyond the expected middle school ceiling.
Observed Learning Outcomes
AI Literacy — Conceptual and Applied
All 6 students articulate the difference between AI pattern-matching and human creativity, apply that distinction to ethical scenarios without prompting, and use prompting frameworks that demonstrate understanding of context-setting and role assignment.
Evidence: S11 technical explanation, S15 rubric discussion, S19 prompt engineering exercise
Ethical Framework Application
Students independently apply the five-pillar rubric to new AI applications they encounter outside class. Peer critique sessions demonstrate the criteria as an active thinking tool, not a checklist. One student identified an edge case in her own product that the rubric was explicitly designed to surface — before being asked to use the rubric.
Evidence: S15 peer evaluations, S20 self-critique, S24 peer Q&A
Catholic Social Teaching Integration
Students connect AI ethics to Imago Dei, human dignity, subsidiarity, and the theology of work. Integration is active, not decorative: students invoke doctrinal reasoning in product critique without prompting. Framework established in S3 remained operative through S24.
Evidence: S13 dignity discussion, Nat’s design reasoning, end-of-course project evaluations
Product Thinking and Execution
Students demonstrate user-centered design, scope management, constraint reasoning, and iterative product development. Two students shipped working web applications with functional AI integrations. One student built a multi-system application of professional complexity for a first project.
Evidence: S22 live deployments, S24 Demo Day presentations, App Blueprint worksheets
Equity & Inclusion Signals
- All 6 students completed a final project and presented on Demo Day.
- Discussion-only engagement metrics would have undercounted two high-performing students; build-phase artifact complexity is now scored as a parallel engagement signal.
- The student with the lowest verbal contribution in Phase 1 built the most technically complex project in the course; the course format was adjusted mid-iteration to better serve hands-on learners.
- No student was graded on “right answers”: the rubric rewards reasoning, not conclusions.
- Discussion dominance: 2 students generated ~53% of verbal output. Structured on-ramps are planned for the next iteration, and contribution share is now tracked per student per session.
- One student’s project direction solidified only in the final two sessions; one-on-one follow-up was completed and an individual coaching plan documented before course close.
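The contribution-share metric mentioned above is straightforward to compute from transcript word counts. A minimal sketch, using invented per-student counts chosen to illustrate the ~53% two-student share (the names and numbers are hypothetical, not course data):

```python
# Hypothetical per-student word counts from one session transcript.
word_counts = {"A": 1200, "B": 1100, "C": 500, "D": 450, "E": 480, "F": 600}

def top_share(counts, n=2):
    """Fraction of total verbal output produced by the n most talkative students."""
    total = sum(counts.values())
    top = sorted(counts.values(), reverse=True)[:n]
    return sum(top) / total

print(f"Top-2 share: {top_share(word_counts):.0%}")  # → Top-2 share: 53%
```

Tracking this per session makes it easy to see whether structured on-ramps actually redistribute talk time over the next iteration.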
Five-Pillar Class Average — Curriculum Signals
- Redeeming Value: 4.0. Solid; range 3–5. Two projects needed sharper user-benefit articulation.
- Appropriate: 4.9. Near-perfect; all projects were designed for safe, wellness-oriented use cases.
- Human Dignity: 3.6. ⚠ Lowest pillar: only 1 of 6 students independently anticipated an edge case. A dedicated session is required in the next iteration.
- Benefit Society: 3.8. Individual benefit is clear; the community-scale argument was consistently weaker across the cohort.
- Explain It: 4.2. Strong; one live demo failure lowered a score that would otherwise have been higher.
Session Quality Trend — AI-Assessed
- Phase 1 avg (S1–S8): 3.1
- Phase 2 avg (S9–S16): 4.0
- Phase 3 avg (S17–S24): 4.2
- Peak session: 4.8
Session quality is scored by AI on engagement, comprehension signals, and productive discussion time. Scores rose consistently from S1 to S24. Three peak sessions stand out: the AI-evaluates-student-work moment (S15), the first student website deployments (S22), and the environment-scan demo breakthrough (S23).
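The headline "+35%" figure is simple relative-change arithmetic over the phase averages above. A minimal sketch (scores on the 1–5 scale reported in the dashboard):

```python
# Phase averages from the dashboard above (session quality, 1–5 scale).
phase_avgs = {"Phase 1": 3.1, "Phase 2": 4.0, "Phase 3": 4.2}

def pct_improvement(start, end):
    """Relative improvement from start to end, as a percentage."""
    return (end - start) / start * 100

gain = pct_improvement(phase_avgs["Phase 1"], phase_avgs["Phase 3"])
print(f"Phase 1 → Phase 3: +{gain:.0f}%")  # → Phase 1 → Phase 3: +35%
```

(4.2 − 3.1) / 3.1 ≈ 35.5%, which rounds to the +35% shown in the KPI card.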
Evidence of Replicability & Institutional Value
The most effective pedagogical moment of the course was when students received AI-generated evaluations of their own work in real time. They didn’t shut down — they pushed back, asked questions, and started revising. That’s what we want from any feedback system. The AI made it possible to deliver that experience to six students simultaneously, in class, without grading delay.
— Teacher reflection, Session 15 post-session review
AI-Enhanced Real-Time Feedback Loop
Session transcripts are processed within 24 hours of each class. The AI generates student-specific coaching recommendations, pillar-level project scores, and engagement arc analysis. Teachers receive actionable insights before the next session — not at the end of the semester. This compresses the feedback loop from weeks to hours without adding teacher workload.
✓ Validated across 24 sessions • Zero additional teacher prep time for analysis
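The loop described above can be sketched as a small data shape plus an approval gate. This is an illustrative sketch only: the class name, fields, and `release` function are assumptions, not the live system's schema.

```python
from dataclasses import dataclass

# Illustrative shape for one session's AI-generated review.
# Field names are hypothetical, not the live system's actual schema.
@dataclass
class SessionReview:
    session_id: str                 # e.g. "S15"
    pillar_scores: dict             # {student: {pillar: 1–5 score}}
    coaching_notes: list            # student-specific recommendations
    teacher_approved: bool = False  # gate: nothing reaches students until True

def release(review):
    """Return coaching notes only once the teacher-approval gate is set."""
    return review.coaching_notes if review.teacher_approved else []

review = SessionReview("S15", {"Student A": {"Human Dignity": 4}},
                       ["Probe edge cases in user flows"])
assert release(review) == []        # unapproved AI output stays gated
review.teacher_approved = True
assert release(review) == ["Probe edge cases in user flows"]
```

The key design point, per the card above, is that AI output is generated quickly (within 24 hours) but never bypasses teacher review.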
Privacy-First Architecture, Fully Operational
Student identities are anonymized before any transcript enters the analysis pipeline. No identifying detail appears in AI output. All sensitive data on the course website uses XOR encoding; student records are stored in a separately encrypted identity table. Teacher-only access controls gate any output that reaches students or parents. This system ran for 24 sessions without a privacy incident.
✓ 24 sessions • 0 privacy incidents • Full teacher-approval gate on all AI output
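The XOR encoding mentioned above is a lightweight, reversible masking technique, not cryptographic encryption (the card notes that the identity table is separately encrypted). A minimal sketch of how such masking works; the key and sample data are illustrative only:

```python
def xor_mask(data: bytes, key: bytes) -> bytes:
    """Reversible XOR masking: applying it twice with the same key restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"demo-key"                            # illustrative key, not the live system's
masked = xor_mask(b"record: avg score 4.2", key)
assert masked != b"record: avg score 4.2"    # not stored in the clear
assert xor_mask(masked, key) == b"record: avg score 4.2"  # round-trips exactly
```

Because XOR is its own inverse, the same function both masks and unmasks; anything needing real confidentiality should sit behind proper encryption and access controls, as the separately encrypted identity table does here.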
Transferable Curriculum Framework
The five-pillar evaluation framework, session review format, AI mentor configuration, and student feedback loop are fully documented. A teacher at another Catholic school could run this course from the existing materials with the same AI-enhanced feedback infrastructure. The course has already generated peer consultation requests from other Diocese of Austin schools.
✓ Documentation complete • Diocese consultation requests received • Grant proposal submitted to Notre Dame DELTA
Catholic Educational Mission Alignment
The course integrates Imago Dei, human dignity, subsidiarity, and the theology of work into every unit. AI is taught as a tool that serves human flourishing, not one that replaces human agency. The integration is not an addendum — it is the curriculum. Student-produced projects reflect doctrinal reasoning applied to product design: an outcome not achievable in a purely technical AI course.
✓ Theological integration confirmed across all 6 student projects • Framework active in S1 and S24
This admin view is illustrative. In the live system, all metrics are derived from actual session transcripts and project submissions. No student personally identifiable information is stored or displayed in any shared or administrative view. Session quality scores are AI-generated from transcript analysis, not teacher-reported. All student identifiers in this demo have been changed.