Class at a Glance — Course Complete, Session 24
24 Sessions Completed
6 Students Enrolled
4.1 Avg Pillar Score /5
6/6 Projects Presented
3 Mid-Course Adjustments Made
3.6 Lowest Pillar (Human Dignity)
Student Engagement Arcs — AI Analysis Across 24 Sessions
Student 1
PeakForm — Fitness & Recovery Planner
Consistent contributor S1–S24
Highest contribution volume in the class — initiated or meaningfully extended 9 of the 14 major discussion threads in Phase 1. Philosophical instincts are strong: this student raised the question of AI accountability (who is responsible when AI gives harmful advice?), and it became one of the defining themes of the course. Product thinking matured noticeably between Session 12 and Session 20.
AI-surfaced moment (S11): Gave an unprompted technical explanation of noise-canceling headphone physics — compression waves, DSP processing, anti-phase cancellation — accurate and delivered without reference material. Highest single-student knowledge demonstration of the course. Same session: coined the "Ivy League professor vs. pre-K teacher" prompting framework.
↑ Rising
Student 2
StillPoint — Audio-First Spiritual Reflection Companion
Steady contributor S1–S24
Brings theological depth that shifts the direction of class discussion in ways no other student does. The student most likely to connect AI ethics to faith at a doctrinal rather than a cultural level. Less comfortable offering critique of peers — steps back when disagreement is required. The audio-first design decision for the final project wasn’t random; it came out of a specific AI interaction in which she pushed back on text-only spiritual tools.
AI-surfaced moment (S13): When asked about human dignity, responded: “your honor, like as a human, you have like you —” [completing the thought through gesture and context]. This student's intuitive definition was the most theologically grounded in the room. Received end-of-course peer recognition for the best Human Dignity reasoning in the class.
→ Steady
Student 3
PantryWise — Pantry Scanner & Meal Planner
Quiet S1–S12 • Rising S13–S24
The most notable growth arc in the class. Minimal unprompted contributions in the first half; by Session 20 was initiating 2–3 times per session without prompting. Product thinking deepened faster than almost anyone else in the class. Recognized by the teacher at course close as “best product manager” — one of only two students who shipped a working website.
AI-surfaced moment (S20): Described how the AI gave her a scenario in which a user said they “hadn’t eaten all day and didn’t deserve to.” She had not anticipated this edge case. Instead of moving past it, she engaged it directly — describing how she would need to change her app’s response logic. Highest-quality unprompted ethical reasoning of the course.
↑ Rising
Student 4
WaveFront — Tower Defense Game
Low verbal engagement • High build engagement
The disconnect between discussion engagement and project engagement is the defining characteristic of this student’s arc. Contributed minimally to Phase 1 ethical discussions but built the most technically complex project in the class — six or seven interlocking game systems. Explicitly noted that he worked harder on this project than on his entire social studies grade. Strongest final-day presenter; his product knowledge at the demo was developer-level.
AI-surfaced pattern: Verbal contributions cluster heavily around hands-on sessions and product feedback loops, not abstract discussion. Engagement signal in Phase 1 was misleading — the build phase revealed sustained motivation and capability that discussion-only assessment would have missed entirely. Recognized as “most passionate builder” at course close.
→ Steady (↑ final 4)
Student 5
RoomSpark — Room-Scan Physical Game Generator
Consistent • Demo setback S24
Most creative concept in the class — an app that uses AI to generate physical, offline games from any room. Countercultural direction (using AI to get people away from screens) was original and well-reasoned. AI-observed pattern: produces the strongest feedback for peers when he’s engaged. Recognized at course end as “most likely to improve someone else’s idea.” Demo breakthrough in Session 23 drew a genuine reaction from teacher and peers; execution gap showed on Demo Day.
AI-surfaced moment (S23): Room-scanning demo that had been unstable suddenly worked in class. Transcript captures genuine teacher surprise: the output was described as “incredible” twice in rapid succession. That reaction — the best single demo moment of the course — was followed one session later by a live failure. Confidence impact is real; recommend proactive follow-up before the next course begins.
↓ Watch
Student 6
FieldIQ — Basketball Knowledge & Strategy Tool
Domain-anchored engagement
Domain expertise in basketball is this student’s most reliable on-ramp to engagement. Strongest when discussion connects to a concrete real-world domain; weaker in abstract philosophical territory. Recognized at course close for “most unexpected insight” — observations that came from left field but were thoughtful and original. The human dignity gap (what happens when AI gets a player fact wrong?) was identified in peer discussion, not by the student — that question has not been fully internalized yet.
AI-surfaced pattern: Project name was “Basketball History App” through Session 22 — reflecting the student’s tendency to describe function rather than purpose. The commentator prep use case (a legitimate, differentiated value proposition) surfaced only through peer questioning in the final session. The product’s best argument was found by the class, not the builder. Coach toward owning the “why” before the next course.
→ Steady
Pillar Performance — Class Average with Student Spread
Redeeming Value
4.0
Range: 3–5. Two projects needed stronger articulation of specific user benefit beyond “it’s useful.”
Appropriate
4.9
Near-perfect across the class. All six projects were age-appropriate and wellness-oriented.
Human Dignity
3.6
Lowest pillar. Range: 2–5. Most students gave surface-level answers to “what if the AI is wrong?” Only one student (Student 3) engaged an edge case with substantive, unprompted reasoning when it surfaced (S20). This is the single most important curriculum signal from this cohort.
Benefit Society
3.8
Range: 3–4. Students could identify individual benefit clearly; community-scale impact arguments were consistently weaker.
Explain It
4.2
Range: 3–5. Strong overall. One live demo failure hurt a score that would otherwise have been higher.
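The headline 4.1 average is consistent with a simple mean of the five pillar scores: (4.0 + 4.9 + 3.6 + 3.8 + 4.2) / 5 = 4.1.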
Discussion Dynamics — Verbal Contribution Share
Student 1: 32%
Student 2: 21%
Student 3: 18%
Student 4: 10%
Student 5: 14%
Student 6: 5%
Students 1 and 2 together generate 53% of all verbal output. Students 4 and 5 contribute more in build sessions than in discussion sessions — engagement tracking based on discussion alone would undercount their real participation. Student 6’s low share is partially explained by domain anchoring: when basketball comes up, engagement jumps sharply. Structured on-ramps recommended for Phase 1 of the next iteration.
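As a rough sketch of how shares like these can be derived (the function and speaker labels below are hypothetical, not the live system's code), contribution share reduces to per-turn speaker counts over an anonymized transcript:

```python
from collections import Counter

def contribution_share(turns: list[str]) -> dict[str, float]:
    """Approximate each speaker's verbal contribution share from a list of
    speaker labels, one entry per transcript turn (counts, not timed audio)."""
    counts = Counter(turns)
    total = sum(counts.values())
    return {speaker: count / total for speaker, count in counts.items()}

# Hypothetical usage with anonymized speaker labels.
turns = ["S1", "S2", "S1", "S3", "S1", "S6", "S2", "S4", "S5", "S1"]
print({s: f"{p:.0%}" for s, p in sorted(contribution_share(turns).items())})
```

Because every turn counts equally regardless of length, a long monologue and a one-word answer weigh the same; that is one reason the percentages above are approximations.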
AI-Surfaced Patterns — Cross-Session Analysis
Human Dignity Is the Class-Wide Blind Spot
Across all six projects, the most consistent gap is the same: students can name a potential dignity risk when directly asked, but only one student engaged one with substantive reasoning before being prompted by the teacher. The “what happens when the AI is wrong?” question needs to be built into the ideation phase as a required step, not surfaced in evaluation. Recommend a dedicated Session 8–10 exercise in the next iteration that forces each student to steel-man the worst-case failure of their own product.
Build Phase Revealed Engagement the Discussion Phase Concealed
Student 4’s verbal contribution share during Phase 1 (discussions, theology, ethics) was the lowest in the class. A contribution-based engagement score at Session 12 would have flagged this student as at risk of not completing. In reality, they built the most technically complex project in the course. AI analysis of transcript data alone is insufficient to capture this student type — build-phase artifact complexity must be scored alongside verbal contribution as a parallel engagement signal, as sketched below.
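A minimal sketch of what that parallel signal could look like, assuming both inputs are normalized to 0–1 (the weights, names, and scaling here are illustrative assumptions, not the live system's scoring):

```python
def engagement_score(verbal_share: float, build_complexity: float,
                     w_verbal: float = 0.5, w_build: float = 0.5) -> float:
    """Blend two normalized 0-1 signals into one engagement score.
    Equal weights are a placeholder; a real rubric would calibrate them."""
    return w_verbal * verbal_share + w_build * build_complexity

# Student 4's profile: lowest verbal share (0.10), highest build
# complexity (1.0 on this illustrative scale). Discussion-only
# tracking reads 0.10; the blended signal reads 0.55.
print(engagement_score(0.10, 1.0))
```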
Mid-Course Adjustments Correlate with Student 3’s Inflection Point
Student 3’s first unprompted substantive contribution came in Session 13 — the same session in which the class format shifted toward smaller-group product ideation. The timing suggests that the discussion-heavy format of Sessions 1–12 was creating barriers for students who think through doing rather than talking; the format shift looks causal rather than coincidental, though with a single student it remains an inference. Consider earlier introduction of hands-on exercises in the next iteration.
Demo Day Outcome Is Disproportionately High-Stakes for Project Confidence
Student 5’s live demo failure in Session 24 followed the strongest individual demo moment in the course (Session 23). The proximity of peak success and public failure in final sessions creates a confidence risk that the engagement score alone won’t capture. Two students left the course with unresolved technical failures as their last meaningful product memory. A structured reflection session after Demo Day — separating the idea’s quality from the demo’s execution — should be a required course close for next iteration.
Session Quality Arc — Trend Across 24 Sessions
Phase 1 — S1–S8: Theology & Ethics Foundation
Phase 2 — S9–S16: AI Tools & Product Ideation
Phase 3 — S17–S24: Build & Present
Average session quality score rose from 3.1 (S1–S8) to 4.2 (S17–S24). Lowest two sessions were S1 and S2 as class norms were being established. Peak sessions correlate with: the AI-evaluates-student-work moment (S15), the first student website deployment (S22), and the room-scan demo breakthrough (S23). The drop in S24 reflects Demo Day technical setbacks, not a curriculum failure.
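A minimal sketch of the phase-average computation behind this arc. The per-session scores below are synthetic placeholders chosen to match the reported 3.1 and 4.2 phase averages; only those two aggregates come from the course data:

```python
PHASES = {
    "Phase 1 (S1-S8)":   slice(0, 8),
    "Phase 2 (S9-S16)":  slice(8, 16),
    "Phase 3 (S17-S24)": slice(16, 24),
}

def phase_averages(scores: list[float]) -> dict[str, float]:
    """Average a one-score-per-session list within each curriculum phase."""
    return {name: round(sum(scores[idx]) / len(scores[idx]), 1)
            for name, idx in PHASES.items()}

# Synthetic per-session scores for illustration only; the dip at S24
# mirrors the Demo Day setback noted above.
scores = [3.0, 3.0, 3.1, 3.2, 3.1, 3.2, 3.1, 3.1,   # Phase 1
          3.5, 3.6, 3.8, 3.7, 3.9, 4.0, 4.3, 4.0,   # Phase 2
          4.1, 4.2, 4.3, 4.2, 4.4, 4.5, 4.6, 3.3]   # Phase 3
print(phase_averages(scores))
```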
Coaching Opportunities — Before Next Course
Student 5 — Confidence Recovery
The demo failure in Session 24 followed the most celebrated individual moment of the course in Session 23. This student needs to hear — directly and specifically — that the idea was the best in the class and the demo failure doesn’t change that.
Ask: “What would the game have looked like if it worked perfectly on Demo Day? Walk me through it.” Then: “That’s what I saw in Session 23. That product is real.”
Student 6 — Owning the “Why”
The strongest version of this student’s product — the commentator prep use case — was surfaced by peer questions on the last day, not by the student. The idea is stronger than the student has yet internalized.
Ask: “Why would someone use your app instead of Google?” Then: “The answer you gave in the last session, when your classmate asked you about prep for commentary — that’s the answer. Why don’t you lead with it?”
All Students — Human Dignity Session
Class average of 3.6 on Human Dignity is the most important curriculum signal. The gap isn’t knowledge — students can define dignity correctly. The gap is application: almost no one independently asked “what if my product harms someone?” about their own project.
Build into next iteration: A required “steel-man your failure” exercise in Session 9 or 10. Each student writes the worst-case story of what their product does to a vulnerable user, then presents it to the class.
This dashboard is illustrative. In the live system, all data is generated from session transcripts. Student identities are anonymized before analysis — this view uses Student 1–6 labels for the demo. In the real teacher view, first names are used for internal reference only. Project names and descriptions in this demo have been altered. Discussion share percentages are approximated from transcript contribution counts, not timed audio.