CV4Edu - Computer Vision for Education

Computer vision (CV) plays a central role in multimodal human-centered AI, yet most models are trained on web-scale benchmarks that poorly reflect real classrooms. Educational data are noisy, private, small-scale, and multimodal (e.g., video, audio, text). Students’ cognitive/behavioral states (e.g., engagement, mind-wandering) and learning processes (e.g., self-regulation, collaboration) can be inferred from subtle multimodal cues (e.g., gaze, pose, facial features). However, today’s models struggle to generalize to classroom data, limiting reliability in deployed human-centered applications (e.g., assistive technology, collaborative AI). CV4Edu brings together computer vision, natural language processing, human-computer interaction, and educational researchers to chart a community agenda for efficient, privacy-aware, multimodal data-driven models that are more reliable in low-resource, real-world classroom settings — potentially launching shared datasets, metrics, and unified practices.

Our goal is to support research that bridges CV, NLP, HCI, cognitive science, and the learning sciences/education communities. We welcome submissions both within and beyond education contexts—such as multimodal modeling, sensing, behavior forecasting, cognitive state inference, robotics, and embodied AI—provided they discuss transferability to classroom settings (e.g., what may break or carry over under noise, occlusions, viewpoints, multi-person dynamics, privacy constraints, limited annotations, distribution shift, hardware variability).

Topics

The workshop topics include (but are not limited to):

Multimodal classroom perception
  • Face, gaze, pose, gesture, posture, affect, and prosody
  • Video, audio, gaze sensors, and wearables (egocentric and exocentric)
  • Multimodal fusion, representation learning, and cross-view / multi-camera setups
Language-centered multimodal learning analytics
  • Linking speech/text to video events, gaze/attention, and instructional context
  • Classroom NLP: ASR robustness, diarization, evaluating and mitigating bias, discourse modeling, dialogue/tutoring interactions, simplification, misconception detection
  • Retrieval-augmented classroom analytics, model adaptation, evaluation for learning-aligned outcomes
Robustness & generalization
  • Domain shift beyond the lab, occlusions, noisy data, and missing modalities
  • Few-/low-shot learning, continual and on-device adaptation
  • Generalization across classroom layouts and populations
Human behavior modeling for learning
  • Engagement, attention, affect, confusion, self-regulation, and metacognition
  • Collaboration, group dynamics, and teacher–student interactions
  • Gaze-informed models, saliency/scanpath prediction, activity recognition
Temporal modeling & intervention
  • Sequential/temporal models of learning processes
  • Behavioral forecasting, early-warning systems, and interventions
  • Real-time inference, feedback, and human-in-the-loop systems
Interpretability, reliability & evaluation
  • Interpretable models, uncertainty estimation, and calibration
  • OOD detection, fairness, and bias analysis
  • Evaluation protocols aligned with learning outcomes
Privacy-aware AI, datasets & deployments
  • Privacy-preserving data collection, anonymization, de-identification, and governance
  • Annotation strategies, construct-aligned labeling, active learning, synthetic data, and dataset curation
  • Classroom-ready systems, scalable multimodal data-collection frameworks, edge/on-device inference, and real-world deployments

We encourage general computer-vision, visually grounded NLP, and human-centered, collaborative AI submissions (e.g., behavioral modeling, pose/activity recognition, gaze estimation, attention modeling, multimodal learning, methods “in the wild”, cognitive state inference and forecasting) that make a clear connection to educational/learning environments (even if primarily in the discussion).

Call for Papers

The workshop invites submissions presenting original research, emerging ideas, datasets and benchmarks, systems, applications, and position papers advancing computer vision for real-world educational settings. We welcome both archival and non-archival contributions.

Submission Tracks

All submissions must follow the CVPR 2026 paper template and official style guidelines.

Archival Track (Full Papers)
Papers submitted to the Archival Track must present original, unpublished work and will be considered for inclusion in the official CVPR 2026 workshop proceedings. The main text must be 6–8 pages in length and formatted according to the CVPR 2026 submission guidelines. References and appendices are not subject to a page limit.
Non-Archival Track (Extended Abstracts + Short/Position Papers)
We invite non-archival submissions describing ongoing/early-stage work (e.g., preliminary results, datasets or benchmarks in progress, negative results, lessons learned), position papers, and work previously published elsewhere (including papers on arXiv or at other venues). Our goal is to foster discussion and community building. These submissions will not be included in the official proceedings. Extended abstracts may be up to 2 pages and short/position papers up to 4 pages (excluding references), formatted according to the CVPR 2026 submission guidelines.
Review Process
  • Each submission will undergo a rigorous double-blind review by at least three reviewers. Conflicts of interest will be managed through the OpenReview platform.
  • Submissions must comply with CVPR policies.
  • An ethics/IRB checklist is required where applicable, and an optional ethics and broader impact statement may be included.
Important Dates (AoE)
  • Archival Paper Submission Deadline: March 16, 2026 (extended from March 12, 2026)
  • Archival Notification of Decision: March 24, 2026 (moved up from April 3, 2026)
  • Archival Camera-Ready Deadline: April 10, 2026
  • Non-Archival Submission Deadline: May 10, 2026
  • Non-Archival Notification: May 15, 2026

Note: The archival notification date is earlier this year due to updated IEEE metadata submission requirements.

Submission Site

Papers can be submitted through the OpenReview Submission Site.

Note: Submissions to the non-archival track will open soon.


Tentative Schedule

Opening & Goals
Keynotes
Poster Session 1 + Coffee Break
Keynotes
Poster Session 2 + Coffee Break
Panel: From Lab to In-the-Wild
Community Discussion
Closing & Next Steps

Venue

Colorado Convention Center
700 14th Street
Denver, CO 80202

The workshop will be held together with CVPR 2026.