Full event details coming soon — please save the date!
Title: AI for Eyetracking
Date & Time: Thursday, December 4, 2025 | 6:30 – 8:30 PM
Duration: 2 Hours
Description:
🔹 1. Eye & Gaze Tracking
Old approach (non-AI): Relied on infrared + geometric models of the eye to calculate gaze. Very precise but hardware-heavy.
Now (AI-powered): Uses deep learning gaze estimation models trained on eye images.
Example: WebGazer.js uses ML to track gaze with just a webcam.
Tobii and Pupil Labs integrate AI for calibration-free and adaptive tracking (a code sketch of the deep-learning approach follows below).
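WebGazer.js itself is a JavaScript library; as a rough illustration of the appearance-based idea it shares with such tools, here is a tiny PyTorch sketch that maps a grayscale eye crop to a (yaw, pitch) gaze direction. The `GazeNet` name, layer sizes, and 36×60 input are illustrative assumptions, not any vendor's actual model; real systems train on large gaze datasets and pair this with face/eye detection.

```python
# Minimal appearance-based gaze estimation sketch (illustrative, not WebGazer's actual model).
# Maps a grayscale eye crop to a (yaw, pitch) gaze direction.
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 36x60 input -> 9x15 feature map after two 2x2 poolings
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 9 * 15, 128), nn.ReLU(),
            nn.Linear(128, 2),  # yaw, pitch
        )

    def forward(self, eye_patch):
        return self.head(self.features(eye_patch))

model = GazeNet()
dummy_eye = torch.randn(1, 1, 36, 60)   # batch of one 36x60 eye crop
yaw_pitch = model(dummy_eye)
print(yaw_pitch.shape)                  # torch.Size([1, 2])
```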
🔹 2. Facial Tracking
Old approach: Landmark detection (geometry-based) → track nose, eyes, mouth with simple math.
Now: AI/ML models detect facial landmarks, micro-expressions, and emotions with much higher accuracy.
MediaPipe Face Mesh (by Google) uses neural networks to track a dense 468-point landmark mesh (see the sketch below).
Affectiva uses deep learning to recognize emotional states.
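A minimal Python sketch of AI-driven facial landmark tracking, assuming the legacy `mp.solutions` Face Mesh API (`pip install mediapipe opencv-python`) and a webcam at index 0; newer MediaPipe releases also expose a Tasks API, so treat this as illustrative.

```python
# Webcam facial landmark tracking with MediaPipe Face Mesh.
# Prints the number of landmarks detected per frame; press 'q' to quit.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    max_num_faces=1,
    refine_landmarks=True,        # adds iris landmarks on top of the 468-point mesh
    min_detection_confidence=0.5,
)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark
        print(f"tracked {len(landmarks)} facial landmarks")
    cv2.imshow("face mesh", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```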
🔹 3. Object Tracking
Old approach: Color filters, optical flow, bounding box matching.
Now: AI detectors like YOLO and Detectron find objects frame by frame, and trackers like DeepSORT link those detections across frames, following cars, people, and animals robustly in real-world conditions (see the sketch below).
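A rough Python sketch of the detect-then-track pipeline using Ultralytics YOLO (`pip install ultralytics`). Note that Ultralytics ships its own trackers (e.g. ByteTrack) rather than DeepSORT, and `traffic.mp4` is just a placeholder filename.

```python
# Multi-object tracking sketch: a pretrained detector plus a built-in tracker.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small pretrained detector

# Stream results over a video; persist=True keeps track IDs stable across frames.
for result in model.track(source="traffic.mp4", persist=True, stream=True):
    boxes = result.boxes
    if boxes.id is not None:
        for track_id, cls in zip(boxes.id.int().tolist(), boxes.cls.int().tolist()):
            print(f"object id={track_id} class={model.names[cls]}")
```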
🔹 4. Body & Pose Tracking
Old approach: Motion sensors or reflective markers.
Now: AI pose estimation (OpenPose, MediaPipe, DeepMotion) learns body joint positions from millions of images/videos → no sensors or markers needed (see the sketch below).
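A minimal sketch of markerless pose tracking, again assuming the legacy `mp.solutions` Pose API and a webcam at index 0.

```python
# Markerless body pose estimation from a webcam with MediaPipe Pose.
# Prints the (x, y) image coordinates of the wrists; press 'q' to quit.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
pose = mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        h, w, _ = frame.shape
        for joint in (mp_pose.PoseLandmark.LEFT_WRIST, mp_pose.PoseLandmark.RIGHT_WRIST):
            point = lm[joint]
            print(f"{joint.name}: ({point.x * w:.0f}, {point.y * h:.0f})")
    cv2.imshow("pose", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```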
🔹 5. Behavior Tracking
Now: Purely AI-driven; models analyze eye, face, and body data together to infer engagement, attention, fatigue, or emotion (a toy fusion sketch follows below).
Platforms like iMotions, Affectiva, and Realeyes rely on machine-learning pipelines.
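A toy Python illustration of fusing such signals into a single engagement score. The features, weights, and penalty are invented for demonstration and do not reflect any platform's actual pipeline, which would train models on labeled data rather than hand-pick weights.

```python
# Toy behavior-tracking fusion: combine per-frame cues from the trackers above
# into one engagement score between 0 and 1.
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    gaze_on_screen: float   # 0..1, fraction of recent frames with gaze on target
    smile_intensity: float  # 0..1, from facial expression analysis
    posture_upright: float  # 0..1, from pose estimation
    blink_rate: float       # blinks per minute; high values suggest fatigue

def engagement_score(f: FrameFeatures) -> float:
    """Weighted blend of cues, clamped to 0..1. Weights are illustrative only."""
    fatigue_penalty = min(f.blink_rate / 30.0, 1.0) * 0.2
    score = 0.5 * f.gaze_on_screen + 0.2 * f.smile_intensity + 0.3 * f.posture_upright
    return max(0.0, min(1.0, score - fatigue_penalty))

print(engagement_score(FrameFeatures(0.9, 0.4, 0.8, 12.0)))  # ~0.69
```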
What to Bring:
To participate, bring your laptop with Figma and GPT set up and ready to go!
Location:
NYIT, 16 W 61st St, NY, NY 10023 (11th Floor Auditorium).
Light drinks will be provided.
Hosted by: NYIT + Kevin & the NYC UX & Digital Product Designers Team