Details

This hackathon focuses on Semantic Scene Segmentation, a core computer vision problem in which a model assigns a class label to every pixel of an image. Participants will design, train, and evaluate deep learning models that can accurately understand complex outdoor scenes at the pixel level (a toy sketch of this pixel-wise mapping follows the overview below).
Teams will work with a synthetic image dataset representing realistic outdoor environments containing multiple object classes such as terrain, vegetation, obstacles, and background elements. The challenge lies not only in achieving high accuracy, but also in building models that generalize well to unseen yet similar environments, reflecting real-world deployment constraints.
The problem statement closely mirrors the perception tasks handled by industry-grade systems in autonomous vehicles, robotics, drones, and other environment-aware AI.
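
To make the task concrete, here is a minimal sketch of pixel-wise classification in TensorFlow/Keras (the framework recommended below). It is an illustration only: the class count, input size, and toy architecture are assumptions, not organizer-provided starter code, and a real entry would use a full architecture such as DeepLab instead.

```python
# A minimal, hypothetical sketch of pixel-wise classification in TensorFlow/Keras.
# NUM_CLASSES, the input size, and the tiny architecture are placeholder
# assumptions for illustration, not settings from the hackathon dataset.
import tensorflow as tf

NUM_CLASSES = 5  # assumed: e.g. terrain, vegetation, obstacle, background, ...

def build_toy_segmenter(input_shape=(128, 128, 3), num_classes=NUM_CLASSES):
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # One score per class at every pixel; softmax turns scores into probabilities.
    outputs = tf.keras.layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_toy_segmenter()
image = tf.random.uniform((1, 128, 128, 3))  # one synthetic RGB image
probs = model(image)                         # shape (1, 128, 128, NUM_CLASSES)
pred_mask = tf.argmax(probs, axis=-1)        # shape (1, 128, 128): a label per pixel
```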

Recommended Tech Stack
Participants are encouraged to use the following tools and frameworks:

  • TensorFlow / Keras (primary framework)

  • TensorFlow Datasets (TFDS) or TF-style data pipelines

  • DeepLab and related semantic segmentation architectures (Google-origin research)

  • Google Colab / Vertex AI (optional experimentation track)

  • IoU and mIoU metrics, aligned with standard Google research benchmarks (a metric sketch follows this list)
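
As a hedged illustration of these metrics, the sketch below tracks mIoU with the built-in tf.keras.metrics.MeanIoU, which accumulates a confusion matrix over integer class masks; the class count and toy labels are assumptions, not the official evaluation setup.

```python
# A minimal sketch of mIoU with tf.keras.metrics.MeanIoU. The class count and
# the toy ground-truth/prediction masks below are assumptions for illustration.
import tensorflow as tf

NUM_CLASSES = 5  # assumed class count, not an official setting

# MeanIoU accumulates a confusion matrix across batches and reports the mean
# of per-class IoU = TP / (TP + FP + FN).
miou = tf.keras.metrics.MeanIoU(num_classes=NUM_CLASSES)

y_true = tf.constant([[0, 1, 1], [2, 2, 0]])  # ground-truth label per pixel
y_pred = tf.constant([[0, 1, 2], [2, 2, 0]])  # predicted label per pixel
miou.update_state(y_true, y_pred)
print(float(miou.result()))  # mean IoU over the classes seen so far
```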

Hackathon Structure

  • Format: Hybrid (Offline + Online)

  • Duration: 50 hours (continuous)

  • Team Size: 3 members

  • Track: Machine Learning / Deep Learning (Semantic Segmentation)

This event is supported by Google for Developers.

Emails will be sent to selected students.
