September 2, 2012
I use depth cameras (along with 2D IR cameras) for gesture/body motion input, as well as head tracking for VR.
I use Kinects and Xtions (the Xtion preferred for its 60 fps mode) in "vvvv", using OpenNI-based plugins and custom C# code. I don't use the built-in skeleton tracking (too much latency); instead I process the point-cloud data directly. So I'm currently interested in learning more about methods of focused data extraction, and about calibrating arbitrary camera orientations so that point clouds from multiple depth cameras can be combined.
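On combining point clouds from multiple depth cameras: once you have a handful of corresponding points seen by two cameras (e.g. from a calibration marker visible to both), one standard way to recover the relative orientation and position is the Kabsch algorithm, a least-squares rigid alignment. This is a minimal sketch in Python/NumPy rather than the C#/vvvv setup described above, and the point data here is synthetic, just to illustrate the method:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch algorithm: find rotation R and translation t
    minimizing ||(R @ src.T).T + t - dst|| for corresponding
    Nx3 point sets src and dst."""
    c_src = src.mean(axis=0)          # centroid of source points
    c_dst = dst.mean(axis=0)          # centroid of destination points
    # Cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, S, Vt = np.linalg.svd(H)
    # Determinant correction guards against a reflection solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a known transform from synthetic points
rng = np.random.default_rng(0)
pts_a = rng.standard_normal((30, 3))          # camera A's view
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
pts_b = pts_a @ R_true.T + t_true             # camera B's view
R, t = rigid_transform(pts_a, pts_b)
```

With the transform recovered, every frame from camera A can be mapped into camera B's coordinate system with `pts_a @ R.T + t` and the two clouds merged.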
Yes, I have a number of experiments where a person directly interacts with abstract sound and light in an immersive 3D VR environment that I think people would find tons of fun, and I would love to get feedback on.
I create immersive, interactive visual and sound environments that provide a space for exploration, creation, and play via direct interaction with sound and light.
Hello Lorne! Welcome to "Let's Do Something About Climate Change" — thanks for joining us! We are looking forward to working together to fight climate change. Please feel free to suggest activities you'd like to pursue. Thanks!
Lorne, welcome to our group — glad you can attend our first Meetup!
Hey, when do we get to see an installation?