RSVPs ARE NOT COMPLETE UNTIL YOU REGISTER AT THE LINK:
With the barrage of mainstream press headlines around Uber’s autonomous vehicles and Google’s Chauffeur project, there is a lot of hype – and hopefully much promise – around autonomous transportation. This is a tectonic technology shift with wide-reaching ramifications, from potentially making some jobs obsolete to, hopefully, making Bay Area traffic a bit more bearable.
Before we get to the point of ubiquitous Level 4 automobiles, nimble delivery drones and autonomous AI bots helping us in the home and factory, we need to make sure these cars, drones and robots can get around efficiently and safely and truly understand the objects with which they interact. This means understanding their environment (computer vision + perception), pinpointing their location within that environment and mapping it (SLAM), and providing sense-and-avoid and other redundant safety systems, among other requirements – all in a compute- and power-constrained package that must adapt and learn in dynamic, highly unstructured environments.
Thankfully, the increasing ubiquity and falling price of embedded sensors and processing capability, coupled with advances in machine vision, vSLAM and deep learning, are accelerating the adoption of autonomous navigation even in smaller and lower-end devices. Today you can find highly capable sensing systems and vSLAM in devices ranging from household vacuums and prosumer drones (e.g., the DJI Phantom) to mid-range cars, and their reach will only grow with time, opening up many opportunities in surveillance, security, manufacturing, transportation and logistics.
This Hive Think Tank panel will explore approaches to helping these machines get around safely, with increasing autonomy and adaptability. We will also cover edge cases, remaining problems, and some of the business models and investing opportunities around these emergent machines.
Meet the Speakers:
Grant Allen, ABB Technology Ventures, Managing Director (moderator)
Grant leads ABB’s innovation efforts in Silicon Valley and has led investments in Industrial Defender (acq. by Lockheed Martin for $165M), Validus DC Systems (acq. by ABB), SoftRobotics, Persimmon Technologies, Scotrenewables and TaKaDu. Areas of interest include robotics, advanced manufacturing, IoT and advanced energy. Before joining ABB, Grant was with Core Capital Partners, a $400M venture fund focused on enterprise software.
Prior to that, he was with Microsoft in their Mobile & Embedded Devices division; Jingle Networks (acq. for $63M); and several start-ups including a web services firm he co-founded at 19. Grant is an active angel investor (investments include AltSpace VR, Avizia, Earnest, EquityZen, LivingSocial, Munchery, NextForce, SendWithUs, Skillshare, Verbling, Visually and Uber) and a founding member of NextGen Venture Partners.
He holds degrees from Duke University’s Pratt School of Engineering and The Wharton School and has completed executive education at IMD in Lausanne, Switzerland.
Hiroshi Saijou, Yamaha Motor Ventures, CEO and Managing Director
Hiroshi “Hiro” Saijou is CEO and Managing Director at Yamaha Motor Ventures & Laboratory Silicon Valley. Prior to founding YMVSV, Hiro was a Division Manager at Yamaha Motor Corporation, USA where he led exploratory efforts in Silicon Valley. Hiro started his career at Yamaha Motor Co., Ltd. (Iwata, Japan) where he worked for almost two decades on a broad array of surface mount technology and robotics efforts in addition to new business development efforts.
Hiro enjoys exploring the California Bay Area, sometimes with his golf clubs in tow. He speaks frequently at conferences on bold, ambitious, and sometimes crazy corporate innovation.
Hiro earned a software engineering degree from Kyushu University, one of Japan’s National Seven Universities.
Nima Keivan, CANVAS Technology, CTO
Nima Keivan is Co-Founder and CTO of CANVAS Technology. He is finalizing his PhD in Computer Science at the Autonomous Robotics and Perception Lab at CU Boulder. His research focuses primarily on visual-inertial dense and sparse SLAM, as well as planning and control for agile autonomous ground vehicles.
He has contributed to both Google Project Tango and the Toyota Autonomous Driving Team.
Gabe Sibley, Chief Science Officer, Zoox
Gabe Sibley is Chief Science Officer at Zoox and an Assistant Professor in Computer Science at the University of Colorado, Boulder. Before joining Zoox, Gabe was an Assistant Professor in Computer Science at George Washington University and Director of the Autonomous Robotics & Perception Lab. Previously, Gabe was a Junior Research Fellow at Oxford University and a post-doctoral research assistant in the Mobile Robotics Group of the Oxford University Engineering Department, working with Professor Paul Newman. Gabe was a PhD student at the Robotic Embedded Systems Laboratory at the University of Southern California under the supervision of Professor Gaurav Sukhatme, and a Robotics Engineer in the Computer Vision Group at NASA-JPL under Dr. Larry Matthies. At NASA-JPL, Gabe worked on long-range data-fusion algorithms for planetary landing vehicles, unmanned sea vehicles and unmanned ground vehicles.
Gabe is interested in robot perception and how it enables effective autonomous behavior – that is, the synthesis of perception, planning and control. He uses probabilistic perception algorithms and estimation theory to enable long-term autonomous operation of mobile robotic systems, particularly in unknown environments. He has experience with vision-based, real-time localization and mapping systems, and is interested in a fundamental understanding of the sufficient statistics that can be used to represent the state of the world. His research uses real-time, embodied robot systems equipped with a variety of sensors – including lasers, cameras and inertial sensors – to advance and validate algorithms and knowledge representations useful for enabling long-term autonomous operation.
Vit Goncharuk, Augmented Pixels, Founder and CEO
Augmented Pixels creates a world where drones and robots can see and navigate as humans do.
Our unique computer vision algorithms (SLAM) allow drones, robots, mobile devices and head-mounted displays to navigate outdoors and in GPS-denied environments with centimeter-level accuracy using only basic hardware (an RGB camera + IMU). Depth sensors and LiDAR are also supported as additional inputs.
Highly precise autonomous navigation dramatically decreases the labor costs of robot- and drone-based services.
6:00 - 6:30 Registration and Networking
6:30 - 6:40 Introduction by The Hive
6:40 - 6:50 Introduction by Glenn Schuster, Nvidia
6:50 - 7:00 Panelists' Introduction
7:00 - 8:00 Panel Discussion
8:00 - 8:15 Q&A
JOIN THE CONVERSATION! @HIVEDATA