- Learn all about visual navigation in robots
In our February meetup, sponsored by Intel, we'll talk about navigation and robots. We'll hear two different perspectives: one from a company that built a product, and one from the company that supplied the chipset.

Location: Intel Santa Clara, SC-12 auditorium, 3600 Juliette Lane, Santa Clara, CA
Date: February 13, 2019
Time: 6 PM - 8:45 PM

Agenda:
6-7 PM: Networking and pizza
7-7:05 PM: Welcome message
[masked] PM: First presentation and Q&A
[masked] PM: Second presentation and Q&A
[masked] PM: Networking and wrap-up

Presentation 1: Designing navigation on a real-life robot: lessons learned
Speaker 1: Rachit Bhargava, Marble.io
Marble provides robot-based delivery of groceries, meals, and parcels, making delivery affordable, environmentally friendly, and reliable. This involves developing robots that can handle the complexities of autonomously navigating on sidewalks. This talk focuses on Marble's experiences adopting RealSense to address these challenges and gives an overview of how it contributed to the various pieces of the robot's navigation system. The presentation will also cover Marble's experiences integrating RealSense into the software stack from a stability and reliability perspective.
Speaker Bio: Rachit Bhargava is a founding robotics engineer at Marble Robot Inc., where he works on developing perception algorithms for the navigation system. He graduated from the University of Pennsylvania with a Master's degree in Robotics and was part of the GRASP laboratory. Before that, he was part of the Robotics Research Center at IIIT Hyderabad, working on planning algorithms for space robots and SLAM for ground robots.

Presentation 2: Visual navigation for wheeled autonomous robots
Speaker 2: Philip Schmidt, Intel
Navigation is a key challenge that must be addressed by anyone designing autonomous robots. Philip Schmidt from Intel will give a quick introduction to SLAM and sensor-fusion algorithms. He will then provide an overview of the latest Intel RealSense cameras and demo how they are used on a robot to navigate and perform obstacle avoidance. He will also walk through some sample code to illustrate these capabilities.
Speaker Bio: Philip Schmidt is a software engineer in the Intel RealSense Group, where he works on algorithms for SLAM and sensor fusion for robotic applications. He holds a master's degree in control engineering from the University of Stuttgart, Germany, and, before joining Intel, was a research engineer at the German Aerospace Center (DLR), where he developed algorithms for robot vision, estimation, and control.
- Recent updates in computer vision markets and technology
Computer vision has gained rapid adoption in recent years, and in this year-end event, hosted at AMD, we'll talk about the latest advances in computer vision. Anand Joshi, independent consultant and analyst at Tractica, will give an update on the computer vision/AI market: what is selling and what is not, what is working and what is not, who is buying, and so on. He will also discuss markets and applications that are gaining traction and provide a short- to mid-term market outlook. Jeff Bier from BDTI and the Embedded Vision Alliance will share his unique perspective on the key trends driving the accelerating progress of computer vision technology and applications, including developments in algorithms, processors, development tools, and sensors.

Speaker Bio: Anand Joshi, independent consultant and principal analyst, Tractica
Anand Joshi is an independent consultant focused on computer vision and deep learning chipsets. He actively advises Tier 1 semiconductor and device companies on product definition, development, market strategy, and research. He is also a principal analyst at Tractica, where he has published numerous market reports on AI chipsets, AI infrastructure, computer vision, video analytics, and 3D technologies. Anand is a technology industry veteran with more than 20 years of experience. During the course of his career, he has successfully built startup businesses as well as business units within larger companies. He holds an MBA from the University of California, Irvine, and an MS in electrical engineering from Virginia Polytechnic Institute and State University.

Jeff Bier – Founder, Embedded Vision Alliance and President, BDTI
Jeff Bier is the founder of the Embedded Vision Alliance (www.embedded-vision.com), a partnership of 90+ technology companies that works to enable the widespread use of practical computer vision. He is also the General Chairman of the Embedded Vision Summit, the only conference devoted to enabling product creators to build better products using computer vision, at the edge and in the cloud. In addition, Jeff is president of BDTI (www.BDTI.com). For over 25 years, he has led BDTI in helping hundreds of companies choose the right processors and develop optimized software for demanding applications in computer vision, deep learning, audio, and video. Jeff is a frequent keynote and invited speaker at industry conferences and writes the popular monthly column "Impulse Response". Jeff earned B.S. and M.S. degrees in electrical engineering from Princeton University and U.C. Berkeley.

Please join us on December 4 at 6 PM.

Agenda:
6-7 PM: Networking
7-7:10 PM: Welcome and sponsor's message
[masked] PM: Presentation 1 - Anand on the state of the computer vision/AI markets
[masked] PM: Presentation 2 - Jeff on computer vision technology and trends
[masked] PM: Q&A and wrap-up
- Innovations in sensing and application developments for autonomous cars
In this meetup, we'll feature two speakers. Indu Vijayan from AEye will talk about autonomous cars and how LiDAR can be used for better decision making in autonomous driving. Deepak Boppana from Lattice Semiconductor will talk about building edge-based AI applications for automotive using FPGAs.

Agenda:
6-7 PM: Pizza and networking
7-7:10 PM: Welcome message
7:10-7:45 PM: Presentation 1: iDAR - The speed of light in perception
7:45-8:30 PM: Presentation 2: Developing low-power, context-aware, personalized in-car infotainment solutions using FPGAs

Topic 1: iDAR - The speed of light in perception
Speaker: Indu Vijayan, Product Manager, AEye
Indu will speak about AEye's iDAR sensor and how it can be combined with LiDAR for innovative sensing.
Speaker Bio: Indu is a specialist in systems, software, algorithms, and perception for self-driving cars. As the technical product manager at AEye, she leads development of the company's leading-edge artificial perception system for autonomous vehicles. Prior to AEye, Indu spent five years at Delphi/Aptiv, where, as a senior software engineer on the Autonomous Driving team, she played a major role in bridging ADAS sensors and algorithms and extending them for mobility. She holds a B.Tech. in Computer Science from India's Amrita University and an MS in Computer Engineering from Stony Brook University.

Topic 2: Developing low-power, context-aware, personalized in-car infotainment solutions using FPGAs
Speaker: Deepak Boppana, Senior Director, Product and Segment Marketing, Lattice Semiconductor
Deepak will give an overview of how FPGAs can be used for tasks such as human presence detection, local face recognition, and key-phrase detection to personalize in-car infotainment systems.
Speaker Bio: Deepak Boppana is the senior director of product and segment marketing at Lattice Semiconductor. Since joining Lattice in 2012, Deepak has been driving corporate growth initiatives focused on edge connectivity and computing, including wireless heterogeneous networks, embedded vision, and artificial intelligence/machine learning. He has more than 15 years of semiconductor product management and business development experience, including prior strategic marketing roles at Intel (Altera). Deepak holds a Master of Science degree in Electrical Engineering from Villanova University and has been quoted extensively in technical articles, analyst interviews, and press/trade publications.
- From Computer Vision to Robotic Vision
In this meetup, sponsored by Wave Computing and RSIP Vision, we'll have three talks. Majid Bemanian from Wave Computing will talk about the current state of computer vision technology. Next, Max Allan from Intuitive Surgical will discuss his company's revolutionary approach to combining computer vision with robotics in medical surgery. Finally, Pulkit Agrawal, a PhD student at UC Berkeley and Chief Architect of SafelyYou, will present a new approach in which robots learn via biological sensorimotor techniques.

Tentative agenda:
- 6:00 - 6:45: Pizza and networking
- 6:45 - 7:00: Welcome and sponsor message
- 7:00 - 8:30: Presentations and Q&A
- 8:30 - 8:45: Additional Q&A and wrap-up

See further descriptions below.

Majid Bemanian: Advances in Computer Vision
In this talk, Majid will discuss the latest advancements in computer vision, from processing needs to hardware and software requirements, for applications spanning the datacenter to the edge. He'll address key topics including parameters for both training and inferencing, and highlight best practices for evaluating and fine-tuning these parameters for computer vision applications.
Bio: Majid is the Director of Marketing for Wave Computing, where he is responsible for leading the market strategy for Wave's MIPS IP Business Unit. He also co-chaired the prpl Foundation's security working group, focused on developing open standards and APIs for next-generation security solutions. Majid has more than 30 years of high-tech industry experience at companies including Amdahl Communications, Ascom-Timeplex, Encore Video, Raytheon Semi, LSI Logic, AppliedMicro, and many early-stage startups. He is also an inventor on more than 10 U.S. patents.

Max Allan: Using computer vision for robotic surgery
The talk will focus on the history and current state of the art in robotic minimally invasive surgery and how computer vision and machine learning can potentially revolutionize this field.
Bio: Max Allan works as a computer vision engineer at Intuitive Surgical in Sunnyvale. His work is focused on applying computer vision and machine learning algorithms to build products for the da Vinci surgical robot. He is originally from the U.K., where in 2017 he received a PhD from UCL on the detection and tracking of surgical instruments for laparoscopic and robotic surgery.

Pulkit Agrawal: Computational Sensorimotor Learning
An open question in artificial intelligence is how to endow agents with the common-sense knowledge that humans naturally seem to possess. A prominent theory in child development posits that human infants gradually acquire such knowledge through the process of experimentation. According to this theory, even the seemingly frivolous play of infants is a mechanism for them to conduct experiments and learn about their environment. Inspired by this view of biological sensorimotor learning, I will present my work on building artificial agents that use the paradigm of experimentation to explore and condense their experience into models that enable them to solve new problems. I will discuss the effectiveness of my approach, and open issues, using case studies of a robot learning to push objects, manipulate ropes, and find its way in office environments, and of an agent learning to play video games merely based on the incentive of conducting experiments.
Bio: Pulkit is a Ph.D. student in the department of computer science at UC Berkeley. He is advised by Dr. Jitendra Malik, and his research spans robotics, deep learning, computer vision, and computational neuroscience. Pulkit completed his bachelor's in Electrical Engineering at IIT Kanpur and was awarded the Director's Gold Medal. His work has appeared multiple times in MIT Tech Review, Quanta, New Scientist, NY Post, etc. He is a recipient of the Signatures Fellow Award, the Fulbright Science and Technology Award, the Goldman Sachs Global Leadership Award, OPJEMS, the Sridhar Memorial Prize, and IIT Kanpur's Academic Excellence Awards.
- Learn all about choosing the right dataset for AI apps and modern object detection
• What we'll do
In this meetup, sponsored by AMD, we'll have two great speakers. Mike Schmit from AMD will talk about how to make your AI application better with a proper dataset and about the common problems caused by training with bad data. Mike will show numerous examples of how errors in your dataset can lead to bad results. Waleed Abdulla will talk about modern object detection and instance segmentation networks.

Tentative agenda:
- 6-7 PM: Pizza and networking
- 7 PM: Welcome and sponsor message
- 7-7:45: First presentation and Q&A
- [masked]: Second presentation and Q&A

Presentation 1: How to Create the Right Dataset for Your Application
When training deep neural networks, having the right training data is key. In this talk, I will explore what makes a successful training dataset, common pitfalls, and how to set realistic expectations. I'll illustrate these ideas using several object classification models that have won the annual ImageNet challenge. By analyzing accurate and inaccurate classification examples (some humorous and some amazingly accurate), you will gain intuition about the workings of neural networks. My results are based on my personal dataset of over 10,000 hand-labeled images from around the world.
Speaker bio: Mike Schmit is the Director of Software Engineering for computer vision and machine learning at AMD. Mike has been immersed in code optimization for many years. He was the chief software architect for the first 8086-based system to control experiments on the Space Shuttle, authored the formative book on optimizing code for the Pentium, and developed and managed the team that built the first software DVD player. Shortly after that he joined ATI, where he managed the software video codec team for many years; the team eventually began working on computer vision optimizations and then the OpenVX computer vision standard. Mike has given many industry talks on his team's optimizations for massively parallel GPUs, including recent talks on 360 video stitching at VRLA, SVVR, and Oculus OC3.

Presentation 2: Learn How Modern Object Detection and Instance Segmentation Networks Work
In this presentation, Waleed will explain how object detection and instance segmentation models work and will cover lessons learned from building such a model, one that was picked up and used by thousands of deep learning developers.
Speaker bio: Waleed Abdulla is a deep learning engineer focusing on computer vision applications. He writes about deep learning and builds open source projects. His most recent project, Mask RCNN, is one of the top instance segmentation tools on GitHub, used by thousands of deep learning developers. He's an independent consultant, while also working on his next startup project. Before getting into deep learning, he built a startup in the news and social media space, raised VC funding, and was in 500 Startups and, before that, the Facebook fbFund. He also served on the board of Hacker Dojo, a non-profit co-working space, and is often active in organizing technology events, meetups, and hackathons.
- Learn about localization techniques used for autonomous vehicles
• What we'll do
In this meetup, we'll hear from two speakers. Ramona Stefanescu, a self-driving car engineer working on Level 4 autonomous cars, will talk about different techniques for using visual odometry to achieve Level 4 autonomy. This will be followed by a talk from San Gunawardana on using LiDAR datasets to protect people and the environment.

Agenda:
6-7 PM: Networking and pizza
7-7:10 PM: Welcome message
7:10-8 PM: Presentations
8-8:30 PM: Q&A

Speaker Bios:
Ramona Stefanescu is a self-driving car engineer working on implementing a localization and mapping framework to enable Level 4 autonomous driving. She received her Ph.D. and M.S. in Computational Mechanics from the University at Buffalo and a B.S. in Computer Science and Mechanical Engineering from Politehnica University of Bucharest. She has pioneered a framework for using high-fidelity simulations and innovative statistical analysis for large datasets and distributed systems. Much of her work has been published in premier scientific journals and presented at numerous international conferences.

San Gunawardana is co-founder and CEO of Enview, a startup that automates the extraction of insight from massive LiDAR datasets. After finishing a PhD in aerospace engineering at Stanford, San went to Afghanistan, where he combined data analytics and remote sensing to detect threats and prevent incidents. San is excited to apply those insights to help solve impactful problems that benefit people and the environment. Previously, San did computer vision work at NASA, built imaging satellites with the Air Force, and was an early employee at the aviation startup ICON Aircraft.

If you are driving:
* Find the blue and white Walmart Visitor Parking signs.
* If there is a cone blocking the visitor parking, you can move it.
* Make a note of your license plate number to provide to the guard.
* If there's no parking by the 600 building, please look near the 860 building.
- Image Processing for Human and Computer Vision
The Computer Vision and Deep Learning Meetup's next event will be hosted on April 25 at 6 PM. Judd Heape from Apical (now part of ARM) and Dave Tokic from Algolux will present on image processing and tuning for computer vision. They will cover topics such as:
- Image signal processing and image quality considerations for human perception and computer vision
- Pre-processing approaches and challenges (de-noising, HDR, blur, distortion, color, illumination, ...)
- Performance and power considerations
- Tuning and optimization

The event will take place at the Renesas offices in Santa Clara.

Tentative agenda:
6-7 PM: Networking and pizza
7-7:15: Welcome remarks
[masked]: Presentation
[masked]: Q&A

Speaker Profiles:
Judd Heape is Sr. Director, Marketing in the IVG (Imaging & Vision Group) at ARM. Before Apical was acquired by ARM in 2016, Judd served for 2½ years at Apical Inc. as VP of Product Applications and two years at Apical Limited, in Loughborough, UK, as VP of Engineering. Prior to Apical, Judd served three years at Altera Corporation, where he was responsible for developing the "ASSP business model" for video surveillance solutions and also conceived and specified the MAX 10 family of low-cost FPGAs. Before joining Altera, Judd spent eight years at QuickLogic Corporation in roles ranging from FAE for the South-central U.S. region to senior director of systems engineering. Prior to QuickLogic, Judd spent nine years at Texas Instruments doing FPGA design for DLP products and DSP-based printing engines, including two years in Tsukuba, Japan, working on multimedia research projects for the 'C6x DSP. Judd has five issued U.S. patents and holds a BE in Electrical Engineering from the Georgia Institute of Technology.

Dave Tokic is vice president of marketing and strategic partnerships at Algolux, Inc. Dave has over 20 years of experience in the semiconductor and electronic design automation industries. He most recently served as senior director of worldwide strategic partnerships and alliances at Xilinx, driving solution and services partnerships across all markets. Previously, he held executive marketing and partnership positions at Cadence Design Systems, Verisity Design, and Synopsys, and has also served as a marketing and business consultant for the Embedded Vision Alliance. Dave has a degree in Electrical Engineering from Tufts University.
- Recent Developments in Embedded Vision: Algorithms, Processors, Tools and Applications
Folks, Jeff Bier from BDTI, one of the well-known names in the embedded vision industry, will be speaking at our March event, which will also include pizza. See below for more details.

Note: Please register by Monday, March 6, 5 PM to comply with Synopsys security requirements.

-- Recent Developments in Embedded Vision: Algorithms, Processors, Tools and Applications

It's now clear that embedded vision is on its way to becoming a ubiquitous technology. From automotive safety to retail analytics to trash collection, the ability to deploy visual intelligence at scale is changing industries. Recognizing this opportunity, enabling-technology suppliers are introducing new offerings – processors, sensors, algorithms, development tools, services, and standards – at an unprecedented rate. In this presentation, Jeff will provide an update on important recent developments in these enabling technologies. Jeff will also highlight some of the most interesting and promising end products and applications incorporating vision capabilities.

You will learn:
1. Recent developments in embedded vision processors and sensors
2. New trends and standards in embedded vision algorithms, development tools, and APIs
3. Challenges and techniques in embedded vision product development
4. Interesting recent applications incorporating embedded vision
5. Trends driving the future of embedded vision for product development

Who should attend:
1. Anyone wanting to learn about the expanding applications of embedded vision and the technologies enabling it
2. Developers, engineers, designers, and managers considering the use of embedded vision in new products or services
3. Current embedded vision users and developers who want to better understand the trends driving vision applications and technology
4. OEMs and end-product developers
- Computer vision and self-driving cars
In the first meetup, on September 28, CEVA, one of the top computer vision processor IP providers, will present on the state of the art in self-driving cars and how computer vision is enabling it.

Agenda:
6-7 PM: Networking
7-7:15 PM: Welcome remarks and introduction to computer vision applications in automotive
[masked] PM: Presentation from CEVA
[masked] PM: Q&A and concluding remarks