• How GumGum Serves its CV at Scale

    GumGum

    Given the rapidly growing utility of computer vision applications, how do we deploy these services in high-traffic production environments to generate business value? Here we present GumGum’s approach to building infrastructure for serving computer vision models in the cloud. We’ll also demo code for building a car make-model detection server.

    Topics:
    -Multivitamin: an open-sourced Python framework for serving library-agnostic machine learning models
    -Containerization: packaging everything you need into a single portable artifact
    -CI/CD: automating builds and releases with Drone CI
    -Custom auto-scaling: using AWS Lambda to scale our infrastructure based on business metrics

    Speakers:
    Greg Chu is a Senior Computer Vision Scientist at GumGum, where he works on both the training and large-scale deployment of object detection and recognition models. These models are applied within GumGum's products for contextual advertising and sports sponsorship analytics. Greg has a background in biomedical physics; in his Ph.D. research he developed tumor segmentation models to assess the clinical progression of patients in FDA clinical drug trials.
    Corey Gale is a Senior DevOps Engineer at GumGum. He works on automating cloud infrastructure for highly scalable systems using open-source technologies. With his background in robotics engineering, Corey is a believer that through automation, anything is possible. He is also obsessed with process (measure all the things!), cost reduction, and entrepreneurship (Corey created a food delivery app in 2012, well before they became mainstream).

    Location: [masked]th St. Santa Monica, 4th Floor

    Schedule:
    7:00 pm: meet, greet & eat
    7:30 (ish) pm: begin
    8:00 pm: doors close to new entrants - so make sure to arrive before this time!

    Parking: Santa Monica Library Parking Garage. An underground parking structure can be accessed from 7th Street between Santa Monica Blvd. and Arizona Ave. The first thirty minutes are free. Rates are $1 per hour for the first two hours and thirty minutes; after that, the rate is $1 per thirty minutes. The weekday daily maximum is $10; the weekend daily maximum is $5. Parking on level P1 is restricted to stays of three hours or less. Parking spaces for those displaying disabled placards or plates are on P1. Visitors parking for three hours or longer should park on levels P2 or P3. The Library and GumGum do not provide validation for parking. Street-level parking lot: a limited number of metered one-hour parking spaces are available; entry to the lot is from 7th Street. Meters: located on 7th Street outside our office.
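    The custom auto-scaling topic above, resizing compute from a business metric rather than CPU load, can be sketched roughly as follows. This is our own illustrative code, not GumGum's: the CloudWatch namespace/metric, the per-instance throughput figure, and the Auto Scaling group name are all hypothetical, and only the pure sizing function is meant to run outside AWS.

```python
import math

def desired_capacity(backlog, per_instance_throughput,
                     min_instances=2, max_instances=40):
    """Map a business metric (e.g., images waiting in a queue) to an
    instance count, clamped to the scaling group's allowed range."""
    needed = math.ceil(backlog / per_instance_throughput)
    return max(min_instances, min(max_instances, needed))

def lambda_handler(event, context):
    """Scheduled AWS Lambda entry point (hypothetical names throughout)."""
    import boto3  # available in the Lambda runtime
    from datetime import datetime, timedelta
    cw = boto3.client("cloudwatch")
    stats = cw.get_metric_statistics(
        Namespace="CV/Serving", MetricName="ImageBacklog",  # hypothetical metric
        StartTime=datetime.utcnow() - timedelta(minutes=5),
        EndTime=datetime.utcnow(), Period=300, Statistics=["Average"])
    backlog = stats["Datapoints"][0]["Average"] if stats["Datapoints"] else 0
    boto3.client("autoscaling").set_desired_capacity(
        AutoScalingGroupName="cv-model-servers",  # hypothetical ASG name
        DesiredCapacity=desired_capacity(backlog, per_instance_throughput=50),
        HonorCooldown=True)
```

    Keeping the sizing rule as a pure function makes the scaling policy easy to unit-test independently of any AWS calls.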

  • The Age of Artificial Content: An Open Discussion about Content Authenticity

    Every technological development, as we know, has unintended consequences. The growth of generative modeling has led to the rise of “deepfakes”: fake images and video that pass as legitimate to the untrained eye. DARPA has spent $68 million on research to detect fraudulent content, as this technology is expected to be leveraged for everything from corporate espionage to government propaganda. On the flip side, this same technology is fueling a revolution and opening the door for groundbreaking advances in computer vision. It's been a while since we've met! In the spirit of getting our group back in motion, we're holding an open discussion on the arms race between generative modeling and content authenticity detection. Please come prepared with your ideas and opinions on the latest research related to these topics.

    Background:
    -Overview of generative modeling
    -Overview of image forensics

    Forward-thinking:
    -Advancing generative techniques
    -Advancing image forensics

    The impact of generative modeling:
    -Potential applications of generative modeling
    -Ethics of advancing generative techniques

    Location: [masked]th St. Santa Monica, 4th Floor

    Schedule:
    7:00 pm: meet, greet & eat
    7:30 (ish) pm: open discussion
    8:00 pm: doors close to new entrants - so make sure to arrive before this time!

    Parking: Santa Monica Library Parking Garage. An underground parking structure can be accessed from 7th Street between Santa Monica Blvd. and Arizona Ave. The first thirty minutes are free. Rates are $1 per hour for the first two hours and thirty minutes; after that, the rate is $1 per thirty minutes. The weekday daily maximum is $10; the weekend daily maximum is $5. Parking on level P1 is restricted to stays of three hours or less. Parking spaces for those displaying disabled placards or plates are on P1. Visitors parking for three hours or longer should park on levels P2 or P3. The Library and GumGum do not provide validation for parking. Street-level parking lot: a limited number of metered one-hour parking spaces are available; entry to the lot is from 7th Street. Meters: located on 7th Street outside our office.

  • AI on the Edge

    Overview: We are in a massive transition from cloud-centric AI-based computer vision systems to the edge. How do resource-intensive AI applications work on resource-constrained edge devices? Who is driving standards like OpenFog, which is defining much of the 5G network landscape?

    Topics:
    -Training on the cloud and deployment on the edge
    -The move to optimized platforms that leverage edge, cloud, and hybrid environments
    -Resource-constrained AI optimization
    -Large-scale adoption with OpenFog standardization

    Speakers:
    Genquan Stone Duan is the Cofounder and CTO of WiZR, a video analytics platform that leverages IoT and edge computing alongside industry-leading AI for business security and intelligence. After receiving his Ph.D. in Computer Vision from Tsinghua University, Stone led facial algorithm initiatives (detection, tracking, alignment, attributes, and recognition) for Microsoft Research Asia. He has published multiple papers in the field of computer vision.
    Andrew Pierno leads Engineering for WiZR, a video analytics platform that leverages IoT and edge computing alongside industry-leading AI for business security and intelligence. Prior to WiZR, Andrew led development for Zuma Ventures, built the first peer-to-peer dog walking app, and worked with Techstars alumni Sync on Set, who recently won an Emmy for their software. A UC Berkeley grad, he has built 20+ projects throughout his career and is passionate about the intersection of art and AI.

    Location: [masked]th St. Santa Monica, 4th Floor

    Schedule:
    7:00 pm: meet, greet & eat
    7:30 (ish) pm: main presentation, followed by a healthy Q&A
    8:00 pm: doors close to new entrants - so make sure to arrive before this time!

    Parking: GumGum is happy to offer a parking lot for attendees of this meetup. The lot is located at 1430 Lincoln Blvd, which you can access via the alleyway off of Broadway. The lot is between Lincoln Blvd and the alleyway. There is also paid parking at the Santa Monica Library. Please note that the open parking lot does not have any sign indicating its association with GumGum.

  • A Simple, Remote, Video Based Breathing Monitor

    Overview: Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist on the market, some with vital-sign monitoring capabilities, but none remote. We present a simple yet efficient real-time method for extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked on a frame-by-frame basis using the well-known Lucas-Kanade algorithm. A generalized likelihood ratio test is then applied to each of the many interest points to detect which are moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.

    Speaker:
    Nir Regev is a research scientist with over 21 years of experience in developing algorithms. He loves engineering problems that are seemingly insoluble, specializing in computer vision, radar signal processing, multi-target tracking, deep learning classification, radar micro-Doppler deep-learning-based target classification, optimization, and statistical signal processing.

    Schedule:
    7:00 pm: meet, greet & eat
    7:30 (ish) pm: main presentation, followed by a healthy Q&A
    8:00 pm: doors close to new entrants - so make sure to arrive before this time!

    Parking: GumGum is happy to offer a parking lot for attendees of this meetup. The lot is located at 1430 Lincoln Blvd, which you can access via the alleyway off of Broadway. The lot is between Lincoln Blvd and the alleyway. There is also paid parking at the Santa Monica Library.

    Location: [masked]th St. Santa Monica, 4th Floor

    Please note that the open parking lot does not have any sign indicating its association with GumGum. Here is a Google Maps street view of the lot: http://bit.ly/2ciVHdi
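    As a rough illustration of the spectral-estimation stage described above, here is a minimal Pisarenko-style frequency estimator for a single noisy sinusoid. This is our sketch, not the speaker's code: the point detection and Lucas-Kanade tracking are assumed to have already produced a per-point displacement signal, and the sampling rate and function names are our own.

```python
import numpy as np

def pisarenko_frequency(x):
    """Estimate the frequency (radians/sample) of a single noisy sinusoid
    via Pisarenko harmonic decomposition on a 3x3 autocorrelation matrix."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    # Biased sample autocorrelation at lags 0, 1, 2.
    r = [np.dot(x[: len(x) - k], x[k:]) / len(x) for k in range(3)]
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    # Noise eigenvector = eigenvector of the smallest eigenvalue.
    _, V = np.linalg.eigh(R)
    v = V[:, 0]
    # Its polynomial has conjugate roots e^{+/- j*omega} on the unit circle.
    return float(abs(np.angle(np.roots(v)[0])))

def breathing_rate_bpm(signal, fs):
    """Convert the estimated angular frequency to breaths per minute."""
    return pisarenko_frequency(signal) * fs / (2 * np.pi) * 60
```

    The full method would run this per tracked point and fuse the per-point estimates; the sketch covers only the single-signal frequency tracker.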

  • ML for Autonomous Driving

    Overview: I will discuss the ML design space for autonomous driving, with a focus on deep convolutional neural networks. In an autonomous driving system, which subsystems (if any) should and could be substituted by an implicit algorithm trained on data? And if so, how should these ML systems be designed? Can ML methods be used outside the car itself, in the larger context of a future transportation system?

    Speaker:
    Oscar Beijbom is the Machine Learning lead at nuTonomy (http://www.nutonomy.com/), a Boston-based startup developing a full-stack autonomous driving solution.

    Schedule:
    7:00 pm: meet, greet & eat
    7:30 (ish) pm: main presentation, followed by a healthy Q&A
    8:00 pm: doors close to new entrants - so make sure to arrive before this time!

    Parking: GumGum is happy to offer a parking lot for attendees of this meetup. The lot is located across the street from the GumGum office at: [masked]th St. Santa Monica, 4th Floor. Please note that the open parking lot does not have any sign indicating its association with GumGum. Here is a Google Maps street view of the lot: http://bit.ly/2ciVHdi

  • Efficiently Compressing Deep CNNs

    Overview: While going deeper has been shown to improve the performance of convolutional neural networks (CNNs), going smaller has received increasing attention recently due to its attractiveness for mobile/embedded applications. How to design a small network that retains the performance of large, deep CNNs (e.g., Inception Nets, ResNets) remains an active and important topic. Although there are already intensive studies on compressing the size of CNNs, a considerable drop in performance is still a key concern in many designs. This paper addresses this concern with several new contributions. First, we propose a simple yet powerful method for compressing the size of deep CNNs based on parameter binarization. The striking difference from most previous work on parameter binarization/quantization lies in the different treatment of 1 × 1 convolutions and k × k convolutions (k > 1): we only binarize the k × k convolutions into binary patterns. The resulting networks are referred to as pattern networks. By doing this, we show that previous deep CNNs such as GoogLeNet and Inception-type Nets can be compressed dramatically with a marginal drop in performance. Second, in light of the different functionalities of 1 × 1 convolutions (data projection/transformation) and k × k convolutions (pattern extraction), we propose a new block structure that combines both concatenation and addition of 1 × 1 and k × k convolved feature maps, based on which we design a small network with 1 million parameters. Combined with our parameter binarization, we achieve better performance on ImageNet than similarly sized networks, including the recently released Google MobileNets.

    Speaker:
    Dr. Xiaoyu Wang is currently a Research Scientist at Snapchat. https://www.linkedin.com/in/xiaoyu-wang-a2364014/

    Schedule:
    7:00 pm: meet, greet & eat
    7:30 (ish) pm: main presentation, followed by a healthy Q&A
    8:00 pm: doors close to new entrants - so make sure to arrive before this time!

    Parking: GumGum is happy to offer a parking lot for attendees of this meetup. The lot is located across the street from the GumGum office at: [masked]th St. Santa Monica. Please note that the open parking lot does not have any sign indicating its association with GumGum. Here is a Google Maps street view of the lot: http://bit.ly/2ciVHdi
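    As a toy illustration of the binarization idea in the abstract (our own sketch, not the authors' code), the function below binarizes only the k × k kernels of a weight tensor into a single per-filter magnitude times a sign pattern, leaving 1 × 1 projections in full precision. The (out, in, k, k) layout convention is our assumption.

```python
import numpy as np

def binarize_kxk(W):
    """Binarize a conv weight tensor laid out as (out, in, k, k).

    k x k kernels (k > 1) become sign(W) scaled by one per-filter
    magnitude, so each filter stores only sign bits plus one float;
    1 x 1 convolutions are returned untouched, in full precision."""
    if W.shape[-2] == 1 and W.shape[-1] == 1:
        return W
    alpha = np.abs(W).mean(axis=(1, 2, 3), keepdims=True)  # per-filter scale
    return alpha * np.sign(W)
```

    The storage saving is the point: each 3 × 3 float32 kernel (9 × 32 bits per input channel) collapses to 9 sign bits per channel plus one shared scale per filter.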

  • A Panel Discussion on VR/AR

    Overview
    With Facebook's announcement of their Augmented Reality platform at the F8 developer conference, as well as Apple's announcement of their AR platform at their Worldwide Developers Conference, AR/VR and "mixed reality" are at the forefront of exciting developments in the tech community. GumGum, Umojify, and the VR/AR Association of Los Angeles are proud to host a panel discussion on this topic, which will tackle some of the following questions: How are computer vision and machine learning currently being utilized to develop AR/VR experiences? What specific methods are companies using to develop their AR/VR experiences, and what are the present-day technical hurdles? What does the future hold for AI's impact on advancements in the AR/VR space?

    Speakers
    Rob Vickery, Co-Founder of Stage Venture Partners - Moderator
    Born and raised in the West Country of the United Kingdom, Rob is the Co-Founder of Stage Venture Partners, a venture capital fund that invests in enterprise software startups. In the fund's words: "We saw an opportunity to build a differentiated venture capital fund focused on the seed stage of the technology market. We invest in founders who are uniquely credible, startups that can only be built today, and companies where we can be the best investor on the cap table." Prior to founding Stage Venture Partners, Rob created the Entertainment & Technology Practice for BNY Mellon in Los Angeles, where he was responsible for building a wealth management offering suitable for professionals in the entertainment and technology markets. His clients included a number of A-list talent and serial entrepreneurs. Before BNY Mellon, at the age of 29, Rob was North America Director for Lloyds International, where he built and managed the strategy for all aspects of Lloyds' business, including compliance and business strategy, and managed a team of over 100 colleagues. As a result of his strategic direction, Lloyds North America went from being the worst-performing jurisdiction to the best, by a large margin. Before moving to the US in 2010, Rob was part of a team of four at Lloyds that developed white-labeled financial services propositions, one of which was Sainsbury's Bank (a grocery brand), which is now one of the leading providers of financial products in the UK. Before joining Lloyds, Rob secured positions on the graduate programs of Saatchi & Saatchi and Ecclesiastical Insurance. A dedicated philanthropist, Rob serves on the boards and committees of the British Academy of Film and Television Arts Los Angeles (BAFTA), International Trade Education Programs (ITEP), the British American Business Council (BABC), and South Central Scholars, and ran the Academy of Business & Entrepreneurship at Dorsey High School, which is based in Compton, Los Angeles. With a keen interest in education reform, Rob pioneered a new linked-learning program that combined real-life entrepreneurs with actual curriculum at a South-Central Los Angeles high school, which resulted in a graduation-rate increase from 64% to 85% amongst his students. Rob still guest lectures at numerous schools in the UK and US. In his downtime, Rob is an amateur paleontologist, outdoors enthusiast, and snowboarder, and is addicted to all forms of media (especially video games).
    Dulce Baerga, CEO of River Studios
    Dulce Baerga has been producing interactive content for over 20 years. She has produced and built sites ranging from entertainment-based portals to product-based community sites. She is a self-taught developer who is particularly passionate about augmented reality and virtual reality platforms. She is a full-stack developer, frontend engineer, and network/IT manager. Her skill set includes developing concepts, programming demos, creating UI/UX design, scripting user interactivity, building 3D simulations, creating virtual products, and building cloud networks. In May of 2017, she was made the CEO of River Studios, an AR/VR/MR content startup in the River Accelerator.
    Andrew Couch, CEO of Candy Lab AR
    Andrew Couch is the founder and CEO of Candy Lab AR, a location-based Augmented Reality technology company headquartered in Orange County, CA. Andrew has worked in the Augmented Reality space since 2012, when the Candy Lab team launched their first location-based augmented reality app, CacheTown. With his knowledge of land navigation and his experience with location-based products such as Blue Force Tracker from his days in the U.S. Infantry, Andrew knew that the technology would be successful with consumers and would help brands not only drive engagement but also enhance the user experience along the way. With Andrew's passion and vision over the years, the Candy Lab AR team has built the only Augmented Reality engine that combines GPS, beacons, and a real-time content management system. *Basically, you just need to have a concept and use case; then we wrap that around our code like a Southern California burrito, extra jalapeños, and [BAM] apps are published. An avid speaker on Augmented Reality trends and the future growth of AR, Andrew is also a board advisor for the global VR/AR Association and its Co-Chair for the location-based services and mapping committee.
    Final speaker TBA

    Schedule
    7:00 pm: meet, greet & eat
    7:30 (ish) pm: main presentation, followed by a healthy Q&A
    8:00 pm: doors close to new entrants - so make sure to arrive before this time!

    Parking
    GumGum is happy to offer a parking lot for attendees of this meetup. The lot is located across the street from the GumGum office at: [masked]th St. Santa Monica. Please note that the open parking lot does not have any sign indicating its association with GumGum. Here is a Google Maps street view of the lot: http://bit.ly/2ciVHdi

  • The State of the Art in Similarity Learning

    Overview: For applications such as facial recognition/verification or image-duplicate detection, it is important to be able to learn a measure of similarity between two images. We will focus on the state-of-the-art techniques being employed to accomplish this task - such as siamese networks, triplet losses, and their many variations - and we will discuss the difficulties that present themselves in using each of these methods.

    Schedule:
    7:00 pm: meet, greet & eat
    7:30 (ish) pm: main presentation, followed by a healthy Q&A
    8:00 pm: doors close to new entrants - so make sure to arrive before this time!

    Speaker:
    Jacob Richeimer
    Jacob has been a Computer Vision Scientist at GumGum since 2015. Before that, he worked at a start-up in the 3D scanning industry and at several internships in the Computer Vision field.

    Parking: GumGum is happy to offer a parking lot for attendees of this meetup. The lot is located across the street from the GumGum office at: [masked]th St. Santa Monica. Please note that the open parking lot does not have any sign indicating its association with GumGum. Here is a Google Maps street view of the lot: http://bit.ly/2ciVHdi
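    To make the triplet-loss idea concrete, here is a minimal NumPy version of the standard hinge formulation (our own illustrative code, not the speaker's): the anchor-positive distance is pushed below the anchor-negative distance by at least a margin.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Batched hinge triplet loss over embedding vectors.

    Encourages ||a - p||^2 + margin <= ||a - n||^2 for every triplet;
    triplets already satisfying the margin contribute zero loss."""
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)  # squared distances
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    return float(np.maximum(0.0, d_ap - d_an + margin).mean())
```

    In a triplet network, one shared embedding network processes all three images, and only "hard" triplets (those still violating the margin) produce a gradient, which is why triplet mining matters in practice.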

  • The Recent Evolution of Object Detection

    Overview: Techniques for object detection have advanced tremendously in the recent past, tracing from Bag-of-Visual-Words-based representations paired with Support Vector Machines, to Convolutional Neural Networks, and further to architectures such as Faster R-CNN (which uses region proposals) and the Single Shot MultiBox Detector. At GumGum, we are continuously exploring the state of the art in order to enhance our applications that rely on detection. This talk will trace the evolution of object detection over the last few years and discuss the inner workings of several neural network architectures that are now widely used for tackling the detection problem. We will compare the performance of these techniques and discuss each of their pros and cons.

    Schedule:
    7:00 pm: meet, greet & eat
    7:30 (ish) pm: main presentation, followed by a healthy Q&A
    8:00 pm: doors close to new entrants - so make sure to arrive before this time!

    Speaker:
    Kunal Saluja
    Kunal has previously worked at several early-stage robotics and CV startups. After completing his Master's in Robotics at Johns Hopkins University, Kunal joined GumGum in 2016 as a Computer Vision Scientist. His current projects revolve around deep learning and computer vision, with a focus on object detection.

    Parking: GumGum is happy to offer a parking lot for attendees of this meetup. The lot is located across the street from the GumGum office at: [masked]th St. Santa Monica. Please note that the open parking lot does not have any sign indicating its association with GumGum. Here is a Google Maps street view of the lot: http://bit.ly/2ciVHdi
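    One small, self-contained step shared by essentially all the detectors mentioned above is non-maximum suppression over raw box predictions. Here is a plain NumPy sketch (our illustration, not any particular framework's implementation), with boxes in [x1, y1, x2, y2] form:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop remaining boxes overlapping it above iou_thresh, and repeat."""
    order = np.argsort(scores)[::-1]  # indices, best score first
    keep = []
    while order.size > 0:
        i = int(order[0])
        keep.append(i)
        # Intersection of the kept box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # keep only weak overlaps
    return keep
```

    Detectors differ in how they generate the candidate boxes (region proposals vs. a single shot over default boxes), but most apply a suppression step like this at the end.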
