- Advanced Eigenvalue Algs, Julia Metaprogramming, and Common Lisp for Julians
We are happy to present a great three-speaker event at Rigetti Computing in Berkeley; at the beginning of the event we will hear how Julia is used in Rigetti's development of quantum computers. This will be a wonderful event; see details below. Please join us! Dan Girshovich is a software engineer on the Quantum Computing team at Rigetti; we'll start with an overview from him of Julia's metaprogramming facilities, including expressions, macros, and generated functions. This overview will include real-world examples of metaprogramming in the Julia ecosystem, as well as tips for most effectively leveraging these features. Robert Smith is Director of Software Engineering at Rigetti; he will speak on "Common Lisp for Julia programmers" and on possible directions for Julia's future based on a decade of experience writing Lisp. Robert's abstract: "Common Lisp is an old language by today's standards. It has its roots in the early eighties and was formally standardized through ANSI in 1994. Despite its age, it has several high-quality free and commercial implementations, and it has many language features still not commonly found elsewhere. Some of these language features, like multiple dispatch and macros, have made their way into Julia." We'll then have a talk from our only non-Rigetti speaker: Brendan Gavin is a Machine Learning Research Scientist at Pilot.ai; he'll talk on "Exploring advanced eigenvalue algorithms with Julia". His abstract: "The traditional workflow for discovering new numerical algorithms consists of doing math on paper, and then implementing that math in a high-performance programming language like C++ or Fortran. This workflow requires a researcher to spend a lot of time and attention on software design, which often distracts from the work of algorithm development. Julia makes this process easier by allowing one to implement algorithms in a natural way, without having to sacrifice computational performance.
In this talk I'll discuss how this worked out for me when I used Julia to explore variations of the FEAST algorithm, an advanced technique for solving eigenvalue problems." Julia code is available, as well as an associated paper and Brendan's dissertation, which used the code. We are very grateful to Rigetti Computing for hosting us. Please thank them at the event!  https://www.rigetti.com/  https://www.linkedin.com/in/stylewarning  http://pilot.ai/  https://github.com/brendanedwardgavin/feastjl/  https://arxiv.org/abs/1801.09794  http://www.bgavin.net/wp-content/uploads/2018/07/dissertation.pdf
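To give a flavor of the metaprogramming topics Dan will cover, here is a minimal sketch (not from the talk) showing that Julia code is data (`Expr` trees) and that macros transform that data before it runs; the `@trace_expr` macro name is hypothetical:

```julia
# Quoting builds an expression object instead of evaluating the code.
ex = :(2 * x + 1)
@assert ex.head == :call      # the root of the tree is a function call

# A hypothetical macro that prints the expression it receives at
# macro-expansion time, then returns it (escaped) to run unchanged.
macro trace_expr(e)
    println("expanding: ", e)
    return esc(e)
end

x = 3
y = @trace_expr 2x + 1        # expands, then evaluates to 7
```

Generated functions take this one step further, producing specialized method bodies from the argument *types* at compile time.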
- Tim Wheeler on Algorithms for Optimization in Julia; Jane Herriman on NumFOCUS
BNY Mellon Innovation Center
We're happy to have Tim Wheeler talk to us on how Julia is used in his new book "Algorithms for Optimization". Tim is an engineer working on flying autonomous cars at Kitty Hawk. He got his Ph.D. in Aeronautics and Astronautics from Stanford, and co-authored Algorithms for Optimization with his mentor, Prof. Mykel Kochenderfer. Jane Herriman will also talk at the start of the event on NumFOCUS, which sponsors Julia and other science-focused open source software projects. Tim's abstract: "Algorithms for Optimization is a full-length optimization textbook that has Julia between the pages and under the hood. Forget pseudocode: because we are using Julia, every code block is real, executable code. Figures are automatically generated during compilation using Julia. Even CI testing is achieved using Julia. This talk covers what makes Julia a great language for authoring textbooks, how we wrote ours, and why you might be interested in these techniques too." Tim will also discuss POMDPs.jl, a Julia package for sequential decision making. Julia allows for a unique package structure that caters to problem authors, solver authors, and users simultaneously, while addressing a wide variety of sequential decision making problem forms. This event is hosted by the BNY Mellon Innovation Center, who will also provide food for the event; thank you! We'll eat at 6pm, and start the presentations at 6:30.  http://mitpress.mit.edu/books/algorithms-optimization  https://numfocus.org/community/mission  https://github.com/JuliaPOMDP/POMDPs.jl
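As a taste of the "every code block is real, executable code" approach, here is a minimal sketch (our own, not from the book) of the kind of self-testing snippet such a textbook can embed, basic gradient descent on f(x) = (x - 2)^2:

```julia
# Minimize f(x) = (x - 2)^2 by repeatedly stepping against the gradient.
f(x)  = (x - 2)^2
df(x) = 2 * (x - 2)              # analytic derivative of f

function gradient_descent(df, x0; α=0.1, steps=100)
    x = x0
    for _ in 1:steps
        x -= α * df(x)           # step downhill with learning rate α
    end
    return x
end

x_star = gradient_descent(df, 0.0)   # converges toward the minimizer x = 2
```

Because blocks like this actually run, they can be checked in CI exactly as Tim describes.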
- Growing a compiler: Getting to ML from the general-purpose Julia compiler
We're happy to have Viral Shah, one of the original designers of Julia and the CEO of Julia Computing, talk to us. Viral's abstract follows: Since we originally proposed the need for a first-class language, compiler, and ecosystem for machine learning (ML), a view that is increasingly shared by many, there have been plenty of interesting developments in the field. Not only have the tradeoffs in existing systems, such as TensorFlow and PyTorch, not been resolved, but they are clearer than ever now that both frameworks contain distinct “static graph” and “eager execution” interfaces. Meanwhile, the idea of ML models fundamentally being differentiable algorithms – often called differentiable programming – has caught on. Where current frameworks fall short, several exciting new projects have sprung up that dispense with graphs entirely, to bring differentiable programming to the mainstream. Myia, by the Theano team, differentiates and compiles a subset of Python to high-performance GPU code. Swift for TensorFlow extends Swift so that compatible functions can be compiled to TensorFlow graphs. And finally, the Flux ecosystem is extending Julia’s compiler with a number of ML-focused tools, including first-class gradients, just-in-time CUDA kernel compilation, automatic batching, and support for new hardware such as TPUs. This talk will cover the current state of our work, also recently presented at the CGO conference by Keno Fischer and Jameson Nash, and the way forward. We'll have pizza after the event.  https://en.wikipedia.org/wiki/Viral_B._Shah  https://julialang.org/blog/2017/12/ml&pl  https://julialang.org/blog/2018/12/ml-language-compiler  https://t.co/blURWb5xlC
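To make "differentiable programming" concrete, here is a minimal self-contained sketch (ours, not Flux's implementation) of the core idea behind first-class gradients: a tiny forward-mode automatic differentiation via dual numbers, built with nothing but Julia's multiple dispatch:

```julia
# A dual number carries a value and its derivative together.
struct Dual
    val::Float64   # f(x)
    der::Float64   # f'(x)
end

# Overload arithmetic so derivatives propagate by the usual calculus rules.
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)

# Any generic Julia function now differentiates itself:
f(x) = x * x + x            # f'(x) = 2x + 1

y = f(Dual(3.0, 1.0))       # seed derivative 1.0 for the input variable
# y.val is f(3) = 12, y.der is f'(3) = 7
```

Systems like Flux/Zygote instead transform the compiler's intermediate representation to get reverse-mode gradients, but the "gradients as ordinary program values" idea is the same.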
- Full-Stack GPU computing for AI using Julia
We're very happy to have Tim Besard and Mike Innes talk about "Full-Stack GPU computing for AI using Julia". Tim is a PhD student at Ghent University in Belgium working on compilers; Mike is an engineer with Julia Computing working on machine learning and GPU kernels. Mike and Tim are co-authors of an article that describes the ways that Julia is a great language for machine learning. We will have pizza at 6 and the talk at 6:30. Their abstract: "Learn how the Julia programming language can be used for GPU programming, both for (1) low-level kernel programming, and (2) high-level array and AI libraries. This full-stack support drastically simplifies code bases, and GPU programmers can take advantage of all of Julia's most powerful features: generic programming, n-dimensional kernels, higher order functions and custom numeric types. We'll give an overview of the compiler's implementation and performance characteristics via the Rodinia benchmark suite. We'll show how these techniques enable highly flexible AI libraries with state-of-the-art performance, and allow a major government user to run highly computational threat modeling on terabytes of data in real time."  https://www.linkedin.com/in/tim-besard-6b766031/  http://mikeinnes.github.io/  https://www.youtube.com/watch?v=vWaHDS--s-g  http://www.sysml.cc/doc/37.pdf
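The "high-level array" end of that stack can be sketched on the CPU (no GPU required for this illustration): broadcasting fuses elementwise operations into a single kernel-like pass, and the same generic code runs unchanged on a GPU array type such as a `CuArray`:

```julia
# A generic activation function: works for any numeric type,
# which is what lets the same code target CPU and GPU arrays.
relu(x) = max(zero(x), x)

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = relu.(xs)           # broadcast: one fused elementwise pass over xs
# On a GPU, `xs` would be a CuArray and this broadcast would compile to a kernel.
```

The low-level end of the stack (writing CUDA kernels directly in Julia) uses the same generic functions inside explicitly indexed kernels, which is the part Tim's compiler work enables.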
- Solve Project Euler problems in Julia together!
• What we'll do
Join us on UC Berkeley's campus on the 21st to code in Julia! We'll break into groups to solve some Project Euler problems. Newcomers to the language are welcome! We'll have pizza, so when you sign up, please indicate if you have any food allergies we should plan for (via the Google form provided). :) https://goo.gl/forms/DSEBWTzJbL3RnAxS2
• What to bring
Bring a laptop!
- Intro to Julia + Solving Project Euler problems in Julia
We're happy to have Jane Herriman of Julia Computing present to us! Jane will begin the event with a one-hour introductory tutorial so that new users can jump right in and code. Then she will guide us in hands-on solving of Project Euler (https://projecteuler.net/) problems in Julia. Bring your laptop! This is a great chance to get acquainted with the newest language to achieve peta-scale performance (https://www.nextplatform.com/2017/11/28/julia-language-delivers-petascale-hpc-performance/). Please join us! We (Julia Computing) would really appreciate your response to the following (four question) survey so we can learn who we're reaching and how to accommodate any dietary restrictions you may have: https://www.surveymonkey.com/r/RSYJNJG
Schedule:
10:30AM to 11:30AM -- tutorial for new users
11:30AM to 1:30PM -- solving Project Euler problems
12:00PM -- Pizza
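If you'd like a warm-up before the session, Project Euler's first problem (the sum of all multiples of 3 or 5 below 1000) is a one-liner in Julia:

```julia
# Project Euler problem 1: sum of all multiples of 3 or 5 below 1000.
answer = sum(n for n in 1:999 if n % 3 == 0 || n % 5 == 0)
# answer == 233168
```

Generator expressions like this are a nice first taste of how concise idiomatic Julia can be.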
- Partially Observable Markov Decision Processes in Julia
We're very happy to have Zach Sunberg (http://asl.stanford.edu/people/zachary-sunberg/) from the Stanford Intelligent Systems Lab (http://sisl.stanford.edu/) (SISL) speak to us on partially observable Markov decision processes in Julia. Before Zach's talk, Chris Peel (https://twitter.com/christianpeel?lang=en) will briefly review the LLLplus (https://github.com/christianpeel/LLLplus.jl) package for lattice reduction and ask for feedback. We will also briefly review the roadmap to Julia 1.0 before Zach's talk. Zach's abstract: Safe and flexible autonomy, especially in systems like self-driving cars, is one of the most immediately important goals in artificial intelligence. The partially observable Markov decision process (https://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process) (POMDP) is a tool for modeling decision making problems with uncertainty. In a POMDP, an agent must make a series of decisions based on stochastic observations to try to maximize a reward function. Though a POMDP is a very expressive tool for formalizing a real world problem, the approach is rarely used in practice because of extreme computational demands. Julia is an ideal tool for studying and solving POMDPs because it simultaneously provides the computational power to tackle large problems and the expressiveness to easily implement a wide range of problems and solution techniques. However, it also lacks some features (e.g. interfaces) that would make communication between programmers easier. This talk will focus on SISL's POMDPs.jl (https://github.com/JuliaPOMDP/POMDPs.jl) package and the challenges and successes we have had in building it and other related packages. Come 15 min early for pizza!
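To give a feel for the computation at the heart of a POMDP, here is a minimal sketch (our own toy example, not code from POMDPs.jl) of a belief update: after each observation, the agent revises its probability distribution over hidden states via Bayes' rule. The two states and the observation model below are hypothetical:

```julia
# Belief: probability distribution over two hidden states.
belief = [0.5, 0.5]

# Observation model: P(observation | state) for the observation just received.
likelihood = [0.9, 0.2]

# Bayes' rule: multiply elementwise, then renormalize.
posterior = belief .* likelihood
posterior ./= sum(posterior)
# The belief now favors state 1, which explains the observation better.
```

Solvers in POMDPs.jl plan over exactly these belief states, which is what makes the problem so much more expensive than a fully observable MDP.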
- Data Science Workbench and Julia
- Intel Labs on ParallelAccelerator.jl
Ehsan Totoni (https://github.com/ehsantn) will present the ParallelAccelerator (https://github.com/IntelLabs/ParallelAccelerator.jl) Julia package, a compiler that performs aggressive analysis and optimization on top of the Julia compiler. It can automatically eliminate overheads such as array bounds checking safely, and parallelize and vectorize many data-parallel operations. He will describe how it works, give examples, list current limitations, and discuss future directions of the package. Ehsan is a Research Scientist at Intel Labs working on the High Performance Scripting project.
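The data-parallel style ParallelAccelerator targets looks like ordinary array code. Here is a sketch in plain Julia (the package itself wraps such functions in its `@acc` macro; we omit it so the snippet runs without the package installed):

```julia
# Elementwise array arithmetic followed by a reduction: the map/reduce
# pattern ParallelAccelerator can parallelize and vectorize, dropping
# bounds checks where it proves them unnecessary.
function sum_sq_plus_one(xs)
    ys = xs .* xs .+ 1.0   # elementwise map over the whole array
    return sum(ys)         # reduction
end

result = sum_sq_plus_one([1.0, 2.0, 3.0])   # 2 + 5 + 10 = 17
```

With the package, prefixing the function definition with `@acc` asks the compiler to apply these optimizations automatically.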
- JuliaStats, What's new in Julia 0.4, and Julia Documentation
John Myles White (https://github.com/johnmyleswhite/) will introduce JuliaStats, then describe what is needed to move JuliaStats forward. JuliaStats (http://juliastats.github.io/) is a group of statistics and machine-learning packages that John has worked extensively on. Tony Kelman (https://github.com/tkelman) will describe the new features of Julia 0.4, and also features that are being worked on now and that may make it into the next version of Julia. Tony is a PhD student at UC Berkeley. Andy Hayden (https://github.com/hayd) will talk about generating documentation for your Julia project, including Docile and the documentation support in Julia 0.4.
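As a preview of the documentation topic, here is a minimal sketch of the native docstring syntax introduced in Julia 0.4 (Docile provided similar functionality for earlier versions): a string literal placed immediately before a definition attaches documentation to it, viewable with `?double` at the REPL.

```julia
"""
    double(x)

Return twice `x`.
"""
double(x) = 2x

v = double(21)   # the docstring does not affect behavior
```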