Please join us for a presentation by UW professor Dr. Jignesh Patel on some of his most recent research. The presentation description follows.
Big data platforms today largely employ data processing kernels that were developed for a now bygone hardware era. Modern hardware has made a fundamental shift in recent years, driven by two dominating factors: power consumption limits for hardware components, and the transformation of the traditional memory hierarchy because of large main memory configurations and flash-based storage. In this talk I argue that because of this shift, we are now building a “deficit” between the pace at which the hardware is evolving and the pace demanded by data processing kernels to keep up with the growth of big data. This deficit is unsustainable in the long run, as it requires building larger and larger data centers to keep up with the anticipated growth in data volumes. One way to “pay off” this deficit is to have hardware and software co-evolve to exploit the full potential of the hardware. Luckily, even on current hardware, there is potential for an order-of-magnitude (or more) improvement if one uses this perspective to redesign data processing kernels. I will provide some examples of recent work from our Quickstep project that demonstrate the merit of this line of thinking. I will then speculate on other data processing mechanisms that will likely be needed in the future to keep this hardware deficit under control.
I hope everyone can join us!