In the wake of all the rigmarole surrounding the Snowden leaks and the details of the NSA's intelligence-gathering apparatus, it's suddenly very clear to the average person just how much data is out there, and how difficult it must be to recognize, organize, and filter it for a usable purpose. Even if we try to minimize our digital footprint, each of us nonetheless generates an incredible amount of data that represents us in the digital realm.
To talk us through the systems used to parse such vast quantities of data into a usable format, we are happy to welcome Stuart Geiger to this month's BkkSci. Stuart's current research is on the intersection of data science and artificial intelligence (AI) that is often branded as "Big Data." These systems collect massive, diverse, and complex data sets, and then use this data to teach computers how to identify patterns and make decisions. Stuart will talk about his work both in building these automated agents to support the production of knowledge, and in studying how these systems are changing how scientists, governments, businesses, and ordinary people like you and me come to know the world.
Stuart is a computer scientist and philosopher of science who researches how information technology supports the production of knowledge. He is currently a Ph.D. student at the School of Information at the University of California, Berkeley, and has previously worked for the Federation of American Scientists and the Wikimedia Foundation. His previous projects have focused on the emergence of computer simulations in biochemistry, data-sharing systems in ecology and climate science, and collaboration and conflict resolution in Wikipedia.