Black box forever? Regulation and the limits of AI explanations

Details

An “invisible hand” of algorithms influences us ever more. Some algorithms use our personal data, while others use the data of people similar to us. How can current and future regulation help us understand and control how we are influenced? Will such regulation let the most unscrupulous actors dominate the field? Is it always possible to explain why an algorithm made a decision in a way that makes sense to the person affected by it? Can such explanations be verified? Might they reveal trade secrets? And will we be able to predict whether actions we take will cause an algorithm to make a different decision about us in the future?

The lecture will be held by Gudbrand Eggen, Data Scientist and Lab Lead at StartupLab, as part of the DNN lectures series.