Past Meetup

AI Town Hall - AI Generated Fake News Reaches Human Quality - Now What?

84 people went

Algolia

55 Rue d'Amsterdam · Paris

How to find us

Ask the reception desk for the location of the group.

Details

Join us for a special evening event. We'll have a peer-to-peer "Cafe Philo" style discussion about OpenAI's controversial GPT-2 model.

OpenAI claims to have created an AI capable of generating fake news. The model, called GPT-2, produces text that is difficult to distinguish from human-written fake news. OpenAI believes the model is so powerful that, contrary to its previous practice of open release, it should not be made available, citing safety concerns.

In their own words: "Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
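For hands-on context before the discussion, here is a minimal sketch of how one might experiment with the smaller model OpenAI did release. It assumes the Hugging Face "transformers" Python package and its "gpt2" checkpoint (the small, publicly released weights); neither is part of OpenAI's announcement, so treat this as an illustrative experiment rather than OpenAI's own tooling.

# Minimal sketch (assumes: pip install transformers torch; "gpt2" = the small released checkpoint).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # tokenizer for the small model
model = GPT2LMHeadModel.from_pretrained("gpt2")    # publicly released small weights
model.eval()

prompt = "Breaking news:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation with top-k sampling and print it.
output = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Running this a few times gives a feel for how coherent (or not) the small model's "news" continuations are, which is useful background for discussion questions 2 and 3.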

Cafe Philo Etiquette:

1. If the demand for the microphone is high, limit your microphone moments to less than 20 seconds. Avoid the temptation to follow up with a rebuttal.

2. If you arrive late, be aware that your questions may already have been discussed.

3. Avoid quoting experts to support your thesis; just say what YOU think. The problem with quoting experts is that there may be multiple interpretations of context, intent, conviction, accuracy, applicability, etc.

Discussion:

1. What is OpenAI's official position? What is being made available? What is being withheld? How difficult is it for others to clone their work (time, money, labor)? Who are the stakeholders in this discussion?

2. Do we agree with their claims of near-human quality? Which metrics should be used to evaluate quality: grammar, word choice, coherence, novelty? What progress can we expect from future versions? What would superhuman fake news generation look like?

3. Can this AI be used for good, or only for evil? Can't humans do as much evil? What about future versions? Is this a localized threat, or does it scale to an existential threat?

4. Is it right to use safety as an excuse for not releasing important research?

5. What does this mean for other AIs? Must we judge each case individually, or is there a defined line that must not be crossed?

6. Is it better, as OpenAI argues, to withhold research if there's a risk of danger? Where's the threshold? Can withholding do harm? Will it really make a difference? Are the risks of faked text different from those of faked images or voice conversion (the Obama deepfakes)?

7. Can an AI be so powerful that it is criminal to create or distribute it? What about using, quoting, or copyrighting its output? Who bears the liability for damage? Does AI need to be legislated?

8. Should OpenAI change its name to SometimesOpenAI? [Update: they just created a for-profit named OpenAI LP].

Primary Paper:

Language Models are Unsupervised Multitask Learners (OpenAI): https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf

Resources:

1. OpenAI Charter: https://blog.openai.com/openai-charter

2. Towards Data Science: https://towardsdatascience.com/openais-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8

3. Jeremy Howard of fast.ai: https://www.fast.ai/2019/02/15/openai-gp2

4. GPT-2 As Step Toward General Intelligence: https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence

5. The Verge: https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-openai-gpt2

6. The Gradient: https://thegradient.pub/openai-please-open-source-your-language-model

7. Modulate.ai: https://modulate.ai/blog/002

8. Delip Rao: http://deliprao.com/archives/314

9. Approximately Correct: http://approximatelycorrect.com/2019/02/17/openai-trains-language-model-mass-hysteria-ensues

10. Robert Munro: https://towardsdatascience.com/should-i-open-source-my-model-1c109188b164

11. SyncedReview: https://syncedreview.com/2019/03/11/openai-establishes-for-profit-company

Videos:

1. Siraj: https://www.youtube.com/watch?v=0n95f-eqZdw

2. Yannic Kilcher: https://www.youtube.com/watch?v=u1_qMdb0kYU

3. TWiML (with OpenAI and other experts): https://www.youtube.com/watch?v=LWDbAoPyQAk