We're back: Large Language Models
Details
We'll discuss this paper:
* BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Please bring your own printout or digital copy.
Related topics
Events in Wien
High Performance Computing
High Scalability Computing
Concurrent Programming
Software Development
Academics
