NLP Transformers for Information Extraction From Large Documents

AppliedAI

Online event

Details

The last few years have seen tremendous progress in language modeling, with better representations of words and sentences, the use of contextual embeddings, and the development of efficient seq2seq models based on the Transformer architecture. In addition to providing contextual embeddings, transformer-based models such as BERT and RoBERTa can also be adapted to fundamental language processing tasks such as document (or paragraph) classification and named entity recognition. There is considerable excitement about applying these techniques to large real-world documents in general, and to loan document families in particular. We show that, because of challenges unique to this setting, such as OCR errors and domain-specific vocabulary not included in pre-trained transformer models, different entities achieve better accuracy with one of two approaches: using the transformer as a source of feature embeddings, or fine-tuning it end to end. We will therefore discuss an ensemble of both techniques and the unique advantages of each.
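To make the embeddings-versus-fine-tuning distinction concrete, here is a minimal sketch of the two approaches. It assumes the Hugging Face transformers library; the model name, sample sentence, and label count are illustrative placeholders, not details from the talk.

```python
# A minimal sketch of two ways to use a pre-trained transformer for NER.
# Assumes the Hugging Face transformers library; the model name, sentence,
# and label count are illustrative placeholders, not details from the talk.
import torch
from transformers import AutoTokenizer, AutoModel, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
inputs = tokenizer(
    "The Borrower shall repay the Principal Amount no later than 2030.",
    return_tensors="pt",
)

# Approach 1: frozen feature embeddings.
# The transformer serves only as a feature extractor; its per-token hidden
# states would feed a separate downstream tagger (e.g., a linear layer or
# CRF) trained on the domain data while the encoder stays fixed.
encoder = AutoModel.from_pretrained("bert-base-cased")
encoder.eval()
with torch.no_grad():
    embeddings = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)

# Approach 2: end-to-end fine-tuning.
# A token-classification head is added and gradients flow through the whole
# encoder, adapting it to the domain vocabulary.
num_labels = 5  # hypothetical size of the entity tag set
ner_model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=num_labels
)
dummy_labels = torch.zeros_like(inputs["input_ids"])  # placeholder gold tags
loss = ner_model(**inputs, labels=dummy_labels).loss
loss.backward()  # updates every layer, not just the classification head
```

In an ensemble of the kind the abstract describes, each entity type could then be routed to whichever of the two approaches performs better on held-out data.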

Our presenters this month are:

Soumitri Kolavennu - Currently an Artificial Intelligence Leader at U.S. Bank, where he leads the AI team in Enterprise Research and Analytics. Prior to his current role, he spent 21 years at Honeywell as a senior research fellow, working with the executive leadership team to develop strategic plans (STRAP) for identifying emerging technologies, new markets, and their intersections.

and

Tina Nguyen - A 13-year veteran of U.S. Bank, where she is currently AVP, Enterprise AI & Analytics, focusing on creating the vision and innovation strategy for AI/ML products to drive step-change impact across the corporation.