AI @ Thomson Reuters
Artificial intelligence research
Thomson Reuters Labs has a rich history of applied research, exploring cutting-edge technology and bringing it to bear on concrete business problems. Our research is driven by our customers’ need for trustworthy information, and at the same time inspired by recent breakthroughs in machine learning and artificial intelligence.
Natural Language Processing (NLP)
Natural Language Processing (NLP) focuses on designing algorithms to parse, analyze, mine, and ultimately understand and generate human language. NLP, with a focus on text data, is one of our core enabling technologies, given our customers’ work in information-heavy segments and their business needs.
Human-Centric AI (HCAI)
We take a multidisciplinary approach to the challenges of AI adoption and of building trust in our solutions. We explore concepts such as interpretability, explainability, transparency, fairness, privacy, and security, all of which are central to our AI Principles and our company purpose.
AI DevOps (ModelOps)
We are exploring methods and technologies related to the emerging domain of ModelOps. This field combines AI development and IT operations with the objective of shortening the "AI lifecycle", providing continuous delivery, and increasing the quality of what we deliver to our customers.
Information Retrieval & QA
Our customers need the right information, in the right context, and often under tight time constraints. We take a comprehensive approach to the information-findability problem, using a combination of search technologies, recommendation systems, and navigation-based discovery.
Multi-label legal document classification
Multi-label document classification has a broad range of applicability to practical problems such as news article topic tagging, sentiment analysis, and medical code classification. A variety of approaches (e.g., tree-based methods, neural networks, and deep learning systems based specifically on pre-trained language models) have been developed…
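One classical baseline mentioned above can be sketched in a few lines. This is an illustrative toy example only, not Thomson Reuters' production system: a one-vs-rest linear model over TF-IDF features, applied to a tiny hypothetical corpus of legal snippets where each document carries several topic labels.

```python
# Toy multi-label classification sketch (illustrative only).
# One-vs-rest logistic regression over TF-IDF features is a common
# baseline for multi-label document tagging.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical mini-corpus: each document may carry several labels.
docs = [
    "court ruling on patent infringement damages",
    "merger agreement and antitrust review",
    "patent licensing dispute heads to trial",
    "regulators approve the proposed merger",
]
labels = [
    {"patent", "litigation"},
    {"merger", "antitrust"},
    {"patent", "litigation"},
    {"merger", "antitrust"},
]

# Binarize the label sets into a document-by-label indicator matrix.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# One independent binary classifier per label.
clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression()),
)
clf.fit(docs, Y)

# Predict label indicators for an unseen document.
pred = clf.predict(["patent dispute goes to court"])
print(mlb.inverse_transform(pred))
```

In practice, the pre-trained language model approaches referenced above replace the TF-IDF features with contextual embeddings, but the one-vs-rest framing of the multi-label problem stays the same.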
Assessing the usefulness and impact of added explainability features in legal document summarization
This study tested two different approaches for adding an explainability feature to the implementation of a legal text summarization solution based on a Deep Learning (DL) model. Both approaches aimed to show the reviewers where the summary originated from by highlighting portions of the source text document.
Active curriculum learning
This paper investigates and reveals the relationship between two closely related machine learning disciplines, namely Active Learning (AL) and Curriculum Learning (CL), through the lens of several novel curricula.
Using transformers to improve answer retrieval for legal questions
Transformer architectures such as BERT and XLNet are frequently used in the field of natural language processing. Transformers have achieved state-of-the-art performance in tasks such as text classification, passage summarization, machine translation, and question answering. Efficient hosting of transformer models, however, is a difficult task …
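The retrieval pattern behind transformer-based answer retrieval can be sketched as follows. This is a hedged, self-contained illustration, not the system described above: in a real bi-encoder setup the `embed()` function would be a transformer encoder (e.g., BERT-based); here a simple bag-of-words stand-in keeps the example runnable without a model download, while the ranking-by-cosine-similarity step is the same.

```python
# Sketch of dense answer retrieval: embed the question and each candidate
# passage, then rank passages by cosine similarity. The embed() function is
# a bag-of-words stand-in for a transformer encoder.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": token counts. A real system would call a
    # transformer encoder and return a dense vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical candidate passages.
passages = [
    "a limited liability company shields owners from personal liability",
    "the statute of limitations for breach of contract is six years",
    "trademarks protect brand names and logos",
]
question = "how long is the statute of limitations for contract claims"

# Rank passages by similarity to the question, best first.
q = embed(question)
ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
print(ranked[0])  # the statute-of-limitations passage ranks first
```

Swapping the stand-in `embed()` for a transformer encoder turns this into the standard bi-encoder retrieval setup; the hosting difficulty noted above comes from serving that encoder at scale.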
Build a career without boundaries. Do work that matters, with the flexibility to pursue your passion wherever it leads. Bring your ambition to make a difference. We’ll bring a world of opportunities.