
AI @ Thomson Reuters

Events: See the Labs Work in Action! 

Our team regularly participates in industry conferences, seminars, and workshops. We also host events with AI ecosystem leaders and customer audiences. Check out our upcoming activity below.  


December 7-10, 2021

Mining and Learning in the Legal Domain (MLLD2021)

Date: December 7-10, 2021
Location: Auckland, New Zealand
Event Details: View Here

Abstract

In conjunction with the 21st IEEE International Conference on Data Mining, December 7-10, 2021, Auckland, New Zealand

The increasing accessibility of large legal corpora and databases creates opportunities to develop data-driven techniques and more advanced tools that can support the many tasks of researchers and practitioners in the legal domain. While recent advances in data mining and machine learning have found many applications in domains such as biomedicine, healthcare, and finance, there is still a noticeable gap in how widely state-of-the-art techniques have been adopted in the legal domain. Closing this gap entails building a multi-disciplinary community that can draw on the competencies of both law and computer science experts. The goal of this workshop is to bring researchers and practitioners of both disciplines together and provide an opportunity to share the latest research findings and innovative approaches to employing data analytics and machine learning in the legal domain.

Previous events


June 17, 2021

AI Ethics: It’s Time 

Date: June 17, 2021
Location: Virtual
Event Details: View Here

Abstract

As Artificial Intelligence (AI) and autonomous decision-making systems take hold and rapidly develop, how can Canada ensure this powerful technology upholds our most important values: fairness, inclusion, democracy, individual privacy, economic security, public safety, and sustainability? Brought to you by Thomson Reuters; Communitech, the champions of Tech for Good®; and CityAge, AI Ethics: It’s Time brings together some of the top minds in AI, business, and society to drive conversation around how we can ensure this transformative and rapidly evolving technology benefits everyone. We have a duty to leverage these technologies to solve some of the world’s most pressing challenges, but we must do so safely.


May 31-June 4, 2021

Symposium on AI and Law  

Date: May 31-June 4, 2021
Location: Virtual
Event Details: View Here

Abstract

The Symposium on Artificial Intelligence and Law (SAIL) is a five-day virtual event that brings together experts from industry and academia to discuss the future of AI and law. It provides a venue for academic, industrial, and governmental AI-and-law researchers and legal professionals to present and discuss research results, use cases, innovative ideas, challenges, and opportunities that arise from applications of AI in the legal domain. The symposium is also meant to foster collaboration between the legal community and the artificial intelligence, data mining, information retrieval, natural language processing, and machine learning communities.


May 11, 2021

CHI 2021: Towards Explainable AI

Assessing the Usefulness and Impact of Added Explainability Features in Legal Document Summarization

Date: May 11, 2021
Location: Virtual
Event Details: View Here

Abstract

This study tested two approaches for adding an explainability feature to a legal text summarization solution based on a Deep Learning (DL) model. Both approaches aimed to show reviewers where the summary originated from by highlighting portions of the source document. Participants reviewed summaries generated by the DL model with two different types of text highlights and with no highlights at all. The study found that participants completed the task significantly faster with highlights based on attention scores from the DL model, but not with highlights based on a source attribution method, a model-agnostic formula that compares the source text and the summary to identify overlapping language. Participants also reported increased trust in the DL model and expressed a preference for the attention highlights over the other type, because the attention highlights supported more use cases; for example, participants were able to use them to enrich the machine-generated summary. The findings provide insights into the benefits and the challenges of selecting suitable mechanisms to provide explainability for DL models in the summarization task.
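
For a concrete sense of what a model-agnostic source attribution highlight can look like, the sketch below scores each source sentence by its word-bigram overlap with the machine-generated summary and marks high-overlap sentences. This is an illustrative assumption, not the study's actual formula: the function names, the bigram overlap score, and the 0.3 threshold are placeholders.

import re
from typing import List, Tuple


def ngrams(text: str, n: int = 2) -> set:
    """Lowercase word n-grams of a text."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def attribution_scores(source_sentences: List[str], summary: str, n: int = 2) -> List[Tuple[str, float]]:
    """Score each source sentence by the fraction of its n-grams that also appear in the summary."""
    summary_grams = ngrams(summary, n)
    scored = []
    for sentence in source_sentences:
        grams = ngrams(sentence, n)
        overlap = len(grams & summary_grams) / len(grams) if grams else 0.0
        scored.append((sentence, overlap))
    return scored


def highlight(source_sentences: List[str], summary: str, threshold: float = 0.3) -> str:
    """Wrap high-overlap sentences in [[ ]] to mimic a text highlight."""
    out = []
    for sentence, score in attribution_scores(source_sentences, summary):
        out.append(f"[[{sentence}]]" if score >= threshold else sentence)
    return " ".join(out)


if __name__ == "__main__":
    source = [
        "The court granted the motion to dismiss for lack of jurisdiction.",
        "The hearing was rescheduled twice due to procedural delays.",
        "Costs were awarded to the defendant.",
    ]
    summary = "The motion to dismiss was granted for lack of jurisdiction and costs were awarded to the defendant."
    print(highlight(source, summary))

The attention-based highlights studied in the paper, by contrast, would come from the summarization model's own attention weights rather than from a surface-overlap score like this one, which is why that approach is model-specific rather than model-agnostic.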