2018 AI predictions
- Introduction: Beyond the hype
- 1. AI brings a new set of rules to knowledge work
- 2. Newsrooms embrace AI
- 3. Lawyers assess the risks of not using AI
- 4. Deep learning goes mainstream
- 5. Smart cars demand even smarter humans
- 6. Accountants audit forward
- 7. Wealth managers look to AI to compete and grow
- 8. Law firm innovation meets AI
- 9. AI gets a reality check
- 10. Machine bias and algorithmic diversity come into view
- 11. Conclusion
The excerpt below, from a sonnet titled “Gate,” was written by a machine.
And be very careful crossing the streets.
How fair an entrance breaks the way to love!
Left, doors leading into the apartments.
Just then a light flashed from the cliff above.
The fields near the house were invisible.
Objects of alarm were near and around.
The window had only stuck a little.
From the big apple tree down near the pond.
Not bad, right?
While public interest and media narratives around artificial intelligence (AI) have ebbed and flowed over the past couple of decades, the conversation has been heating back up in recent years, driven by advancing consumer technology and the need to process and understand ever-increasing amounts of data. That buzz will likely continue into 2018 and beyond as new products and services built on AI seep into many aspects of our lives – be it in the home, on the commute, in the workplace, or elsewhere. The term “AI,” at times oversaturated, has become shorthand for several specific technologies – including cognitive computing, machine learning, natural language processing, and data analytics, among others.
To move beyond the hype and look to the immediate future, we asked 10 Thomson Reuters technologists and innovators to make their AI predictions for the year ahead.
AI brings a new set of rules to knowledge work
When things go digital, they start following a new set of rules.
The rules of the physical world are either not applicable or are severely diminished. Things move from scarcity to abundance, where consumption does not lead to depletion. On the contrary, the more an object is consumed, the more valuable it becomes. The cost of production and distribution is no longer critical, and the concept of inventory is no longer applicable.
When things go digital, they also move from linear to exponential – a world in which new technologies and new players can enter and dominate an industry in just a few years.
Consider that each year more people take online courses offered by Harvard than have attended Harvard in its 380-year history. Each year, eBay®’s online dispute resolution system handles three times more disputes than the number of lawsuits filed in the United States. Each day, five billion videos are watched on YouTube® – and for context, the first YouTube video was uploaded in 2005. I was talking to a gentleman at Facebook® a few weeks ago who said, “I joined Facebook three years ago and 70 percent of the company started after me.” Talk about hyper-growth businesses!
This is the environment that we operate in: Not only must we adapt, but we must help our customers adapt as well.
In the information industry and at Thomson Reuters, AI and machine learning (ML) are already driving innovation and transformation. They are embedded in how we sift through large volumes of data and content, and how we enhance, organize, connect, and deliver content and information. They are the engines underlying many of our products and services.
In the long term, our objective is to build personal digital assistants for knowledge workers. An assistant is an application that:
- Knows what you (want it to) know
- Knows what you like (if you want)
- Knows how you do things (if you wish it to)
- Interacts naturally with you
- Is both responsive and proactive (without being intrusive)
- Is always on (but can be turned off)
- Is the collection of all of your professional experiences
- Is available with a few words and a click
- Learns from you as well as others (via their digital assistants)
Its purpose is not to replace you, but to augment you, to scale you, and to help you focus on more interesting tasks.
It will probably take a decade or two to build some of these digital assistants – but the near term is also full of interesting opportunities to transform knowledge work through simplification, automation, and machine assistance.
Research and discovery
Research, discovery, and investigation represent a significant portion of what knowledge workers do. These are complex and time-consuming tasks, making them natural candidates for simplification, automation, and machine assistance.
Our world is connected and information rich. The cycle of information creation is continuous and instant, and staying informed can be a daunting task. One of our primary objectives is to pivot away from customers finding information to the information finding the customer.
Risk and compliance
This theme focuses on helping our customers comply with relevant laws and regulations, discover risks that could disrupt their businesses, and respond appropriately when things happen.
Knowledge work requires making sense of data in order to make time-sensitive and business-critical decisions. Whether it is a single document or collection of documents, an event, a work product, or an “abnormal” pattern, making sense is hard and time-consuming. AI can help.
This is just a selection of key focus areas based on analysis and discussions with our customers and business partners. The predictions in this report dive deeper into each of these opportunities. What is clear is that AI and machine learning are already here and their potential to assist knowledge workers is being realized.
Newsrooms embrace AI
What does the future of news hold? Could anyone have predicted, five or 10 years ago, the rise of social platforms as a primary conduit for news, or even the phenomenon of “fake news”?
One thing is clear: Technology will increasingly be part of the news business, and that will open up all sorts of new opportunities – both to improve the quality of journalism, as well as the way news is produced and delivered to audiences. Already, we have rapidly adopted tools to help us find news faster.
We use Reuters News Tracer, a technology developed by the Thomson Reuters Research & Development team that algorithmically detects newsworthy events breaking on Twitter® and rates the likelihood that they’re true.
It’s built using machine-learning algorithms that have been trained by Reuters journalists to home in on events that are of interest to them, giving us a head start on chasing down and verifying news. In an age when news is witnessed by ordinary people everywhere – and then posted to social media within seconds – Reuters News Tracer is an invaluable tool to improve our journalism.
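Conceptually, a tracer of this kind is a trained classifier that turns signals extracted from a tweet into a confidence score. The sketch below is a deliberately simplified, hypothetical illustration – the features, weights, and trigger phrases are invented for this example and bear no relation to the actual Reuters News Tracer model:

```python
import math

# Hypothetical feature weights, as if learned from journalist-labeled tweets.
WEIGHTS = {
    "eyewitness_language": 1.4,   # phrases like "just saw", "happening now"
    "has_location": 0.9,
    "has_link": 0.6,
    "all_caps_ratio_high": -1.1,  # shouting correlates with rumor in this toy
}
BIAS = -1.0

def extract_features(tweet: str) -> dict:
    """Turn raw tweet text into the binary features the toy model expects."""
    text = tweet.lower()
    words = tweet.split()
    caps = sum(1 for w in words if w.isupper() and len(w) > 1)
    return {
        "eyewitness_language": any(p in text for p in ("just saw", "happening now")),
        "has_location": any(c in text for c in ("in ", "near ")),
        "has_link": "http" in text,
        "all_caps_ratio_high": bool(words) and caps / len(words) > 0.5,
    }

def newsworthiness(tweet: str) -> float:
    """Logistic score in [0, 1]: higher means more likely newsworthy/true."""
    feats = extract_features(tweet)
    z = BIAS + sum(w for name, w in WEIGHTS.items() if feats[name])
    return 1 / (1 + math.exp(-z))

print(round(newsworthiness("Just saw a large fire near the harbor, crews arriving"), 2))  # → 0.79
print(round(newsworthiness("WOW THIS IS HUGE BELIEVE ME"), 2))  # → 0.11
```

The real system learns its features and weights from journalists' labels rather than having them hand-set, but the shape is the same: evidence in, calibrated confidence out.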
The cybernetic newsroom
Language analysis and generation systems – technologies that can understand documents, analyze data, and create text – have so far been largely focused on creating short stories at lightning speed or turning out vast numbers of relatively simple, routine stories. But machines can do much more, not least analyzing huge amounts of data and finding patterns and outliers. This is the cybernetic newsroom.
This capability means we can think about delivering news that is not only more insightful, but personalized.
Imagine market reports that were written on demand and not just when the market closed. These reports could offer more than a simple recap of market performance: a comparison of how a reader’s portfolio performed against the broader market, as well as the key reasons why.
For example: “It’s 3:35 pm. The market is currently up 1%, but your portfolio is down 2%. This is attributed in part to the purchase of XX stock last week, which has fallen sharply since …”
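Reports like the hypothetical one above are a natural fit for template-based text generation, where live data is slotted into journalist-written sentence frames. A minimal sketch, with invented portfolio figures and a made-up attribution field:

```python
def intraday_report(time: str, market_pct: float, portfolio_pct: float,
                    attribution: str, position_pct: float) -> str:
    """Fill a journalist-written sentence frame with live figures."""
    market_dir = "up" if market_pct >= 0 else "down"
    port_dir = "up" if portfolio_pct >= 0 else "down"
    return (
        f"It's {time}. The market is currently {market_dir} {abs(market_pct):g}%, "
        f"but your portfolio is {port_dir} {abs(portfolio_pct):g}%. "
        f"This is attributed in part to {attribution}, "
        f"which has fallen {abs(position_pct):g}% since purchase."
    )

print(intraday_report("3:35 pm", 1.0, -2.0, "XX stock bought last week", 8.5))
```

A production system would of course pull real positions and market data, and choose among many frames; the point is that once the data is structured, the prose writes itself on demand.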
How close are we to being able to do this, and much more? The technologies are very nearly here already, and newsrooms are starting to embrace the possibilities.
What if we could understand what kind of news was most likely to move markets, based on an analysis of past trends? What if machines could dive into analyst or CEO statements and flag patterns that indicated deeper underlying issues? What if we could harness the power of the Internet of Things and alert journalists to anomalies in the world of sensors?
Ultimately, it's about using the power of AI and machine learning to help us better inform the world.
Lawyers assess the risks of not using AI
AI is not a single, all-encompassing program that will one day be “switched on.”
It is a set of related technologies that are already in use in legal organizations today, in applications such as legal research, contract analysis and due diligence, business development, litigation strategy, and e-discovery.
To date, 28 U.S. states have adopted a duty of technology competence. Specifically, they have adopted some form of the language in Comment 8 of ABA Model Rule 1.1 pertaining to technology:
To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education, and comply with all continuing legal education requirements to which the lawyer is subject.
At some point, it will become ethically irresponsible for a lawyer to be unaware of “relevant technology” and its everyday application for their practice. The fact that many legal professionals don’t understand the extent to which these technologies are already in use across the industry is itself a cause for ethical concern.
Equally, legal professionals should be concerned about the ethical implications of the application of AI technologies to their practice, including:
- Duties of supervision and independent judgment when using tools that help suggest answers to legal questions
- Confidentiality of hosted data used in AI applications and the risk of data breaches
- Risks related to confidentiality, privilege, and commingling of multiple clients’ data when using AI to analyze law firm billing data
As these concerns are addressed, there will be a gradual recognition of the ethical duty to use and apply appropriate technology, and this will be reflected in other trends coming to the legal profession.
Multidisciplinary legal education
That gradual recognition will hasten the development of multidisciplinary legal education, in which new “relevant technologies” are woven into the curriculum. Law schools such as Stanford, Suffolk University, Michigan State, and Chicago-Kent are already headed down that path.
Increasing reliance on allied professions
Data scientists, project managers, legal practice engineers, and other roles will become increasingly important. Learning how to recognize and reward those allied professions will go a long way to helping lawyers fulfill their duty of technology competence.
Deep learning goes mainstream
The ultimate goal of artificial intelligence is not to take our jobs, but to assist us in doing them better. This is “augmented” intelligence, rather than “artificial” intelligence. Machines will do this by helping us better understand the data that is critical to our everyday decision making, particularly in the financial markets.
For all our advances in technology, knowledge workers still spend the majority of their time reading and analyzing documents to assess what is important. Not only that, but the number of information sources that need to be assimilated has multiplied tenfold.
Machines taught to understand unstructured data and filter out all but the most relevant items lessen the burden of time-intensive processing work and allow experts to spend more time on high-value work like talking to clients.
Imagine the current workflow of a wealth or asset manager assessing a potential investment. They will separately analyze reams of news, company filings, broker research reports, and expert opinion on websites, blogs, and social media. Once filtered, the various insights are assimilated into the reporting or decision-making process. What if an AI assistant could help with that entire process – the identification, filtering and assimilation? The impact across the industry would be as significant as the introduction of email.
Today’s AI tools are not yet able to do this – their algorithms are highly tuned to the specific set of content that they have been trained on. They cannot take the insights learned in the domain of “news” and then apply them to “company filings.” The only way to get quality, accurate results from machine learning is with relentless training on each category of content, plus each subject area, to create deep domain knowledge. When that work is done, you then have the challenge of overlaying the many languages that systems also need to be trained on.
The greatest challenge in reaching the goal of the intelligent research assistant, therefore, is solving for “domain adaptability.” In 2018, we will see advances in deep learning algorithms that will help us accelerate this domain adaptability.
At Thomson Reuters we continue to implement a wide spectrum of machine learning techniques. Thomson Reuters Knowledge Graph and Thomson Reuters Intelligent Tagging expose not only the data and tools that we use, but also our overall methodology and approach in creating successful machine learning projects. In 2018, we will continue this by making our TRIT training management workflow available to customers.
Smart cars demand even smarter humans
“These cars are dependent and, as such, require a larger conversation about what the rules and expectations of dependency should look like. Once a car belongs to a network, you have to worry about whether the network is safe. Once an algorithm is in command, you have to worry about how the algorithm thinks.”
In his trademark no-nonsense style, Malcolm Gladwell has captured the enormity of the autonomous vehicle challenge, expanding our thinking beyond the excitement of being able to watch movies while our cars drive us to work and introducing some of the social, technological, regulatory, and business issues that accompany this innovation.
Take, for example, the simple issue of liability, which has been a cornerstone of the driving experience since 1898, when Travelers Insurance Co. sold its first-ever auto policy. Fast-forward 119 years, and property casualty insurers, regulators, government agencies, and auto manufacturers are wrestling with liability in a case where a Tesla Model X operating in Autopilot mode crashed into the back of a police motorcycle.
The event raises plenty of questions:
- If a self-driving car is speeding, should the owner of the car or the software developer get the ticket?
- Can a passenger in a self-driving car be charged with a DUI?
- Will “drivers” still need licenses?
These are just the beginning. Once artificial intelligence was introduced to hunks of steel that hurtle down the highway at 70+ miles per hour, everything got a lot more complicated for the humans who must sort out the details of how the technology will be implemented and governed.
Consider, for example, the intellectual supply chain challenges this innovation presents for automakers. Not only must automakers source new technologies and form new supplier partnerships with tech companies that have not traditionally been a part of the auto supply chain, but they must also hire the technologists, data scientists, and programmers who can leverage this technology in their vehicles.
That’s why Ford announced earlier this year that it was investing $1 billion over five years into an artificial intelligence start-up called Argo AI, which will develop the brains of the manufacturer’s self-driving cars. General Motors invested $500 million to form a partnership with ride-sharing service Lyft® to create a national network of self-driving cars. Meanwhile, tech companies such as Google®, Apple®, and Intel® are also engaged in high-profile autonomous driving projects.
As autonomous vehicle technology grows increasingly complex, the one thing that will separate winners and losers will be a strong intellectual supply chain. Businesses of every type will need to weigh everything from legalities and regulations to choosing the right strategic partnerships to catalyze new growth. And, they’ll need to do it while the wheels are rolling and the final destination is still many miles away.
Accountants audit forward
What if you could audit things that haven’t even happened yet?
It may sound crazy at first blush, but the idea that accounting is relegated to retrospective analysis is an artifact of the profession’s history that is no longer relevant.
Tax and accounting is the beating heart of any business. It is the hub for any piece of data that will affect the bottom line and the lynchpin to understanding how different tax reforms, regulatory regime changes, and trade agreements will impact profitability, sales volumes, and free cash flow. It’s a powerful collection of information, but it’s never truly been exploited to its fullest potential.
We’ve always been looking in the rearview mirror for tax reporting, audit, and compliance. Now that we have AI embedded in all of our systems, that’s going to shift. We’re going to be able to learn, predict, and produce forward-looking “what-if” analyses as we digest information in real time.
The idea that a large corporation looks backwards, pays tax, and is audited is going away. Today’s AI-powered predictive analytics give the tax department the power to make relevant projections, conduct analyses that forecast the various tax impacts of different business decisions, and detect anomalies and red flags that can be signals for fraud risk, all using real-time company financials.
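One simple form of the anomaly detection described here is statistical outlier flagging: score each figure against the rest of the series and flag large deviations for review. The sketch below is an illustrative toy, not a production fraud rule; the data and the 3-sigma threshold are invented:

```python
from statistics import mean, stdev

def flag_anomalies(amounts: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices whose value deviates from the other values by more
    than `threshold` standard deviations (a leave-one-out z-score test)."""
    flagged = []
    for i, x in enumerate(amounts):
        rest = amounts[:i] + amounts[i + 1:]
        mu, sigma = mean(rest), stdev(rest)
        if sigma > 0 and abs(x - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Invented daily expense figures; the sixth entry is the planted red flag.
daily_expenses = [102.0, 98.5, 101.2, 99.8, 100.4, 740.0, 97.9, 103.1]
print(flag_anomalies(daily_expenses))  # → [5]
```

Real tax-department analytics layer far richer models (seasonality, peer benchmarks, learned fraud patterns) on top, but the workflow is the same: continuous scoring of live financials instead of a once-a-year look back.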
It’s not just large corporations that are awakening to this possibility, either. Forward-looking tax analysis is being driven by tax authorities who increasingly demand this level of visibility into corporate finance.
To date, tax authorities in the UK and Brazil have been among the first to initiate major movements to digital tax reporting, catalyzing corporate tax technology development. The UK announced that it will soon require quarterly corporate tax reporting, a pace requiring real-time focus from corporate tax professionals. Brazil has taken things even further with its Nota Fiscal Eletrônica, which requires companies to submit electronic invoices to the government for clearance before goods are shipped.
In the short term, the evolution from retrospective to real-time and forward-looking analysis will create some challenges. Longer term, it opens up amazing opportunities.
No one has more detailed data on the performance of each component of a corporation than the tax and accounting departments. Imagine what’s possible when they also have the power to tap into that data to project the future and analyze risk! Add the power of a virtual assistant who can tap into this information at will and the prospects become limitless.
Within a span of five years, 50% of our interactions will involve AI. That’s going to unlock an entirely new workflow for the accounting profession and an entirely new set of capabilities that will drive the back office to the forefront of business strategy.
Wealth managers look to AI to compete and grow
The wealth management industry is undergoing a transformation driven by digitalization and a significant transfer of generational wealth. These pressures are forcing advisors to compete more effectively, improve their customers’ experiences, and provide new value-added services.
To compete and survive, advisors have to be more efficient. Only then will they have the capacity to provide higher-value services. Efficiency is about cutting costs, saving time, and scaling services.
Automation of repetitive tasks through robotic process automation (RPA), together with client self-service capabilities, is part of the answer, but automation, of course, is a double-edged sword. It can take over processes that represented much of the value the professional historically provided. Think portfolio construction and rebalancing – tasks where robo-advisors have become prevalent.
Those same capabilities will be available to competitors, making differentiation based on service excellence alone more difficult.
To provide differentiated, high-value service, wealth managers will need to do their current jobs better and grow their capabilities. They need to learn how to adapt and make better use of their clients’ data. AI technologies powering the next generation of tools will help them augment and expand their services.
These AI-enabled tools are the first fulfillments of the promise of intelligent machines making sense of data and actively aiding human cognition and decision making. Truly intelligent assistants are much further off. They will ultimately support advanced dialogue and perform complex analysis and reasoning using domain knowledge and contextual information. Such assistants will be more proactive and even able to operate independently. Significant research problems must be solved to fulfill this vision.
RPA and AI-enabled technologies will enable wealth managers to evolve and thrive, even as the business environment undergoes disruption.
Law firm innovation meets AI
Conventional wisdom holds that law firms are reluctant to innovate and slow to adopt new technology. A growing number of firms – and not just the largest – would object loudly to this characterization. Law firms have been actively modeling their behavior on corporations that have faced similar competitive challenges, with outcomes that are anything but conventional law firm thinking.
While some firms have followed the traditional corporate strategy of merging to maintain profitability, others have actively invested in successive waves of improvement, including introducing artificial intelligence (AI) technology into legal workflows, such as augmenting attorneys' research capabilities and automating repetitive legal work.
Legal AI capabilities
The capabilities of AI include acquiring and recording knowledge from natural language interactions, understanding a legal context and answering related questions, and generating hypotheses and evaluating alternatives. Applications of these capabilities in a legal context range from simple contract clause analysis to risk profiling, legal argument analysis, and research recommendations.
The real impact on the law
The popular press overestimates the near-term impact of AI on the law. Although industry studies claim that roughly 70 percent of paralegals and a quarter of attorneys will be “impacted” by AI within 10 years, the future is far more nuanced.
At best, artificial intelligence provides only a passable alternative to human intellect, and an out-of-the-box generalized thinking machine remains a distant promise. The robo-lawyer is more fiction than a certain future.
The challenges facing traditional firms
No rational law firm is going to outsource thinking to artificial intelligence, but many firms around the world are actively partnering with or even investing in AI companies to apply these technologies across all practice areas.
Each firm has had to master the introduction of these technologies by facing change management challenges throughout the organization. A sampling of focus areas:
- Understanding the capabilities of, and ethical constraints on, machine intelligence; knowing the transition point where software shifts from decision support to decision making
- Acquiring the critical new skills needed to apply AI technologies and establishing a formal approach to firm content curation
- Evolving knowledge management (KM) from a focus on the reuse of prior work product into knowledge mining to support the firm
- Capitalizing on AI investments in client pitches and vision papers
How law firms can master AI
To successfully apply artificial intelligence, law firms will have to understand the hard technical work and robust content requirements needed to establish AI mastery. Additionally, firms will have to strike a balance between using AI for task automation for cost containment and applying it to augment their legal practice for value creation.
AI gets a reality check
AI technology will transform the work of professionals in the years to come. Expectations are high. Setting the right expectations, however, is important. We do not want to oversell the capabilities, nor do we want to fall into the trap of fear-mongering. What is needed is realistic expectation setting.
AI users become realistic about its limitations
If I tell Siri, "Play music I like," my favorite list of music will be played in random order. If I say "Play music my wife likes," Siri will simply return the result of a Web search, even though my wife and I share music via an Apple music family plan. My expectation is that the natural language understanding module captures the small difference in the questions I asked, but it fails.
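The missing step in this exchange is reference resolution: mapping “my wife” onto a concrete profile before interpreting the music request. A toy sketch of that step, with an invented family-plan data structure and invented phrases:

```python
# Hypothetical family-plan profiles; names and favorites are invented.
FAMILY_PLAN = {
    "me": {"name": "Alex", "favorites": ["jazz", "blues"]},
    "wife": {"name": "Sam", "favorites": ["indie rock", "folk"]},
}

def resolve_listener(utterance: str) -> dict:
    """Map possessive references in the request onto family-plan profiles."""
    text = utterance.lower()
    for relation, profile in FAMILY_PLAN.items():
        if f"my {relation}" in text:
            return profile
    return FAMILY_PLAN["me"]  # default: the speaker

def handle(utterance: str) -> str:
    listener = resolve_listener(utterance)
    return f"Playing {listener['favorites'][0]} for {listener['name']}"

print(handle("Play music my wife likes"))  # → Playing indie rock for Sam
print(handle("Play music I like"))         # → Playing jazz for Alex
```

A real assistant would need entity linking across the shared account rather than hard-coded phrases, but the contrast is the point: the difference between the two requests is one resolved reference, not a web search.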
While Siri still fails to answer seemingly simple questions, public perception is being shaped by articles suggesting AI could get out of control (e.g., “Facebook AI creates its own language in creepy preview of our potential future” or “Facebook shuts down robots after they invent their own language”). Even technology leaders such as Elon Musk have warned that AI poses an even larger risk than North Korea.
On the one hand, we are about to be enslaved by a super-intelligent robot race, and on the other hand, AI cannot even model cognitive behavior shown by three- to five-year-old children.
A realistic view of AI’s current capabilities is presented by Gary Marcus, a psychology and neural science professor at New York University. He recently authored an article in the New York Times describing how current AI systems have achieved so-called "micro-discoveries" but have not made much progress in terms of understanding human cognition.
Successful AI-powered products are built on realistic expectations
Marcus argues that we need a top-down as well as a bottom-up approach to address the problems AI systems still tend to fail at. He describes top-down approaches as cognitive models or representations we form about our environment. I agree that this is a good approach for improving theories on cognition and making progress in the long term.
In addition, we want to set realistic expectations for this new technology. AI has clearly shown fascinating results in various niche applications and at winning games such as Go and poker, but it still fails at simple everyday tasks, such as avoiding a fountain in a shopping mall.
AI systems address clearly defined problems
Let's be more transparent and concrete about the problems we want to solve. At Thomson Reuters, we work on improving the search experience for professionals instead of solving natural language understanding in general. Or we work on extracting from tweets the specific information that is relevant to our customers, instead of reading and understanding every tweet people write.
Solving those well-defined problems is not a major breakthrough for the theory of human cognition, but it is of high value for a professional's day-to-day work.
At the same time, we are also following the advances in the AI research community that aim higher, because they will eventually find their way into our products as well.
In order to advance AI for business applications, it is important to set the expectations right and formulate the problem in a way that is solvable with current techniques and adds value to the user of an AI system.
For an AI system to identify problems better, expect it to ask you more questions in the near future. Siri may start asking who your spouse is and posing further clarifying questions in order to exceed your expectations.
Machine bias and algorithmic diversity come into view
The artificial intelligence revolution has unleashed a seemingly endless stream of ethics questions. How, for example, does an autonomous vehicle “choose” between hitting an oncoming vehicle or swerving into a pedestrian on a sidewalk? Should social networks allow posts from automated political bots? What if the software we’re using is racist?
In fact, we’ve been so focused on specific examples and use cases where misdirected AI can create ethical and moral dilemmas for our society, that we’ve largely ignored a major foundational shortcoming in the technology’s development. The algorithms that power the AI we’re using to detect fraud, optimize logistics, and streamline research are being programmed by human technologists. And, right now, human technologists (at least in the United States) are predominantly white men.
A 2014 study conducted by the U.S. Equal Employment Opportunity Commission found that the U.S. high-tech workforce is made up primarily of white males. Specifically, the population is 68.5% white, compared with 63.5% across the entire private sector. African Americans account for 7.4%, Hispanics represent 8%, and Asian Americans hold roughly 14% of all tech jobs in the United States. Men account for 64.3% of the tech workforce, versus 51.8% in the private sector as a whole.
Why is this such a big deal? The beauty of AI is that it is technology that understands nuance and learns as more inputs are collected. That’s what makes the technology “smart.” But it’s also what makes it dangerous because that ability to discern is governed by algorithms that are being built by humans.
For example, researchers at Stanford University published a study outlining an algorithm they had built that can determine an individual’s sexual orientation through facial recognition. In a word, the researchers built an AI-powered gaydar.
In another recent example, the AI-powered blink detection software inside a digital camera committed an unwitting lapse in racial sensitivity by suggesting that an Asian woman in a photo was blinking when she clearly was not.
These examples illustrate a much larger point that cannot be ignored. What happens when these kinds of biases are programmed into AI software designed to treat cancer or predict diabetes risk? How will the facial recognition technology currently being used by the Department of Justice and the National Institute of Corrections to conduct criminal risk assessments treat a photo of a young white man and one of a young black man when all other information is equal?
The issue is called machine bias, and unless we take a hard look at the preconceived notions and entrenched perspectives that are being loaded into our AI algorithms and weigh them against a multitude of different perspectives, we run the risk of creating a tech-enhanced version of the same old dogma.
Over the next year, we’ll see an awakening to the fact that we need AI programming (and programmers) that cuts across age, race, socioeconomics, and gender to truly capture all of the inputs that go into making smarter, more rational, and unbiased decisions.
Let’s not forget, AI is being created in our image. It’s time we started programming some diversity into our algorithms.
Conclusion
Above all else, what’s clear is that artificial intelligence and its individual associated technologies will continue to transform how we interact with information and machines.
At Thomson Reuters, we are proud to be at the forefront of developing technology that will revolutionize how people access and use information by interacting with intelligent machines.
At the heart of our efforts is the Center for AI and Cognitive Computing, a team of award-winning scientists, engineers, and designers with specialized skills in cognitive technologies, dedicated to advancing the state of the art in machine perception, reasoning, knowledge management, and human-computer interfaces.
For the industries we serve, we find ourselves in a unique position to deliver cognitive solutions:
- Our people have deep domain expertise, especially when combined with the knowledge of our customers and partners.
- Our content is deep, authoritative, and often unique.
- We are leaders in the development of intelligent solutions.
This is our sweet spot: Providing the intelligence, technology, and human expertise our customers need to find trusted answers.
AI in action