The pace of technological change in professional services may seem ever-increasing, but according to a recent panel, a little critical thinking goes a long way toward ensuring ethical AI use
Key takeaways:
- Embrace value, risk, and execution — for good and bad — Professional services firms must weigh the value of AI applications against potential risks, embracing both successes and failures as learning opportunities to improve responsible adoption.
- Ethical oversight is everyone’s responsibility — Ensuring responsible AI use in professional services requires active participation from all members of an organization, not just legal or IT teams.
- Human creativity and feedback remain essential — While AI can generate ideas and accelerate processes, human judgment, creativity, and continuous feedback provide the proper pathways for ethical decision-making and successful integration.
AUSTIN, Texas — With the professional services world now squarely in the AI era, it’s clear that the speed of business is faster than ever. Clients expect results in hours or even minutes rather than days, and generating documents can happen at the click of a button. Ask a research question, and a machine can intuit what you’re looking for with striking accuracy.
Alongside these business changes, however, it’s clear that the ethics of technology usage within professional services is shifting just as quickly. “Every time you come and do a talk with a group of people, within four weeks if not sooner, it’s changed,” says Betsy Greytok, Associate General Counsel in Responsible Technology at IBM. “So, it really does require you to keep on your toes.”
Ensuring that AI is used responsibly matters even more within professional services than in other professions, given the ethical and regulatory constraints placed on legal, tax, audit & accounting, financial services and risk, and more. During a recent session, A Unified Field: Ethical Considerations amid AI Development and Deployment, at the Thomson Reuters Institute’s 2025 Emerging Technology and Generative AI Forum, panelists described an ethical landscape that should be tackled as a challenge, rather than shied away from as an unsolvable risk.
Or, as Paige L. Fults, Head of School at the AI-centric Alpha School & 2-Hour Learning program, put it: “Not being afraid of replacement, but leaning into repurpose.”
Embracing success — and failure
John Dubois, the Americas AI Strategy Leader at Big 4 consultancy Ernst & Young, says he regularly gets questions from customers about AI and how they should use it, given that there are new AI applications arising seemingly every day. “The way we describe it is a balance,” Dubois explains. “Let’s start with value. If we know there’s value in something, then we can figure out the risk behind it, then we can figure out how we can execute.”
Just as importantly, however, this focus on value, risk, and execution can also aid professional services firms when an AI plan fails. For example, Dubois cites an MIT report from August 2025 that showed 95% of GenAI pilots fail, often because of flawed integration. Embracing the value, risk, and execution strategy from the beginning not only improves the chances of success but also means that, even in the event of failure, “we actually have a better shot at mitigating, when it does fall down.”
This sort of planning is not limited to just one group, Dubois says, noting that ethical oversight is seen as a key responsibility of everyone in the organization. He explains that E&Y has an internal implementation of OpenAI that has 150,000 distinct users each month. Because of an internal process called SCORE that removes customer data at the source, E&Y’s instance of OpenAI is largely clear of customer data — but it’s still not perfect.
E&Y has set a culture so that if someone sees proprietary data when using GenAI to develop a proposal or create a PowerPoint, they not only delete the data before use, but work to scrub it from the system entirely. “It is all of our job to ensure that whatever you’re putting into that system or extracting out of that system, you’re cleansing,” Dubois says. “It’s not the job of the general counsel, or the risk team, or the IT team, it’s all of our job.”
IBM’s Greytok agrees, noting that she’s part of an internal review board that examines major AI-related projects for ethical issues. The board reviews a use case at the beginning of the development process to determine how risky it is, and the review system then returns a response along with considerations and steps to take. If there is an issue, the board is empowered to stop development, even on a major project.
She draws an analogy to writing a paper in high school, in which there is a marked difference between simply turning in the paper, proofreading your own work, and asking a friend for peer-review feedback. “That’s what you want, is that disagreement, because that’s critical thinking.”
She adds: “The researchers sometimes get so excited about what they’ve discovered that they forget to look at the other side of what can happen. You should want that. You shouldn’t be punished for saying, ‘Is this the right thing or not?’”
The importance of feedback
Fults says that at the Alpha School, AI is not only baked into the curriculum, it functions as the teacher. Students spend just two hours a day on academics, led by AI tools and supplemented by offline learning across a variety of subjects with in-person instructors who fill the gaps that AI cannot.
It’s a revolutionary concept, but not a static one. Fults notes that “the two-hour learning model has already changed so much since I’ve been part of the school,” and the instructors maintain a Slack channel dedicated to potential improvements that receives hundreds of messages a day.
It’s through this marrying of human intuition and the possibilities of the technology that Fults says she believes the school has found success and used AI ethically within education. “Even though we have this tool, the human levers, the motivational levers that are happening day to day, actually make it work,” she says, insisting that she “can’t just hand [the technology] to any school” without the corresponding processes in place.
Dubois and Greytok also call feedback a crucial part of the process for overcoming AI barriers. Dubois tells the story of a large retailer that bought satellite images to determine footfall within a store. Shoppers, however, felt that was a privacy risk, and the idea was almost scrapped. Then the legal and IT teams worked together to come up with an idea: Could they track clothing, but not faces, to get the same information about where shoppers were going within the store?
“It’s a creative workaround to get us to the same thing,” Dubois explains. “When you have a constraint, what’s a clever way to work around this so we’re not taking a brand risk or a compliance risk?”
Indeed, when it comes to keeping up with AI ethics in a rapidly advancing space, professionals can rely on the same methods they have been employing for years to solve ethical quandaries: human creativity. AI can provide information and context more rapidly than ever before, but ultimately, professionals themselves will be the ones relied upon to make sure AI is used ethically and responsibly.
“AI is an idea generator,” Greytok says. “The solution comes from the human.”
You can find out more about how emerging technologies are impacting professional services here.