
AI data security is critical for professional-grade AI

It should be clear by now that artificial intelligence (AI) is here to stay for professionals in numerous industries. Need evidence?

Of the 2,200-plus professionals surveyed in the recent Future of Professionals Report, 77% believe that AI tools will have a high or transformational impact on their work over the next five years, especially when managing and analyzing ever-increasing volumes of data.

However, the professionals surveyed in the report also express serious concerns about AI’s ability to protect sensitive company and client data from being improperly shared or even stolen. Survey respondents say that demonstrable data security — or the lack thereof — will influence how quickly their organizations adopt AI-powered tools.

Data security and the ethical use of AI have been critical considerations for us as we develop CoCounsel, our professional-grade AI assistant.

Ensuring robust AI data security is paramount for our AI solutions, particularly to protect user information from cyber threats and to comply with evolving data privacy regulations. Our approach is multifaceted, encompassing governance, risk management, and proactive measures.

Implementing comprehensive data protection measures

A cornerstone of AI data security is conducting what we call "data impact assessments" for projects that create or use AI and data. The scope is extensive, typically covering data governance, model governance, privacy considerations, input from legal counsel, intellectual property issues, and information security risk management. These assessments may build on existing privacy impact assessments as a foundational element.

Within a data impact assessment, a "use case" describes a specific business project or initiative. The assessment process typically seeks answers to several critical questions:

  • What are the specific types of data involved in the use case?
  • What kinds of algorithms will be employed?
  • Which jurisdictions' regulations apply to the use case?
  • What are the ultimate intended purposes of the resulting product or service?

This detailed inquiry is crucial for identifying potential risks to AI data security, especially where privacy and governance issues intersect.
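
To make the shape of this inquiry concrete, here is a minimal sketch of how a use-case record for such an assessment might be captured. The class and field names are our own illustration, not a published schema:

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseAssessment:
    """Hypothetical record for one use case in a data impact assessment."""
    name: str                      # the business project or initiative
    data_types: list[str]          # specific types of data involved
    contains_personal_data: bool   # drives the privacy controls applied later
    algorithms: list[str]          # kinds of algorithms to be employed
    jurisdictions: list[str]       # whose regulations apply, e.g. ["EU", "US-CA"]
    intended_purposes: list[str]   # ultimate purposes of the product or service
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
```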

Once risks are identified, clear mitigation plans and techniques are developed. These include anonymizing data where appropriate, establishing robust access controls and security measures, and ensuring that data-sharing agreements are in place. From a privacy standpoint, it is vital to understand the sensitivity of the data, particularly when personal data is involved. Based on this understanding, the necessary controls are then applied to safeguard the information and its outputs.
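
To illustrate one of these techniques, here is a minimal pseudonymization sketch; the salt handling and field names are assumptions, not a description of our production pipeline. Direct identifiers are replaced with keyed one-way hashes so records remain linkable for analysis without exposing the underlying personal data:

```python
import hashlib
import hmac

# Illustrative only: in practice the salt would live in a secrets manager.
SECRET_SALT = b"store-and-rotate-this-in-a-secrets-manager"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    digest = hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"client_email": "jane@example.com", "matter_notes": "..."}
record["client_email"] = pseudonymize(record["client_email"])
print(record)  # the email is now a stable token, not the raw identifier
```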

Auditing and updating security measures continuously

The landscape of AI data security is dynamic, requiring continuous auditing and updates to security measures. The emergence of technologies like generative AI (GenAI), for instance, has prompted the development of specific guidance to manage its unique implications. Procedural documents related to data security are regularly updated throughout the year and include predetermined mitigation responses for various risk scenarios. Standard statements detailing AI security practices are often mapped to specific sets of controls relevant to different risk profiles, and these statements themselves undergo frequent review and assessment.

To foster trust and transparency, we have created an internal resource known as our Responsible AI Hub, which serves as a centralized repository for all relevant policies, guidelines, and best practices. Comprehensive audits and updates to AI security measures may be conducted annually; many others, particularly those related to active risk mitigation, are performed far more frequently, often weekly or even daily, depending on the specific task and team.

Safeguarding against unauthorized access and data misuse

For our AI systems, our data access security and management standards directly inform our data governance policy. Simply put, we ensure that the owner of a data set discloses only the minimum information necessary for the requester's use, following the principle of least privilege. We've built many of our AI data security controls into our data platform environment, and we have a specific tool that creates role-based security access.
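
As a rough sketch of that least-privilege idea (the roles, fields, and policy here are hypothetical; our actual tooling is not shown), a role-based filter grants each requester only the fields their role needs:

```python
# Hypothetical role-to-field policy; unknown roles receive nothing.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "analyst": {"matter_id", "doc_text"},
    "billing": {"matter_id", "hours", "rate"},
    "admin":   {"matter_id", "doc_text", "hours", "rate", "client_name"},
}

def filter_record(role: str, record: dict) -> dict:
    """Return only the fields the given role is permitted to read."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return {key: value for key, value in record.items() if key in allowed}

row = {"matter_id": 17, "doc_text": "...", "hours": 4.5, "rate": 300.0,
       "client_name": "Acme Corp"}
print(filter_record("analyst", row))  # -> {'matter_id': 17, 'doc_text': '...'}
```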

Key achievements in advancing AI risk management

Data risks in the ethics space are challenging to identify clearly and difficult to define all the way through end-to-end risk management, which is why we built our Responsible AI Hub from the ground up. Our experts spent considerable time identifying and discussing the breadth and depth of AI risks, including sensitive data vulnerabilities, adversarial cybersecurity attacks, and data breaches. We've spent even more time bringing those risks to life: exploring how we can act on them and what that action looks like from a risk-mitigation perspective, all in service of strong AI data security.

The work we've put in over the past three years has enabled us to get a handle on AI risks more quickly than most companies.

You can learn more about how AI is changing the future of professionals and how they work in the Future of Professionals Report.
