
6 Best Practices to Develop a Corporate Use Policy for Generative AI

Breaking new ground in the world of technology, generative AI is a cutting-edge field that’s rapidly evolving. With seemingly endless possibilities, the potential for its future is thrilling. However, it’s critical to approach this innovative technology with caution, mindfulness, and integrity to build models that are equitable, responsible, and ethical.

By Nicholas D. Evans

Consider the scope of emerging AI capabilities before mapping out what’s best for the organization.

While there’s an open letter calling for all AI labs to immediately pause training of AI systems more powerful than GPT-4 for six months, the reality is the genie is already out of the bottle. Here are ways to get a better grasp of what these systems are capable of, and utilize them to construct an effective corporate use policy for your organization.


Generative AI is the headline-grabbing form of AI that uses un- and semi-supervised algorithms to create new content from existing materials, such as text, audio, video, images, and code. Use cases for this branch of AI are exploding, and it’s being used by organizations to better serve customers, take more advantage of existing enterprise data, and improve operational efficiencies, among many other uses.

But just like other emerging technologies, it doesn’t come without significant risks and challenges. According to a recent Salesforce survey of senior IT leaders, 79% of respondents believe the technology has the potential to be a security risk, 73% are concerned it could be biased, and 59% believe its outputs are inaccurate. In addition, legal concerns need to be considered, especially whether externally used generative AI-created content is factual and accurate, infringes copyright, or is derived from a competitor’s material.

As an example, and a reality check, ChatGPT itself tells us that, “my responses are generated based on patterns and associations learned from a large dataset of text, and I do not have the ability to verify the accuracy or credibility of every source referenced in the dataset.”

The legal risks alone are extensive; according to the non-profit Tech Policy Press, they include risks involving contracts, cybersecurity, data privacy, deceptive trade practices, discrimination, disinformation, ethics, IP, and validation.

In fact, it’s likely your organization has a large number of employees currently experimenting with generative AI, and as this activity moves from experimentation to real-life deployment, it’s important to be proactive before unintended consequences happen.

“When AI-generated code works, it’s sublime,” says Cassie Kozyrkov, chief decision scientist at Google. “But it doesn’t always work, so don’t forget to test ChatGPT’s output before pasting it somewhere that matters.”
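Kozyrkov’s advice is easy to put into practice: treat AI-generated code as untrusted until it passes tests you wrote yourself. The sketch below is a hypothetical example (the function and its test cases are illustrative, not from any real AI output) showing how a few quick assertions can validate a generated helper before it goes anywhere that matters.

```python
# Hypothetical example of a helper function an AI assistant might produce.
# Before pasting generated code somewhere that matters, exercise it with
# sanity checks covering the normal case and the edge cases.

def dedupe_preserve_order(items):
    """Remove duplicates from a list while keeping first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

# Quick checks that take seconds to write and catch common failure modes:
assert dedupe_preserve_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_preserve_order([]) == []
assert dedupe_preserve_order(["a", "a"]) == ["a"]
```

The same habit scales up: for anything beyond a throwaway script, the generated code should go through the same review and test pipeline as human-written code.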

A corporate use policy and associated training can educate employees on the risks and pitfalls of the technology, and provide rules and recommendations for getting the most out of it, and therefore the most business value, without putting the organization at risk.

With this in mind, here are six best practices to develop a corporate use policy for generative AI.

Determine your policy scope – The first step to craft your corporate use policy is to consider the scope. For example, will this cover all forms of AI or just generative AI? Focusing on generative AI may be a useful approach since it addresses large language models (LLMs), including ChatGPT, without having to boil the ocean across the AI universe. How you establish AI governance for the broader topic is another matter and there are hundreds of resources available online.

Involve all relevant stakeholders across your organization – This may include HR, legal, sales, marketing, business development, operations, and IT. Each group may see different use cases and different ramifications of how the content may be used or misused. Involving IT and innovation groups can help show that the policy isn’t just a clamp-down from a risk management perspective, but a balanced set of recommendations that seeks to maximize productive use and business benefit while managing business risk.

Consider how generative AI is used now and may be used in the future – Working with all stakeholders, itemize all your internal and external use cases that are being applied today, and those envisioned for the future. Each of these can help inform policy development and ensure you’re covering the waterfront. For example, if you already see proposal teams, including contractors, experimenting with content drafting, or product teams experimenting with creative marketing copy, then you know there could be subsequent IP risk due to outputs potentially infringing on others’ IP rights.

Cover the full content life cycle – When developing the corporate use policy, it’s important to think holistically and cover the information that goes into the system, how the generative AI system is used, and how the information that comes out of the system is subsequently utilized. Focus on both internal and external use cases and everything in between. Requiring all AI-generated content to be labelled as such, even for internal use, ensures transparency and avoids confusion with human-generated content; it may also help prevent that content from being accidentally repurposed for external use, or acted upon as factual and accurate without verification.
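A labeling requirement like this can be enforced in tooling as well as in policy. The sketch below shows one way a team might encode it; every name here (the `LabeledContent` fields, the approval rule) is an illustrative assumption, not a standard or an existing API.

```python
# Sketch of AI-provenance labeling for content, under assumed policy rules.
# All field names and the approval rule are illustrative, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class LabeledContent:
    """A piece of content wrapped with AI-provenance metadata."""
    text: str
    ai_generated: bool
    model: Optional[str] = None        # which model produced it, if any
    verified_by: Optional[str] = None  # human reviewer who checked it, if any
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def approved_for_external_use(item: LabeledContent) -> bool:
    # Assumed policy rule: AI-generated content needs human verification
    # before it leaves the organization; human-authored content does not.
    return (not item.ai_generated) or (item.verified_by is not None)


draft = LabeledContent(text="Q3 market summary ...", ai_generated=True,
                       model="some-llm")
assert not approved_for_external_use(draft)   # unverified AI draft is blocked
draft.verified_by = "jane.doe"
assert approved_for_external_use(draft)       # verified draft may go out
```

Even a lightweight scheme like this makes the policy auditable: every externally published item carries a record of whether AI was involved and who signed off.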

Share broadly across the organization – Since policies often get quickly forgotten or not even read, it’s important to accompany the policy with suitable training and education. This may include developing training videos and hosting live sessions. For example, a live Q&A with representatives from your IT, innovation, legal, marketing, and proposal teams, or other suitable groups, can help educate employees on the opportunities and challenges ahead. Be sure to give plenty of examples to make the risks concrete for the audience, such as citing major legal cases as they arise.

Make it a living document – As with all policy documents, you’ll want to make this a living document and update it at a suitable cadence as your emerging use cases, external market conditions, and developments dictate. Having all your stakeholders “sign” the policy or incorporate it into an existing policy manual signed by your CEO will show it has their approval and is important to the organization. Your policy should be just one of many parts of your broader governance approach, whether that’s for generative AI, or even AI or technology governance in general.

This is not intended to be legal advice, and your legal and HR departments should play a lead role in approving and disseminating the policy. But hopefully it provides some pointers for consideration. Much like the corporate social media policies of a decade or more ago, spending time on this now will help mitigate the surprises and evolving risks in the years ahead.

