The Definitive, One-Size-Fits-All AI Policy Guidelines

One size fits all? We’re being cheeky, of course. While we frequently receive requests for a “model” AI policy, there is no singular approach that covers all use cases and still provides meaningful guardrails. Still, we’ve found that asking three main questions is a good starting point:

What types of AI do you have?

“AI” is a blanket term that covers far more than headline-grabbing conversational chatbots. An AI policy that does not distinguish among the types of AI your organization may use risks being overly restrictive, or so broad that it provides limited guidance to your employees or end users. So the first step is to ask what types of AI you contemplate using. Some examples are shown below.

Monitoring AI


For organizing, accessing or analyzing data

  • Data analytics
  • Data scanning and processing
  • Data lake management
  • Cybersecurity
  • Camera and sensor monitoring
  • Fraud alerts

Predictive AI


For detecting and predicting trends

  • Energy usage
  • Market demand
  • Staffing needs and scheduling
  • Inventory and supply chain assessment
  • Maintenance
  • Transport logistics

Decision-Making AI


For assisting with or automating decisions

  • Hiring and employment
  • Consumer loans
  • Targeted advertising
  • Insurance coverage
  • Quality control

Text-Generating AI


For explaining or creating written content

  • Conversational chatbots
  • Virtual assistants
  • Research (e.g., market, legal, scientific)
  • Templates (e.g., reports, contracts)
  • Data entry automation
  • Document drafting

Code-Generating AI


For generating software code

  • Creating software routines
  • Code optimization
  • Bug fix assistance
  • Code quality auditing

Image-Generating AI


For creating logos, graphics, videos and other visual content

  • Advertising and marketing collateral
  • Technical illustrations
  • Product design
  • Presentations
  • Entertainment
  • Interactive content

Despite recognizing the risks involved, only 20% of companies have established any AI policies. Get ahead of the curve.

How are you using AI?

AI guidelines depend just as much on how AI is used as on which AI is used. Even when different departments employ the same technology, the types of data involved and the outcomes of those uses implicate different legal and business risks. For example, use by the marketing department will have a different risk profile than use by research and development. Consider the examples below of how different departments in an organization may use AI for different purposes.

R&D


  • Product development
  • Design assistance
  • Competitive intelligence
  • Synthetic data generation

IT


  • Code generation
  • Technology primers
  • Cybersecurity risk identification and response

Sales & Marketing


  • Analysis of customer base and product demand
  • Visual asset generation
  • Targeted advertising
  • Buying and selling consumer data
  • Video editing
  • RFP response generation

HR


  • Decision assistance for resume screening, hiring, promotion and discipline
  • Skill assessments
  • Employee monitoring

Accounting


  • Financial market assessments
  • Investment management
  • Consistency checks
  • Tax research

Logistics


  • Inventory management
  • Fleet management
  • Predictive maintenance

Legal


  • Legal research
  • Legal memoranda and agreement drafting
  • Brief writing
  • Contract analysis
  • Drafting compliance policies

Operations


  • Consumer loan decisions
  • Safety monitoring
  • Quality control
  • Task streamlining

Customer Relationships


  • Customer service
  • Voice assistants
  • Chatbots
  • AI avatars

Where do you fall on the AI industry continuum?

This last question frames your relationship to the other players involved in creating, deploying or using AI technology. For any given use case, you may fulfill one or more of the following roles: providing training data, building foundational models, building AI technology on top of foundational models or using that AI technology. An organization may even fill all four roles at once, for example by using its own data to customize an existing foundational model into a tool its employees then use to streamline business operations.

What are your legal and risk concerns?

Even within a given industry, for a given planned use, the legal and risk concerns vary depending on your AI industry role. For example, a technology deployer offering a legal research solution will have different concerns than the legal department relying on it. And looking at the same use case from the perspective of your customer or your vendor will inform how you negotiate contracts for AI technology. The breakdown below illustrates how common legal issues vary across this continuum.

Data Provider

Curates data or content to train AI

  • Legal Framework: What, if any, special laws apply based on the nature of the data?
  • Bias: How was the data collected, and what controls are in place to protect against bias?
  • Auditability: What technological and contractual protections or audit rights do we need to confirm downstream users comply with obligations?
  • Data Privacy and Data Rights: Have we obtained appropriate consents or provided required notices necessary to share the data? What mechanisms do we need to comply with privacy requests?
  • Intellectual Property: What rights do we have to share data? What representations about such rights do we provide? What rights do we want in derivative datasets?
  • Dataset Curation and Accuracy: What level of data hygiene and data normalization are we obligated to provide? What legal or contractual obligations do we have regarding data accuracy?
  • Termination: What rights continue following agreement termination? Is it possible to remove data from the model?
  • Use Restrictions: What contractual limitations do we put on use of datasets? How are we protected when prohibited use occurs?

Foundation Architect

Builds and trains foundational models; may provide APIs to access such models

  • Legal and Policy Framework: What mandatory and voluntary AI policies apply? Do any self-regulatory, legal or contractual obligations apply?
  • Bias: How are datasets assessed for bias prior to use in the model? What controls are in place to ensure outputs do not carry forward bias resulting from data collection practices or downstream user inputs?
  • Transparency: What records can we create (or do we need to) to track the generation or usage of our technology to comply with existing and anticipated AI policies? Is our AI model "explainable," and can we summarize that for downstream users?
  • Data Privacy and Data Rights: What personal data is in the dataset, and what is the risk of the model leaking that data? Do we need rights to data generated or accessed by downstream users? Do any data privacy obligations need to be imposed on downstream users?
  • Intellectual Property: What rights do we need to use the datasets to train our models? How do we mitigate the risk that models will output potentially infringing or defamatory content?
  • Dataset Curation: How do we contract for datasets? What protections and representations do we need from the Data Provider?
  • Use Restrictions: What limitations should we place on use of the models? How are we protected when prohibited use occurs?
  • Downstream Liability: What type of secondary liability may arise from downstream uses? How can contractual provisions mitigate these risks?

Technology Deployer

Leverages foundational models to build AI services

  • Legal Framework: What laws or regulations (e.g., data privacy laws, consumer rights laws, employment laws) will govern the technology?
  • Bias: What controls are in place to ensure outputs do not carry forward bias resulting from data collection practices or downstream user inputs?
  • Transparency: How do we verify the operability and safety of the technology? How transparent should we be about this information?
  • Disclosure Requirements: What types of disclaimers and disclosures regarding the use of AI are needed?
  • Data Privacy and Data Rights: Do the privacy terms for the foundational model align with our privacy policy? Do we need rights to or ownership of usage data? How does the foundational model use our data?
  • Intellectual Property: How do we protect the IP in our technology? What rights do our End Users want? What rights, if any, are we obligated to provide to the Foundation Architects?
  • Quality Assurance: What procedures do we implement to verify that our technology operates as intended?
  • Use Restrictions: What limitations should we place on how our technology is used? How are we protected when prohibited use occurs?
  • High-Risk Uses: In use cases where misuse or malfunction could cause serious harm, what liability could arise? What insurance coverage do we need?
  • Downstream Liability: What is our risk of product liability? What type of secondary liability may arise from downstream uses? How can contractual provisions mitigate these risks?
  • Upstream Protections: What contractual and technological measures are in place to mitigate our liability arising from use or deployment of the foundational technology?

End User

Uses AI technology

  • Legal Framework: What laws or regulations will govern the planned use cases?
  • Bias: What controls are in place to ensure outputs do not carry forward bias resulting from data collection practices or our own inputs?
  • Auditability: How do we verify the operability and safety of the technology? What records can we create (or do we need) to track the usage of AI or AI-generated content?
  • Disclosure Requirements: What types of disclaimers and disclosures regarding the use of AI are needed?
  • Data Privacy and Data Rights: Do the privacy terms for the foundational model align with our privacy policy? Do we need rights to or ownership of usage data? How does the foundational model use our data?
  • Intellectual Property: What rights do we have to the content or output we generate? What level of protectability is afforded under traditional IP laws, and what steps are required? What is the IP infringement risk?
  • Quality Assurance: What procedures do we implement to verify the veracity of AI outputs? How do we clear our rights to use specific AI outputs?
  • High-Risk Uses: In use cases where misuse or malfunction could cause serious harm, what liability could arise? What insurance coverage do we need?
  • Employee Policies: What guardrails do we provide our employees that strike the right balance among allowing use of AI technology, mitigating risk and supporting ease of compliance?
  • Upstream Protections: What contractual and technological measures mitigate our liability arising from our planned use? What vendor due diligence has been completed and documented?