Artificial Intelligence Application Usage Policy

Policy Number: IT-15

Effective: 10/24/2023

Last Revised: 10/24/2023

Responsible Executive: Executive VP & CFOO

Contact Information: 765-677-2605

I. Scope

This policy applies to all employees, contractors, student workers, agents of the university, and individuals who have access to Indiana Wesleyan University ("IWU") data and who utilize Artificial Intelligence ("AI") applications and user interfaces for analytical, predictive, administrative task assistance, or decision-making purposes. This policy does not apply to IWU students.

II. Policy Statement

IWU is dedicated to a security-centric artificial intelligence policy that prioritizes rigorous reviews of AI-driven solutions prior to procurement. This policy requires a comprehensive evaluation of the security measures and vulnerabilities inherent in potential AI acquisitions, focusing on data protection for IWU-Protected and IWU-Sensitive data, access control, and threat mitigation. IWU is committed to making informed, risk-aware decisions to ensure the resilience and trustworthiness of AI technologies integrated into our systems and operations.

III. Reason for the Policy

Considering the rapid advancements in AI technology and its integration into various aspects of the day-to-day life of IWU employees, it is imperative to establish a comprehensive AI policy for employee use. This policy is essential to ensure responsible, ethical, and secure deployment of AI applications that align with our organizational values and objectives.

The following key points highlight the significance of implementing an AI policy:

A. Data Privacy and Security: An AI policy sets guidelines for handling the categories of IWU data, ensuring its protection from unauthorized access or breaches. This approach is crucial for maintaining the trust of our stakeholders and complying with data protection regulations.

B. Transparency and Accountability: Implementing an AI policy establishes transparency in AI-based development, deployment, and decision-making processes. This approach fosters accountability among those responsible for AI applications and ensures clear understanding of how AI systems arrive at decisions.

C. Risk Mitigation: An AI policy helps identify potential risks associated with AI applications and guides strategies for the mitigation of said risks. By addressing risks early, we can minimize unintended consequences and negative impacts.

D. Compliance: The higher education industry is subject to regulations regarding data usage and privacy. An AI policy ensures our AI initiatives comply with these regulations, avoiding legal and financial repercussions.

E. Awareness & Training: An AI policy ensures that employees are educated about AI-related best practices, data ethics, and potential challenges. This approach empowers individuals to make informed decisions while working with AI technologies. Training will be a shared responsibility, with data security training spearheaded by the Information Security Office.

F. Stakeholder Trust: Demonstrating a commitment to responsible AI practices through a comprehensive policy builds trust among customers, partners, and the public. This trust is essential for maintaining positive relationships and a reputable image.

G. Strategy: An AI policy guides the organization's strategic approach to AI adoption. It helps us define clear goals, allocate resources effectively, and measure the success of our AI initiatives over time.

IV. Procedures

A. Approval and Review

  1. Prior to engaging with third-party AI vendors, the Information Security Office must conduct a thorough assessment of the vendor's data privacy and security practices. No contract may be signed, and no application installed on a device, until the security review is complete. Please refer to the SaaS, Software, Cloud Hosting and Hardware Procurement Policy for the appropriate contacts.
  2. Contracts with AI vendors should include provisions ensuring compliance with organizational data protection policies, confidentiality agreements, and data breach notification protocols.
  3. Periodic reviews of AI application vendors will be conducted to assess their security measures. Vendors must abide by security best practices and align with IWU privacy policies and security frameworks.
  4. Prior to the approval of this policy, there may have been instances of AI usage or vendor contracts already in place. Any pre-policy contract or usage should be reviewed with the Information Security Office to ensure compliance, take a data inventory where applicable, and document the use case.

B. Data Classification and Storage

  1. Avoid entering any IWU-Protected or IWU-Sensitive data into any AI user interface (UI) for analysis, querying, or training purposes. Please review the IWU Data Classification Policy for details about these categories of data.

C. Data Anonymization

  1. Utilize anonymized or de-identified data for AI analysis to minimize the risk of exposing personal or sensitive information. This applies, for example, when using generative AI (UI) applications such as ChatGPT or Google Bard; a minimal redaction sketch follows.
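
As an illustration only, the following Python sketch shows one way to redact common identifiers from text before it is pasted into a generative AI tool. The patterns and placeholder tokens are hypothetical examples, not an IWU standard; the IWU Data Classification Policy, not this sketch, defines what counts as protected or sensitive data.

    import re

    # Illustrative patterns only; not an exhaustive or official list.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),      # email addresses
        (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # US-style phone numbers
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # SSN-like numbers
    ]

    def redact(text):
        """Replace matched identifiers with placeholder tokens."""
        for pattern, token in REDACTIONS:
            text = pattern.sub(token, text)
        return text

    # Example: identifiers are replaced before the text leaves the device.
    prompt = "Summarize: Jane Doe (jane.doe@example.edu, 765-555-0123) asked about..."
    print(redact(prompt))

Pattern-based redaction misses many identifiers, so it supplements, rather than replaces, employee judgment about what data may be entered.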

D. Data Security

  1. Ensure that AI applications comply with IWU data security standards and encryption protocols.
  2. Protect data in transit and at rest, preventing unauthorized access or breaches (an illustrative at-rest encryption sketch follows this list).
  3. When using generative AI applications on work-issued devices, employees should be advised of the recommended settings and permissions for the large language model (LLM) or generative AI application so that data on the device is protected against unwanted access by the application. This includes a full understanding of whether the application can access local system data and share it with the vendor.
  4. Employees are restricted from installing AI browser extensions or downloading AI applications on IWU-owned devices unless the application has been reviewed in advance by both the IT Support Center and the Information Security Office.
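
As a minimal sketch of the at-rest protection described in item 2 above, the following Python example uses the widely available `cryptography` package. It illustrates the principle only; it is not IWU's approved encryption standard, and in practice keys would live in a managed secret store rather than being generated inline.

    from cryptography.fernet import Fernet  # pip install cryptography

    # In practice the key comes from a managed secret store; generating it
    # inline keeps this sketch self-contained.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"exported report data awaiting AI analysis"
    token = cipher.encrypt(record)          # ciphertext safe to store at rest
    assert cipher.decrypt(token) == record  # recover the data when needed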

E. Incident Reporting

  1. Any suspected or actual breaches, unauthorized access, or unusual activities involving AI applications must be reported to the Information Security Office immediately.
  2. IWU must have a well-defined incident response plan to address AI-related breaches or security incidents promptly.

F. Employee Training

  1. Employees should be informed of the implications and consequences of using generative AI tools in the workplace, and should be provided with training and resources on responsible use and risk.
  2. Employees should review and understand the generative AI system’s terms of use and other relevant materials, including privacy policies, to understand how data is handled, processed, and protected.
  3. Software developers and data scientists accessing generative AI models through APIs or building applications that use these models should be trained on ethics, data inaccuracy, security concerns, intellectual property impacts, trade secrets, and data minimization.
  4. Coding outputs by generative AI should be checked and validated for security vulnerabilities (see the sketch after this list).
  5. Given the speed at which generative AI technologies are developing, leadership should designate personnel responsible for staying abreast of regulatory and technical developments and ensure that university policies and employee practices reflect such changes. The contact information for these personnel should be available to all employees, and employees should be reminded of the appropriate points of contact for the organization's privacy and/or data protection policies (e.g., data protection officers) should they have any questions or concerns about the use of generative AI tools.
  6. Generative AI outputs can be incorrect, out-of-date, biased, or misleading. Employees are responsible for the content they create, regardless of the assistance of generative AI tools, and employees are encouraged to independently verify the accuracy of any outputs. Verification is particularly important when employees use AI in situations that require legal certification of accuracy, e.g., financial reports, court filings, and due diligence documents.
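
Part of the validation described in item 4 above can be automated. The following Python sketch scans a snippet of AI-generated code for a few commonly risky calls; the flagged-call list is a hypothetical example, not an IWU standard, and such a scan supplements rather than replaces human review and dedicated security tooling.

    import ast

    # Calls that commonly warrant review in generated code; illustrative only.
    RISKY_CALLS = {"eval", "exec", "compile", "system", "popen"}

    def flag_risky_calls(source):
        """Return (line, name) pairs for risky calls found in the source."""
        findings = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Call):
                name = getattr(node.func, "id", getattr(node.func, "attr", None))
                if name in RISKY_CALLS:
                    findings.append((node.lineno, name))
        return findings

    # Example: scan a snippet of generated code before committing it.
    generated = 'import os\nos.system(user_input)\nresult = eval(expr)\n'
    for lineno, name in flag_risky_calls(generated):
        print(f"line {lineno}: review use of {name}()")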

G. Data Usage and Retention

  1. When utilizing AI, employees are responsible for validating data outputs for accuracy, timeliness, and possible infringement of intellectual property rights.
  2. IWU employees should not utilize applications that retain data beyond the necessary timeframe required for analysis or operational purposes. When applicable, employees should remove data from applications once it is no longer required.
  3. Use AI applications solely for authorized business purposes. IWU-owned devices should be used for business purposes only, and the use of AI for personal or non-business activities on them is discouraged.

H. Copyright and Intellectual Properties

When utilizing generative AI, employees need to remember that under current U.S. copyright law, only content authored by humans is eligible for copyright protection. Unless and until the law changes, machine-generated content is ineligible for copyright protection, no matter how much human effort went into the prompts that led to the AI-generated result.

  1. Avoid creating works generated solely by an AI, since the university would not own the copyright to such works. When working with AI, employees should work collaboratively and iteratively so that the resulting work may be AI-assisted but remains primarily the work of a human author who is accountable for its final form.
  2. Employees should review and understand the status of copyright and intellectual property laws to ensure that (A) they do not put the university at risk of infringement, and (B) they do not create works that the university relies upon for enrollment and revenue but does not own or control.
  3. To model ethical use of AI and avoid academic integrity challenges, faculty and staff who use generative AI to develop content should acknowledge significant contributions from AI and follow best practices for citation and credit when and where applicable.

V. Sanctions

Violation of this policy may result in disciplinary actions, including but not limited to reprimand, suspension, termination of login rights, and legal action if warranted. Employees and contractors are expected to report any suspected violations of this policy to their supervisor and the Chief Information Security Officer as soon as possible.

VI. Related Information

A. Regulation of AI

FTC guidance regarding generative AI. Note the Commission’s warnings about representations of accuracy. 

Chatbots, deepfakes, and voice clones: AI deception for sale 

The Luring Test: AI and the engineering of consumer trust 

Keep your AI claims in check 

GPT, GDPR, AI Act: How (Not) To Regulate “Generative AI?”

B. Understanding Generative AI

Exploring Generative AI and Law: ChatGPT, Mid-journey, and Other Innovations | Pre Conference Primer

Managing the risks of generative AI - A playbook for risk executives - beginning with governance

C. Emerging EU Guidance

Although this document is primarily intended for a US audience, emerging guidance from EU regulators is useful for US and global audiences.