Guidance on Using Generative AI Tools
Generative AI tools offer incredible opportunities to enhance productivity, creativity, and research. However, their use comes with specific responsibilities to ensure compliance with privacy, security, and intellectual property regulations. The following guidelines are intended to help you navigate the safe and ethical use of generative AI tools at Jefferson Lab.
1. Sensitive Data Protection
- Do not input sensitive data: Never enter confidential, proprietary, export-controlled, personally identifiable information (PII), controlled unclassified information (CUI), sensitive science and technology (S&T), or other sensitive information into generative AI tools. This includes any data subject to privacy laws or other legal protections.
- Data Security: Before using generative AI, verify that the tool aligns with the Lab’s data security policies. Avoid sharing data or lab-specific details that could be sensitive.
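One practical safeguard is to screen a prompt for obviously sensitive patterns before it ever reaches an external AI tool. The sketch below is a minimal, illustrative example: the regex patterns and the `screen_prompt` helper are assumptions for demonstration, not an approved Lab tool, and a clean result does not prove the text is safe to submit.

```python
import re

# Illustrative patterns only -- a real deployment would rely on the Lab's
# approved data-classification tooling, not this short regex list.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt.

    An empty list means this scan found nothing -- it does NOT certify
    the text as safe; human judgment is still required.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
```

For example, `screen_prompt("Contact jdoe@jlab.org about the schedule")` flags `email`, while a prompt containing only public information returns an empty list.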
2. Licensing and Terms of Use
- Understand the Terms of Service: Most generative AI tools, including services like ChatGPT, do not have special agreements with Jefferson Lab regarding data security or privacy. Always read and understand the terms of service of the tools you are using.
- Responsibility for Compliance: Users are personally responsible for adhering to the terms of use of the AI tools and ensuring they do not violate any terms regarding data usage or security.
3. Intellectual Property (IP) Considerations
- Originality and Ownership: Be mindful that generative AI tools might create outputs that resemble existing copyrighted works. Before utilizing AI-generated content, ensure that it does not infringe on intellectual property rights.
- Rights to Data: Only use data you have the right to input into AI systems.
4. Ethical and Responsible Use
- Consent to Audit: Interactions with AI tools will be monitored, inspected and audited for adherence to acceptable use policies.
- Prohibited Use: Generative AI should not be used for activities that violate ethical guidelines, the provider's terms of service, or applicable laws. Any illegal or unauthorized use of AI tools is strictly prohibited.
- Bias: Be aware of potential biases in AI models and ensure that any AI-generated content aligns with Jefferson Lab's ethical standards, particularly when used for research or public-facing communication.
- AI Hallucination: AI hallucination refers to instances where AI models, particularly large language models (LLMs), generate incorrect, nonsensical, or misleading outputs, often appearing confident and plausible. To address this, implement human verification or oversight processes for critical outputs to catch and correct hallucinations.
- Rely on human fact-checking: Encourage human review and fact-checking of AI-generated content to ensure accuracy and reliability.
- Create hallucination-specific controls and tests: Develop controls and tests to detect and mitigate hallucinations, such as connecting GenAI to context-specific data, creating guardrails, and testing multiple prompt scenarios.
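One simple hallucination-specific test is a grounding check: measure how much of an AI-generated answer can be traced back to the reference material it was given. The sketch below uses a crude word-overlap heuristic; the `grounded_fraction` function, its sentence splitting, and the 0.5 threshold are all illustrative assumptions, not a substitute for retrieval scoring or human fact-checking.

```python
def grounded_fraction(answer: str, context: str, threshold: float = 0.5) -> float:
    """Fraction of answer sentences whose words mostly appear in the context.

    A crude overlap heuristic -- production systems would use retrieval
    scores or an entailment model; the 0.5 threshold is an arbitrary
    assumption chosen for illustration.
    """
    context_words = set(context.lower().split())
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    if not sentences:
        return 0.0
    grounded = 0
    for sentence in sentences:
        words = sentence.lower().split()
        overlap = sum(w in context_words for w in words) / len(words)
        if overlap >= threshold:
            grounded += 1
    return grounded / len(sentences)
```

A low score flags an answer for human review: given the context "The accelerator delivered beam to Hall A in March", the sentence "Beam was delivered to Hall A" scores well, while an unsupported claim such as "The beam energy was 12 GeV" scores poorly.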
5. Consult IT for Support
- If you are unsure whether a specific use case aligns with the Lab’s policies or if you need further assistance, please contact the IT Help Desk (helpdesk@jlab.org) or reach out to the appropriate department for clarification. It’s essential to stay informed and up-to-date with Lab policies regarding AI tools.
6. AI Tools
- To help you navigate the available options, here is a curated list of recommended tools, as well as those to avoid.
- Tools available for use at Jefferson Lab:
- Tools prohibited from use at Jefferson Lab:
By following these guidelines, users can harness the benefits of generative AI tools while ensuring the security, ethical integrity, and legal compliance of their work.