Recent Baker Donelson Analysis
The Baker Donelson AI Team recently published an analysis for enterprises deploying artificial intelligence tools such as ChatGPT, API-based services, and integrated products.
According to authors Alexandra P. Moylan, CIPP/US, AIGP, and Alisa L. Chestler, CIPP/US, QTE, the recent changes to OpenAI’s Usage Policies carry several implications for organizations deploying these and similar models:
- Governance and Risk Management: Companies should review their AI governance frameworks to ensure that AI tools are not positioned or marketed as providing regulated professional guidance, including medical, legal, or financial advice. Policies should make clear that human expertise and oversight by a licensed professional are required for all professional recommendations.
- Acceptable Use Policies: Enterprises integrating AI models through APIs or custom applications should update internal acceptable use policies to require appropriate subject-matter and professional oversight of AI-generated output that constitutes professional advice or falls within “high-risk use cases” (a human-in-the-loop review gate of this kind is sketched after this list).
- Training and Education: Organizations should provide continuous employee training on permissible and prohibited uses of generative AI (GAI) tools in professional contexts.
- Disclaimers and Client Communication: Entities embedding AI technology into consumer- or client-facing interfaces (such as digital health chatbots or legal information tools) should update disclaimers and user terms to align with applicable use policies and maintain consistency with federal and state consumer protection laws (see the disclaimer sketch after this list).
- Regulatory Compliance Alignment: For health care users, these updates underscore HIPAA and FDA compliance boundaries, as outputs engaging in clinical interpretation or decision support may implicate regulatory oversight. Legal organizations must similarly ensure that reliance on AI-generated output does not constitute the unauthorized practice of law.
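To make the acceptable-use recommendation concrete, here is a minimal Python sketch of a human-in-the-loop gate that withholds AI-generated output flagged as possible professional advice until a licensed professional reviews it. Every name in it (is_professional_advice, ReviewQueue, HIGH_RISK_TERMS) is a hypothetical illustration, not part of any OpenAI product or the authors' guidance, and the keyword heuristic stands in for whatever vetted classification or triage process an enterprise actually adopts.

```python
from dataclasses import dataclass, field
from typing import Optional

# Keyword heuristics standing in for a real policy classifier; a
# production system would use a vetted classification model or
# human triage instead. These terms are illustrative only.
HIGH_RISK_TERMS = ("diagnosis", "dosage", "lawsuit", "statute", "invest")

def is_professional_advice(text: str) -> bool:
    """Flag output that may constitute medical, legal, or financial advice."""
    lowered = text.lower()
    return any(term in lowered for term in HIGH_RISK_TERMS)

@dataclass
class ReviewQueue:
    """Holds flagged outputs until a licensed professional approves them."""
    pending: list = field(default_factory=list)

    def submit(self, text: str) -> None:
        self.pending.append(text)

def deliver_output(ai_text: str, queue: ReviewQueue) -> Optional[str]:
    """Release low-risk output immediately; hold everything else for review."""
    if is_professional_advice(ai_text):
        queue.submit(ai_text)
        return None  # withheld pending professional sign-off
    return ai_text

if __name__ == "__main__":
    queue = ReviewQueue()
    print(deliver_output("Our office hours are 9 to 5.", queue))    # released
    print(deliver_output("The recommended dosage is...", queue))    # None: withheld
    print(f"{len(queue.pending)} item(s) awaiting licensed review")
```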
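On the disclaimer point, a minimal sketch of one common pattern: appending a standing disclaimer to every response a consumer-facing chatbot returns. The disclaimer text below is a placeholder for illustration; the actual language should come from counsel and track the applicable use policies and consumer protection laws.

```python
# Placeholder language only; substitute counsel-approved text.
DISCLAIMER = (
    "This tool provides general information only and is not medical, "
    "legal, or financial advice. Consult a licensed professional before "
    "acting on anything you read here."
)

def with_disclaimer(ai_response: str) -> str:
    """Attach the standing disclaimer to a chatbot response."""
    return f"{ai_response}\n\n---\n{DISCLAIMER}"

if __name__ == "__main__":
    print(with_disclaimer("General information about small claims court..."))
```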