Our Take

Where AI, Employees, and the Law Intersect

On January 29, 2026, Baker Botts and ACC Houston hosted a half-day seminar featuring timely discussions on AI, employment law, and what’s ahead for the workplace. Partners Rich Harper, Paul Morico, and Scott Nelson were joined by Latasha McDade, Senior Counsel at Exxon Mobil Corporation; Tenley Krueger, Vice President, Global Intellectual Property at Technip Energies; and Courtney Flores, Managing Compliance Counsel at Motiva Enterprises, for a session titled “Where AI, Employees, and the Law Intersect.”

Key Takeaways 

  1. AI use in employment has shifted from experimentation to accountability. Employers of all sizes now use AI to inform hiring, pay, promotion, discipline, and workforce reductions. The central legal question no longer turns on a manager’s intent but on how AI tools are trained, governed, documented, and reviewed. Weak documentation, limited human involvement, or overreliance on vendor tools increases risk even where no discriminatory motive exists. Employers remain responsible for explaining employment outcomes and demonstrating meaningful oversight.
     
  2. Governance failures drive trade secret and IP risk. Generative AI expands the risk of confidential data disclosure, loss of trade secret protection, and ownership disputes. Employee inputs into third‑party tools and unreviewed AI‑assisted work product create exposure tied to process breakdowns rather than bad intent. Employers should treat AI as a drafting assistant rather than an author. Employees must review, validate, and finalize AI‑assisted work to confirm originality, ownership, and compliance.  
     
  3. Recording and investigations demand clear limits and human control. Workplaces increasingly operate on the assumption that activity may be recorded or summarized by AI‑enabled tools. Employers should address consent, transparency, and confidentiality directly in policy, particularly for internal investigations and privileged communications. Panelists cautioned against using generative AI for witness interviews, credibility assessments, or investigative conclusions. AI tools can support administrative tasks such as building timelines and organizing materials, but core fact‑finding and judgment remain human responsibilities.
     
  4. Vendor reliance does not shift accountability. Emerging state and local AI laws reinforce the employer’s role as the deployer of AI systems. Brand‑name vendors and off‑the‑shelf tools do not reduce legal responsibility. Employers need structured diligence, defined controls, and clear documentation of testing, safeguards, and human review. These expectations apply equally to large enterprises and smaller organizations adopting AI tools.  
     
  5. Cautious adoption and continuous learning define current practice. Across company sizes and industries, employers are moving forward carefully, learning alongside large language models, and adjusting governance as real‑world use reveals risk. The question is no longer whether to use AI but how to maintain control, accountability, and a clear understanding of how AI affects decisions about employees. Effective AI programs focus on employee decision‑making, documented human oversight, clear training, and transparent communication about how AI is used and why.