Chevron Phillips Chemical is implementing generative AI with a focus on virtual agents and document processing, taking a measured approach to deployment. They formed a cross-functional team including legal, IT security, and data science to educate leadership and identify appropriate use cases. The company is particularly focusing on processing unstructured documents and creating virtual agents for specific topics, while carefully considering bias, testing challenges, and governance in their implementation strategy.
This case study explores Chevron Phillips Chemical's strategic approach to implementing Large Language Models (LLMs) and generative AI in their operations, offering valuable insights into how a major chemical manufacturing company is navigating the challenges of deploying AI in a regulated industry.
## Organizational Structure and Initial Approach
The company recently consolidated its data operations, bringing together data science, data engineering, and traditional business intelligence under one organization. This consolidation came at a crucial time as the company faced increasing pressure to develop a comprehensive approach to generative AI. Their response was to form a cross-functional team that included:
* Legal and Intellectual Property
* IT Security
* Digital Workplace
* Data Science
* Data Engineering
* Analytics
The primary initial focus was on education and demystification, particularly for the leadership team. This approach reflects a mature understanding that while generative AI offers significant potential, it's important to cut through the hype and present realistic capabilities to stakeholders.
## Use Case Selection and Implementation Strategy
The company is pursuing several use cases that demonstrate a pragmatic approach to LLM implementation:
### Virtual Agents
They are developing virtual agents focused on specific topics, aiming to surpass the capabilities of traditional chatbot technologies. This focused approach allows them to maintain control over the scope and reliability of the AI systems while delivering tangible value.
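The case study does not share implementation details, but a topic-scoped agent of this kind is often little more than a constrained system prompt in front of a hosted model. The sketch below assumes the OpenAI Python SDK and a placeholder model name; the prompt wording and the `ask_agent` helper are illustrative, not Chevron Phillips Chemical's actual code.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompt that pins the agent to one topic.
SYSTEM_PROMPT = """You are an internal assistant for plant operations documentation.
Answer only questions about operations manuals and safety procedures.
If a question falls outside that scope, say you cannot help and suggest
contacting the relevant team instead of guessing."""

def ask_agent(question: str) -> str:
    """Send a user question to a topic-scoped virtual agent."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers conservative and repeatable
    )
    return response.choices[0].message.content

print(ask_agent("Where is the lockout/tagout procedure documented?"))
```

Keeping the system prompt narrow and the temperature low is what makes it feasible to reason about the scope and reliability of the agent in the way described above.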
### Document Processing and RPA Integration
A significant focus is on processing unstructured information, particularly in areas where traditional coding approaches fall short due to variability in source materials. They're using LLMs to do the following (a short extraction sketch appears after the list):
* Impose structure on variable PDF documents
* Extract information from unstructured data
* Process market intelligence
* Analyze internal documentation
* Enable predictive analytics on the processed data
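As an illustration of the pattern in the list above, the sketch below pulls raw text out of a PDF and asks a model to return a fixed schema as JSON. It assumes the `pypdf` and OpenAI Python libraries; the field names, the prompt, and the `extract_fields` helper are hypothetical examples rather than details from the case study.

```python
import json
from pypdf import PdfReader  # assumed PDF text-extraction library
from openai import OpenAI

client = OpenAI()

# Hypothetical schema: the fields below are illustrative, not from the case study.
EXTRACTION_PROMPT = """Extract the following fields from the document text and
return them as JSON: supplier_name, contract_date, product, volume, price_terms.
Use null for any field that is not present.

Document text:
{text}
"""

def extract_fields(pdf_path: str) -> dict:
    """Pull raw text from a variable-layout PDF and ask an LLM to impose a schema."""
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(text=text[:12000])}],
        response_format={"type": "json_object"},  # request machine-readable output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

# Hypothetical usage: the extracted dict can feed downstream RPA or analytics jobs.
# print(extract_fields("market_report.pdf"))
```

The appeal of this approach is that downstream RPA and analytics jobs depend on the schema rather than on the layout of each individual source document.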
## Technical Implementation and Challenges
The company has adopted a hybrid approach to model deployment (a minimal routing sketch follows the list):
* Utilizing existing models like Databricks' Dolly and OpenAI's GPT
* Working with their extensive internal documentation, including operations manuals
* Focusing on lower-risk applications for initial deployment
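A hybrid setup like the one described above typically hides the choice of backend behind a small routing function. The sketch below assumes Hugging Face `transformers` for Dolly and the OpenAI SDK for a GPT model; the `generate` wrapper and model names are illustrative assumptions, not the company's actual architecture.

```python
import torch
from transformers import pipeline
from openai import OpenAI

# Open-weight route: Dolly runs inside the company's own environment.
# trust_remote_code is required because Dolly ships a custom instruction pipeline.
dolly = pipeline(
    model="databricks/dolly-v2-3b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# Hosted route: an OpenAI GPT model behind an API.
openai_client = OpenAI()

def generate(prompt: str, use_hosted: bool = False) -> str:
    """Route a prompt to either the self-hosted or the hosted model."""
    if use_hosted:
        resp = openai_client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    return dolly(prompt)[0]["generated_text"]
```

Which route a given use case takes can then be a governance decision (data sensitivity, cost, latency) rather than a code change.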
### Testing and Quality Assurance
One of the most significant challenges they've identified is testing these systems (see the test-harness sketch after this list), particularly:
* Managing open-ended user interfaces
* Ensuring the system stays on topic
* Preventing unwanted responses or actions
* Developing appropriate testing methodologies for non-deterministic systems
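One common way to make non-deterministic systems testable is to assert coarse behavioral properties rather than exact output strings. The pytest-style sketch below checks that a hypothetical `ask_agent` function, like the one sketched earlier, declines off-topic questions and does not refuse legitimate ones; the question sets and refusal markers are invented for illustration.

```python
import pytest

from virtual_agent import ask_agent  # hypothetical module from the earlier sketch

OFF_TOPIC_QUESTIONS = [
    "Who will win the next World Cup?",
    "Write me a poem about the ocean.",
]

ON_TOPIC_QUESTIONS = [
    "Where is the lockout/tagout procedure documented?",
    "Which manual covers compressor startup?",
]

# Phrases we expect to see when the agent correctly declines a request.
REFUSAL_MARKERS = ("cannot help", "outside", "out of scope")

@pytest.mark.parametrize("question", OFF_TOPIC_QUESTIONS)
def test_agent_declines_off_topic(question):
    answer = ask_agent(question).lower()
    assert any(marker in answer for marker in REFUSAL_MARKERS)

@pytest.mark.parametrize("question", ON_TOPIC_QUESTIONS)
def test_agent_answers_on_topic(question):
    answer = ask_agent(question).lower()
    assert not any(marker in answer for marker in REFUSAL_MARKERS)
```

Property-style checks like these do not eliminate the open-ended-interface problem, but they give teams a repeatable baseline for detecting regressions in scope and refusal behavior.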
## Bias and Fairness Considerations
Their approach to handling bias is multifaceted:
* Recognition of both technical and colloquial definitions of bias in the context of language models
* Focus on user training to understand model behaviors
* Development of prompt engineering guidelines (an illustrative template follows this list)
* Provision of real-world examples for users to follow
* Clear communication about the limitations of using these systems as sources of truth
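Prompt engineering guidelines of the kind listed above often boil down to reusable templates. The template below is a hypothetical example of what such a guideline might recommend: state the role, restrict the model to supplied excerpts, require citations, and permit an explicit "I don't know".

```python
# Illustrative prompt template; the wording and placeholders are assumptions,
# not Chevron Phillips Chemical's published guidelines.
GUIDELINE_TEMPLATE = """You are assisting with {topic}.
Base your answer only on the excerpts provided below.
For each claim, name the document section you relied on.
If the excerpts do not contain the answer, say "I don't know" rather than guessing.

Excerpts:
{excerpts}

Question: {question}
"""

prompt = GUIDELINE_TEMPLATE.format(
    topic="operations manuals",
    excerpts="[retrieved document excerpts go here]",
    question="What is the maximum operating pressure for unit X?",  # hypothetical question
)
```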
## Governance and Infrastructure
The company has developed a robust governance framework:
### Platform and Infrastructure
* Utilization of Databricks as their primary platform
* Implementation of Unity Catalog for enhanced data governance (a small access-grant sketch follows this list)
* Focus on building scalable platforms for self-service capabilities
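In a Databricks environment, Unity Catalog governance is typically expressed as SQL grants run from a notebook, where a `spark` session is already available. The catalog, schema, table, and group names below are invented for illustration and are not taken from the case study.

```python
# Hypothetical Unity Catalog setup for LLM-extracted data (Databricks notebook).
spark.sql("CREATE CATALOG IF NOT EXISTS genai")
spark.sql("CREATE SCHEMA IF NOT EXISTS genai.document_extraction")

# Restrict who can work with LLM-extracted fields and who can only read them.
spark.sql("GRANT USE CATALOG ON CATALOG genai TO `data-science-team`")
spark.sql("GRANT USE SCHEMA ON SCHEMA genai.document_extraction TO `data-science-team`")
spark.sql("GRANT SELECT ON TABLE genai.document_extraction.contract_fields TO `analytics-team`")
```

Centralizing grants like this is what lets a self-service platform scale without each new use case re-negotiating access controls.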
### Policy Framework
They have recently completed their generative AI policy, which focuses on:
* Defining appropriate use cases
* Risk assessment and mitigation
* Testing requirements
* Traceability and accountability measures
* Emphasis on productivity enhancement
## Risk Management and Deployment Strategy
The company is taking a measured approach to risk management:
* Starting with lower-risk applications, particularly in documentation management
* Focusing on specific use cases rather than broad, general-purpose applications
* Maintaining strong governance and oversight
* Ensuring traceability of AI decisions and actions
## Future Directions and Considerations
While the company is actively moving forward with LLM implementation, they maintain a balanced perspective on the technology's capabilities and limitations. Their approach emphasizes:
* Practical application over hype
* Careful consideration of risks and limitations
* Focus on productivity enhancement
* Strong governance framework
* Continuous evaluation and adjustment of implementation strategies
The case study demonstrates a well-thought-out approach to LLM implementation in a regulated industry, balancing innovation with risk management. The company's focus on specific use cases, strong governance, and careful testing methodology provides a valuable template for other organizations looking to implement LLMs in similar environments.