Many of us have been dazzled by the power and promise of generative AI since OpenAI's ChatGPT burst onto the scene in late 2022. A real technological disruption is under way. Large language models (LLMs) are rapidly maturing, and their mastery of natural human language is impressive. Generative AI is entering the workplace too. And that means challenges like security, privacy, and data access must be tackled to make sure IT service management (ITSM) solutions stay secure and business relevant.
Here at OpenText, we’re building a generative AI virtual agent to meet enterprise needs. Our solution is based on an architecture that combines the power of new generative AI technology with proven capabilities for data security, content management, unstructured data analytics, and ITSM. Our solution rests on four value pillars:
- Privacy and security
- Enterprise intelligence
- Access control
- Ubiquitous interface
Let’s explore these pillars in detail.
Privacy and Security
When employees start using public LLM services, proprietary sensitive data can easily (and quickly!) enter the public domain. That’s what happened when three employees at an electronics company unwittingly prompted ChatGPT with confidential company data.
Work-related LLM interactions must never be at risk of exposure. And to provide true value for enterprises, LLM services must also interact with knowledge databases or other domain-specific content in a secure way. With these musts in mind, we are building a private, secure, OpenText-operated LLM that can be fully managed and controlled by our customers.
This private service leverages our research, testing, and analysis of the most advanced open source LLM technologies. It is fine-tuned with relevant use case and industry data. And it is founded on our deep experience delivering trusted, secure, and scalable cloud solutions for customers around the globe.
Enterprise Intelligence
Another key pillar for providing essential value is enabling conversations that are enriched with domain-specific enterprise content. This content can span different enterprise functions. For example, some employees may ask HR-policy questions or request support for an IT issue. Sales may seek insights into customer accounts and contracts. And ITOps can benefit from an analysis of their IT network topology or application health.
A truly relevant virtual agent, one empowered by an LLM, can cover all these scenarios by accessing enterprise data and documents in real time. We refer to this as enterprise intelligence, and it is made possible by an innovative architecture that combines OpenText content management, pervasive OpenText IDOL indexing capabilities, and the secure private OpenText LLM engine discussed above.
The conversation below illustrates enterprise intelligence at work. The virtual agent is answering questions about a contract based on real-time information. The contract is managed in the contract management module of SMAX, our ITSM solution. A PDF of the contract is stored in the OpenText content management service.
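The internal architecture is not public, but the general pattern it describes, retrieving indexed enterprise content in real time and grounding the LLM's answer in it, can be sketched as a minimal retrieval-augmented loop. The `KeywordIndex` and `EchoLLM` classes below are toy stand-ins (assumptions for illustration), not the actual IDOL index or OpenText LLM:

```python
class Passage:
    """A snippet of enterprise content returned by the index."""
    def __init__(self, text):
        self.text = text

class KeywordIndex:
    """Toy stand-in for a real enterprise index: ranks documents
    by how many query terms they contain."""
    def __init__(self, docs):
        self.docs = docs

    def search(self, query, top_k=5):
        terms = set(query.lower().split())
        ranked = sorted(self.docs,
                        key=lambda d: -len(terms & set(d.lower().split())))
        return [Passage(d) for d in ranked[:top_k]]

class EchoLLM:
    """Placeholder for a private, enterprise-operated LLM endpoint."""
    def generate(self, prompt):
        return f"[generated answer grounded in prompt of {len(prompt)} chars]"

def answer(question, index, llm):
    # 1. Retrieve relevant enterprise content (contracts, policies, ...).
    passages = index.search(question, top_k=3)
    # 2. Ground the prompt in that content so the model answers from
    #    real-time enterprise data rather than its training set.
    context = "\n\n".join(p.text for p in passages)
    prompt = ("Answer using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    # 3. Generate the answer with the private LLM.
    return llm.generate(prompt)
```

In the contract example above, the retrieval step would pull the indexed contract text (or its PDF content) before the model composes a reply, which is what keeps the answer tied to current, managed data.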
Access Control
Generative AI virtual agents enabled with enterprise intelligence deliver clear value. But they also introduce a challenge: enterprise information is never uniformly accessible. Multiple layers of access control govern what each employee can see, and these controls vary along several dimensions:
- Information can vary by employee location. For example, an employee seeking answers about salary or medical leave policies should get answers that are accurate for that employee’s home country.
- Information may be restricted by role. For example, only managers may seek information about employee promotions or performance review policies—and this information should not be exposed to all enterprise levels.
- Information may be segmented by group membership. For example, members of a development team may tap the virtual agent for insights related only to the applications they are responsible for.
With our access control capabilities, the virtual agent has context about each user it interacts with. It knows the user’s location, role(s), and group memberships. In addition, our solution enables tagging of information with entitlement labels that reference these access control dimensions. Together, the user context and data entitlements are used to enforce real-time access control filtering to make sure answers provided by the virtual agent are relevant and permissible to a particular user.
Let’s look at another example. Below, a US-based employee asks about company-designated holidays for the calendar year and follows up with a question about sick leave.
Now another employee, this time located in France, asks the same questions. The generative AI virtual agent knows how to respond—according to company policies in France.
Ubiquitous Interface
The pillars described so far make sure the virtual agent provides the right level of relevant information in a way that meets the security, privacy, and access requirements of the enterprise. With these pillars in place, we are almost ready to unleash generative AI into the enterprise domain. But one final, important pillar remains: the virtual agent must be available to assist employees whenever and wherever they need it.
Usability and interface models can make or break virtual agent adoption. Expectations and channels for interactions can vary widely across functions. Typical options for users seeking advice, support, and collaboration include service portals and collaboration tools (such as Microsoft Teams) that offer access to channels and embedded bots. HR and service desk agents who need real-time answers to user questions often lean on virtual agents embedded in their support interfaces.
Access to and interactions with the virtual agent must therefore be multichannel. We are building a virtual agent architecture that can provide a variety of interfaces for smart conversations enriched by enterprise intelligence. This includes access from the SMAX service portal, from Microsoft Teams, and through an open smart virtual agent API set that customers or partners can use to construct bespoke interfaces. A plug-in virtual agent widget for quickly accessing agent-side and back-office applications is another type of interface in the works.
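The virtual agent API has not been published, so the sketch below only illustrates what a bespoke integration against such an API could look like. The endpoint path, payload fields, and bearer-token auth are all hypothetical assumptions, not the actual interface:

```python
import json
import urllib.request

# Hypothetical client sketch: endpoint path, payload fields, and auth
# scheme are illustrative assumptions, not a published API.

def build_request(base_url, token, question, session_id=None):
    payload = {"message": question}
    if session_id:
        payload["sessionId"] = session_id  # continue an existing conversation
    return urllib.request.Request(
        f"{base_url}/api/virtual-agent/v1/conversations",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def ask(base_url, token, question, session_id=None):
    req = build_request(base_url, token, question, session_id)
    with urllib.request.urlopen(req) as resp:  # performs the network call
        return json.load(resp)
```

A thin HTTP wrapper like this is all a custom portal, chat channel, or back-office widget would need, which is the point of exposing the agent through an open API rather than a single fixed interface.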
We are excited by the opportunities that generative AI will bring to the heart of the enterprise and how it will transform service management. As ITSM and ESM solution experts, our goal is to provide a strong architecture built on advanced LLM, data security, access control, and enterprise intelligence capabilities. We look forward to sharing more news and details in future blogs. Stay tuned!
Learn more:
- Modern ITSM: Work Is Different Now, Shouldn’t Your ITSM Be Different Too?
- Trends Shaping Service Management
- Getting the Digital Workplace Right
- Modern ITSM Delivers Happy Users, Efficient IT, Better Outcomes [Video]
- Why Organizations Around the World Choose SMAX for ITSM
Get the latest at the IT Operations blog.
We’d love to hear your thoughts on this blog. Comment below.