Release Date: 4/23/2025
Table of Contents
Ability to View Knowledge Base Article Generation Processing Steps
Ability to Create Bots with LLM Intentless Models
Data Source Ingestion Monitoring API
Support for Pre-Processing HTML Text
Zendesk Community Posts Integration
Enhanced Ticket Quality Visibility in Knowledge Base Article Generation
Features
The following features have been added to the Aisera GenAI Platform in this release.
KBA Generation
The following features have been added to Knowledge Base Article Generation in this release.
Ability to View Knowledge Base Article Generation Processing Steps
This release gives you visibility into data processing for content generation by presenting each stage visually as part of a processing funnel.
After you have integrated a Data Source with your Aisera tenant instance, associated it with your bot, and the KBA Generation job has completed, you can review a detailed breakdown of the processed data:
Steps to Access:
- Navigate to Content Generation > Knowledge Generation and select your required configuration.
- Click the Generate Knowledge button.
- Once the job completes successfully, you will see the knowledge clusters.
- Choose the Data Processing Funnel button to expand the funnel and view the details, as illustrated in the following screenshot.
Breakdown of Data Processing Stages:
- Total Tickets:
  - Represents the total number of tickets in the selected ticket data source(s) on the day of execution.
  - You can click Total Tickets to navigate to SOR → Tickets, where you can see the list of tickets.
- Filtered & Processed Ticket Set:
  - This stage displays the tickets that remain after applying the filters in Action → Configuration and removing those that do not meet the preprocessing criteria.
  - Preprocessing criteria:
    - The ticket title and at least one of the following fields (comments or resolution notes) must not be empty.
    - Comments must have a timestamp.
    - Very Good and Good quality tickets that were already part of previous clusters are not reconsidered.
  - You can click Filtered & Processed Ticket Set to navigate to SOR → Tickets, where you can view the processed tickets.
  - The grey box beside this stage represents tickets that do not meet the filtering criteria. This section is non-clickable.
- Tickets with Resolution:
  - Represents the subset of filtered tickets that contain a solution.
  - The grey box beside this stage represents filtered tickets that do not have a solution.
  - You can click Tickets with Resolution to navigate to the Ticket Details Page, as shown in the screenshot.
- Total Clusters:
  - Displays the number of ticket clusters formed, which are visible on the Knowledge Generation page.
  - You can click this box to see the clusters on the Knowledge Generation page.
Limitations:
- The Data Processing Funnel is not available for past jobs (jobs that have already been executed).
- The funnel is not available when the job filter is set to ALL.
Conversational AI 2.0
The following feature has been added to Conversational AI in this release.
Ability to Create Bots with LLM Intentless Models
Most conversational platforms work from predefined intents: they are designed to recognize specific user intentions and provide the appropriate responses. This approach has significant shortcomings. Such bots can only answer a predefined set of questions, they don't take the question's context into account, and they don't exploit the conversation history. Moreover, building the intent database is a time-consuming, error-prone process, and the result lacks generalizability.
Welcome to Conversational AI 2.0, where you can experience more human-like conversations with history, context, and disambiguation. No more defining and managing intents and mapping actions. Conversational AI 2.0 is intentless, meaning it is easier to set up, easier to manage, and provides faster time to value. It uses LLMs to collect input, reason, and organize results from multiple fulfillments to enrich the user experience. New intentless apps do not try to interpret intent but provide responses based on the user input received, offering a more flexible and adaptable conversational experience.
Since there are no predefined intents, the system must determine which user inputs are required to execute workflows. This is achieved by using LLMs to intelligently review flow descriptions and determine the mandatory and optional input data fields needed to execute the flow. This process of conversationally collecting required inputs is called slot filling.
Here is a high level architectural diagram of how this works.
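To make slot filling concrete, here is a minimal Python sketch of the idea. It is purely illustrative: the derive_slots and next_question helpers and the llm_complete callable are hypothetical, not part of the Aisera platform.
import json

# Hypothetical sketch of LLM-driven slot filling; llm_complete is any
# function that sends a prompt to an LLM and returns its text response.
def derive_slots(flow_description, llm_complete):
    """Ask an LLM to infer the inputs a flow needs, as JSON."""
    prompt = (
        "List the mandatory and optional input fields needed to execute "
        "this workflow, as JSON with keys 'mandatory' and 'optional':\n"
        + flow_description
    )
    return json.loads(llm_complete(prompt))

def next_question(slots, filled):
    """Return a prompt for the first mandatory slot the user has not filled."""
    for name in slots["mandatory"]:
        if name not in filled:
            return f"Could you provide the {name}?"
    return None  # all mandatory slots collected; the flow can execute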
More information about setting up Conversational AI 2.0 will appear in subsequent releases.
Analytics
The following features have been added to Analytics in this release.
User Retention Metric
The Total Returning Users Over Time metric, in the Users tab of the Prebuilt Analytics window, depicts the number of returning users in a given period of time.
This helps you determine whether the bot is receiving many repeat visitors.
Time to Correct Response
The Prebuilt Analytics window now includes the Average First Resolution Rate Over Time metric in the Requests tab.
This metric measures the time taken in seconds from when a user initiates an actionable request or query (excluding casual conversations) until the bot provides a first relevant, actionable, correct response.
APIs
The following features have been added to APIs in this release.
Data Source Ingestion Monitoring API
This release includes the new /dsexecution API endpoint, designed to provide you with greater visibility into the status of your Data Source ingestion jobs. This enhancement allows you to programmatically check the execution status and key metrics for your configured data sources.
This API provides you with:
- Increased Transparency: Get clear insight into whether your data ingestion jobs are succeeding, failing, or currently running.
- Proactive Monitoring: Programmatically monitor your critical data pipelines to quickly identify and troubleshoot ingestion issues.
- Improved Control: Stay informed about the duration and success rate of your connector and overall ingestion processes.
How to Access the API:
You can access this API using a GET request to the following endpoint:
https://[your-aisera-platform-url]/dsexecution
Example:
https://abc.api.aisera.cloud/dsexecution?tenantId=123&datasourceId=456
Authentication:
Authenticate your request using Basic Authentication with your Aisera platform credentials (the same username and password you use to log in to the Aisera platform).
Request Parameters:
- tenantId: Your unique tenant ID.
- datasourceId: The ID of the specific data source associated with your bot.
You can find the data source ID on the right side of your Data Source Details window.
Example Request (Conceptual - replace with your actual details):
GET https://[your-aisera-platform-url]/dsexecution?tenantId=YOUR_TENANT_ID&datasourceId=YOUR_DATA_SOURCE_ID
Authorization: Basic [Your Base64 Encoded Credentials]
Example:
curl --location 'https://abc.api.aisera.cloud/dsexecution?tenantId=123&datasourceId=456' \
--header 'Authorization: Basic dGVzdEB0ZXN0LmNvbTpwYXNzd29yZGhlcmUK' \
--header 'Cookie: stickynesscookie=eb2de412f924f708'
Replace [your-aisera-platform-url], YOUR_TENANT_ID, YOUR_DATA_SOURCE_ID, and [Your Base64 Encoded Credentials] with your specific configuration.
API Response Details:
The API will return a JSON object containing the ingestionStatus for the specified data source's latest run.
The key fields you will find in the response include:
- jobName: The internal name given to the job executed for this data source.
- connectorDurationSecs: The time, in seconds, that the connector part of the ingestion job took to run.
- kbIngestionfailures: The number of Knowledge Base articles or documents that failed specifically at the connector level during this run.
- dataSourceId: The internal ID of the data source.
- ingestionPipelineStatus: The overall status of the entire ingestion pipeline for this job execution. Common values include SUCCEEDED, PENDING, RUNNING, FAILED, or KILLED.
- lastRunStatus: The status of the most recent run. (Note: There is a minor fix pending, so this value might not always be accurate in the very short term.)
- jobDurationSecs: The total time, in seconds, that the entire ingestion pipeline job took to complete.
- ingestionJobExecutionId: The unique internal ID for this specific execution of the ingestion job.
- jobStartedAtTimestamp: The timestamp indicating when this ingestion job execution began.
- kbIngestionCount: The total number of Knowledge Base articles or documents processed during this run (includes both successful and failed items at the connector level).
- ingestionRunMessage: An optional message related to the ingestion run, usually empty for successful runs.
- ingestionRunError: Contains an error message if the ingestion process encountered an error.
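As an illustration, the following Python sketch calls the endpoint and inspects the status fields described above. The requests package, URL, IDs, and credentials are placeholders; substitute your own.
import base64
import requests

BASE_URL = "https://abc.api.aisera.cloud"  # your Aisera platform URL
credentials = base64.b64encode(b"user@example.com:password").decode()

response = requests.get(
    f"{BASE_URL}/dsexecution",
    params={"tenantId": "123", "datasourceId": "456"},
    headers={"Authorization": f"Basic {credentials}"},
)
response.raise_for_status()
run = response.json()

# ingestionPipelineStatus is SUCCEEDED, PENDING, RUNNING, FAILED, or KILLED.
if run.get("ingestionPipelineStatus") == "FAILED":
    print("Ingestion failed:", run.get("ingestionRunError"))
else:
    print(run.get("jobName"), "status:", run.get("ingestionPipelineStatus"),
          "- total duration:", run.get("jobDurationSecs"), "seconds")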
Data Sources
The following features have been added to Data Sources in this release.
Support for Pre-Processing HTML Text
This release includes an updated parser that incorporates Presidio v2 to accurately identify and protect sensitive data within the content you ingest.
A key benefit of this upgrade is the new support for pre-processing HTML text. Many data sources, such as Knowledge Bases and Blog articles, contain rich content formatted in HTML.
With this enhancement, your bot can effectively analyze and handle PII embedded within HTML, ensuring more comprehensive data privacy and compliance while preserving the original structure and formatting of your content.
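Conceptually, pre-processing HTML for PII looks like the following sketch, which analyzes each HTML text node with Presidio and writes redacted text back in place so the markup survives. This illustrates the technique only, not Aisera's internal parser, and assumes the beautifulsoup4, presidio-analyzer, and presidio-anonymizer packages (plus Presidio's default spaCy model) are installed.
from bs4 import BeautifulSoup
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

html = "<p>Contact Jane Doe at jane.doe@example.com for access.</p>"
soup = BeautifulSoup(html, "html.parser")

# Analyze each text node separately so tags and attributes stay untouched.
for node in soup.find_all(string=True):
    results = analyzer.analyze(text=str(node), language="en")
    if results:
        redacted = anonymizer.anonymize(text=str(node), analyzer_results=results)
        node.replace_with(redacted.text)

print(soup)  # structure preserved; PII in the text is replaced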
Box Data Source
The Box Data Source, when you connect it using the File Data setup, now supports these key features:
- Multiple Folder Ingestion: You are no longer limited to ingesting files from a single Box folder at a time. In the Data Source configuration, you can now enter multiple Box Folder IDs, separated by commas, in the Folder Id parameter. This allows you to include content from several specific folders within a single data source.
- Recursive Folder Crawling: A new Enable Recursion checkbox has been added to the Box connector configuration. When selected, the connector automatically crawls and ingests supported files from all sub-folders nested within the specified Folder IDs. This eliminates the need to manually add every sub-folder ID individually when you want to include nested content (see the sketch after this list).
- OAuth Client Credentials Authentication: For enhanced security and alignment with standard Box integration practices, we now support the OAuth Client Credentials authentication method for your Box Integration. This option provides a secure way for server-to-server communication between Aisera and Box.
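The sketch below shows what recursive crawling amounts to, using the Box Python SDK (boxsdk) for illustration. The folder IDs and credentials are placeholders, and this is not the connector's actual code.
from boxsdk import Client, OAuth2

def crawl_files(client, folder_ids, recurse=True):
    """Yield every file under the given Box folders."""
    stack = list(folder_ids)
    while stack:
        folder_id = stack.pop()
        for item in client.folder(folder_id).get_items():
            if item.type == "file":
                yield item
            elif item.type == "folder" and recurse:
                stack.append(item.id)  # descend into sub-folders

# Folder IDs as entered in the Folder Id parameter, e.g. "123,456":
auth = OAuth2(client_id="...", client_secret="...", access_token="...")
for f in crawl_files(Client(auth), "123,456".split(",")):
    print(f.name)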
This new connection configuration gives you:
- Increased Flexibility: Easily ingest data from multiple, non-contiguous folders and their sub-folders.
- Simplified Configuration: Reduce manual effort by specifying multiple folders and enabling recursion instead of creating numerous data sources or listing every single folder ID.
- Comprehensive Data Coverage: Ensure you capture all relevant files from deeply nested folder structures within Box.
To set up the Box multi-data source configuration:
- Create an Aisera Service User for your Box application, if you don’t already have one.
- Use Settings > Integration > + New Integration to create an integration between your Box application and the Aisera Gen AI platform, using the OAuth 2.0 Client Credentials Grant.
- Choose Box from the list of Integrations.
- Select OAuthClientCredentials and enter the OAuth parameters.
- Navigate to Settings → Data Sources → + New Data Source.
- Choose the File Data option, instead of the regular Box connector option.
- Select Cloud Files. You can now see the integrations that are available for your tenant.
- Choose the Box integration that you created earlier.
- After you select the integration, you can see the updated Folder Id parameter, where you can enter comma-separated Box Folder IDs.
(In the Box web app, a folder's ID appears at the end of the folder's URL.)
Note that Folder Id is a required field.
- Select the Enable Recursion checkbox if you want to include sub-folders.
- Use the default configuration for the rest of the Configuration parameters.
Zendesk Community Posts Integration
You can now leverage the Generic Connector to ingest Community Posts (and their associated comments) directly from your Zendesk instance into your Aisera Gen AI tenant instance as Knowledge Base Articles.
The Community Post Integration allows you to:
- Expand Your Knowledge Base: Bring valuable discussions and solutions from your Zendesk community forums into Aisera, making this knowledge accessible for AI-powered answers and automation.
- Capture Richer Content: Ingest both the original community post and all its comments, providing a comprehensive view of the conversation and potential solutions.
To set up the Zendesk Community Integration:
- Create an Aisera Service User for your Zendesk application, if you don’t already have one.
- Use Settings > Integration > + New Integration to create an integration between your Zendesk application and your Aisera Gen AI platform instance, using the Generic Connector.
- Choose Generic from the list of Integrations.
- Specify the authorization option that you use with your Zendesk instance.
- Navigate to Settings → Data Sources → + New Data Source.
- Choose the Generic API Data Source option.
- Specify Knowledge Base Learning as the function for your Data Source.
- Configure the data source to connect to your Zendesk instance and specify the endpoints for the Community Posts and comments (see the example below).
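For reference, a typical configuration targets Zendesk Help Center endpoints like the ones in this Python sketch. The paths, subdomain, and credentials are illustrative; confirm them against Zendesk's API documentation.
import requests

ZENDESK = "https://yoursubdomain.zendesk.com"  # your Zendesk instance
auth = ("user@example.com/token", "API_TOKEN")  # Zendesk API token auth

posts = requests.get(f"{ZENDESK}/api/v2/community/posts.json", auth=auth).json()
for post in posts["posts"]:
    comments = requests.get(
        f"{ZENDESK}/api/v2/community/posts/{post['id']}/comments.json",
        auth=auth,
    ).json()
    print(post["title"], "-", len(comments["comments"]), "comments")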
This new integration helps you unlock more knowledge from your support ecosystem, enhancing the overall effectiveness of your Aisera Gen AI platform.
Enhancements
The following enhancements have been added to the Aisera GenAI Platform in this release.
KBA Generation
The following enhancements have been added to Knowledge Base Article Generation in this release.
Enhanced Ticket Quality Visibility in Knowledge Base Article Generation
This release includes an enhanced transparency feature for the Knowledge Base Article Generation process that allows you to assess the quality of processed tickets.
Because the quality of generated knowledge depends directly on ticket quality, this feature classifies tickets into three categories: Very Good, Good, and Poor.
This classification allows you to evaluate the ticket quality and, consequently, the resulting Knowledge Base Article quality.
- Very Good – A ticket that has a resolution and also has acknowledgment from the end user.
- Good – A ticket that has a resolution but no acknowledgment from the end user.
- Poor – A ticket that has no resolution.
During KBA Generation, only tickets that are rated as Very Good and Good are considered, while Poor quality tickets are excluded.
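The classification described above reduces to a simple rule, sketched here for clarity (a hypothetical rendering of the logic, not the KBA Generation job's implementation):
def ticket_quality(has_resolution, user_acknowledged):
    if not has_resolution:
        return "Poor"       # excluded from KBA Generation
    if user_acknowledged:
        return "Very Good"  # resolution confirmed by the end user
    return "Good"           # resolution present but not confirmed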
Steps to Access Ticket Quality Details:
- Navigate to Content Generation > Knowledge Generation and select your desired configuration.
- Click the Generate Knowledge button.
- Once the job completes successfully, you will see the knowledge clusters.
- Choose Tickets with Resolution (purple box) or Without Resolution (grey box) in the data processing funnel, or click the ticket count at each cluster level, to access the Ticket Details Page, which provides comprehensive information about each ticket.
Ticket Details Page Overview:
- ID: Shows the unique identifier of the ticket.
- Title: Presents the ticket's title.
- Ticket Type: Indicates the type of ticket (such as incident, problem, request, or alert).
- Priority: Reflects the priority level assigned to the ticket in the customer's Ticket Management System.
- Quality: Specifies the quality classification of the ticket as determined by the KBA Generation job, based on the presence of a resolution.
- Resolution Category: Identifies the category under which the ticket's resolution falls.
The fixed categories include:
- Cannot Be Self-Resolved – Internal Fix Needed: Issues arising from internal company systems that require intervention by specific personnel, such as a developer.
- Agent Support Required for Resolution: Issues that the user can partially resolve but that require assistance from an agent for certain steps, such as granting permissions.
- Absence of Information: Cases where no resolution exists; these fall under the Poor category.
- Resolution Without Feedback: Situations where a solution is provided, but the end user has not confirmed that the issue is resolved.
- Resolution with Feedback: A situation where a solution is provided, and the end user has confirmed that the issue is resolved.
- Quality Classification Reason: Provides the rationale for assigning a particular quality category to the ticket. For example, for Resolution with Feedback: "The agent provided a resolution action, which the user followed and confirmed that it resolved the issue."
- Created Date: Indicates when the ticket was created.
Additional Features:
- Filtering: You can filter tickets based on any of the columns mentioned above to streamline your review process.
- Exporting: There is an option to download the ticket information, allowing you to share details with the IT agents for further action, such as adding resolutions to tickets that don’t have a resolution.
This feature empowers you to monitor and improve the quality of your tickets, leading to more accurate and valuable knowledge base article generation.
Known Issues
This release addresses the following Known Issues.
Zendesk API Limitation
The Zendesk Search API has a known limitation: it returns a maximum of 1000 results. This limitation can lead to incomplete data ingestion for large knowledge bases.
To address this, our Zendesk Data Source connector now intelligently leverages the Zendesk Articles API when supported filtering parameters, such as label_names or locale, are used in your data source configuration. The Articles API does not have this 1000-result limitation.
This enhancement ensures more comprehensive and reliable data ingestion of your Zendesk Knowledge Base articles, especially for large knowledge bases or when filtering by labels, by bypassing the Search API's result limit.
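For reference, the following sketch shows how the Articles API can be paged with those filters. The request is illustrative; the subdomain, locale, labels, and credentials are placeholders.
import requests

ZENDESK = "https://yoursubdomain.zendesk.com"
auth = ("user@example.com/token", "API_TOKEN")

url = f"{ZENDESK}/api/v2/help_center/en-us/articles.json"
params = {"label_names": "vpn,onboarding"}  # same filters used by the connector
while url:
    page = requests.get(url, params=params, auth=auth).json()
    for article in page["articles"]:
        print(article["id"], article["title"])
    url, params = page.get("next_page"), None  # follow pagination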