New updates include: Conversation Assist & Summary Prompt Update
➡️ Exact delivery dates may vary, and brands may therefore not have immediate access to all features on the date of publication. Please contact your LivePerson account team for the exact dates on which you will have access to the features.
🛑 The timing and scope of these features or functionalities remain at the sole discretion of LivePerson and are subject to change.
Conversation Assist
Features
More transparency and control when creating prompts
Back in May we rolled out some exciting enhancements to LivePerson’s Prompt Library, which is the tool that prompt engineers use to create and manage the prompts for Conversational Cloud solutions. You can learn about the enhancements in the release note published during the week of May 15th.
We are pleased to announce that we’ve updated Conversation Assist so that all of the same enhancements are now available when you create and manage Conversation Assist prompts.
Plus, we’ve added two more prompt settings:
- Max. tokens = Specify the maximum number of output tokens to receive back from the LLM. There are several possible use cases for adjusting this value. For example, you might want shorter responses. Or, you might want to adjust this to control your costs, as output tokens are typically more expensive than input tokens.
- Temperature = Specify a floating-point number between 0 and 1, inclusive. You can edit this field to control the randomness of responses. The higher the number, the more random the responses. There are valid use cases for a higher number, as it offers a more human-like experience. If you set this to zero, the responses are largely deterministic: You can expect near-identical responses to the same prompt every time.
So far, the Max. tokens and Temperature settings are only available for prompts accessed via Conversation Assist, but stay tuned. We plan to make them available elsewhere too, e.g., in prompts accessed via Conversation Builder bots.
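To make the two new settings concrete, here is a minimal sketch of how settings like these typically map onto an LLM request. The function name and payload fields below are illustrative assumptions for a generic LLM integration, not LivePerson's actual API.

```python
def build_llm_request(prompt: str, max_tokens: int = 256, temperature: float = 0.7) -> dict:
    """Assemble a generic LLM request with bounded output length and randomness.

    Illustrative only: field names vary by LLM provider.
    """
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0 and 1, inclusive")
    if max_tokens <= 0:
        raise ValueError("max_tokens must be positive")
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,    # caps output length (and output-token cost)
        "temperature": temperature,  # 0 = near-deterministic, 1 = most random
    }

# A cost-sensitive use case: shorter, more deterministic responses.
request = build_llm_request("Summarize the conversation.", max_tokens=100, temperature=0.0)
```

Lowering `max_tokens` trims both response length and output-token spend, while a temperature of 0 keeps responses consistent across repeated runs of the same prompt.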
Automated Conversation Summary
Enhancements
Enhancements for Out-of-the-box Summary Prompts
Brands using either the out-of-the-box paragraph style or structured style prompts will automatically benefit from these enhancements. If your brand has customized its prompts, review the updates in the out-of-the-box prompts and adjust your customized versions as needed.
Language Support for Structured Summaries
Structured summaries now support the same languages as the paragraph prompt. For more details on how the language is determined, refer here.
Improved Summary Details
The summary text will now distinguish between bot and human agents. Bots will be identified by their defined nicknames, while human agents will be referred to as "Agent 1," "Agent 2," etc., to indicate different agents handling the conversation.
Example:
Old summary
Gita placed an order for a queen mattress but needed a full-sized one. The agent informed her that she could not exchange it but could return it. The agent then created a return order for her and told her to wait for an update via email. Gita thanked the agent and said she had no further questions.
New summary
Consumer Gita placed an order for a queen mattress but needed a full-sized one. Helper Bot routed her to Agent 1, who informed her that she could not exchange the mattress but could return it. Agent 1 created a return order for her and assured her it would be taken care of. Agent 2 then asked for feedback on the interaction with Agent 1.