Chatbot
A chatbot is a computer program that simulates human conversation with an end user. Though not all chatbots are equipped with artificial intelligence (AI), modern chatbots increasingly use conversational AI techniques such as natural language processing (NLP) to understand the user’s questions and automate responses to them.
The value of chatbots
Chatbots can make it easy for users to find information by instantaneously responding to questions and requests—through text input, audio input, or both—without the need for human intervention or manual research.
Chatbot technology is now commonplace, found everywhere from smart speakers at home and consumer-facing instances of SMS, WhatsApp and Facebook Messenger, to workplace messaging applications including Slack. The latest evolution of AI chatbots, often referred to as “intelligent virtual assistants” or “virtual agents,” can not only understand free-flowing conversation through use of sophisticated language models, but even automate relevant tasks. Alongside well-known consumer-facing intelligent virtual assistants—such as Apple's Siri, Amazon Alexa, Google’s Gemini and OpenAI’s ChatGPT—virtual agents are also increasingly used in an enterprise context to assist customers and employees.
To increase the power of apps already in use, well-designed chatbots can be integrated into the software an organization is already using. For example, a chatbot can be added to Microsoft Teams to create and customize a productive hub where content, tools, and members come together to chat, meet and collaborate.
To get the most from an organization’s existing data, enterprise-grade chatbots can be integrated with critical systems and orchestrate workflows inside and outside of a CRM system. Chatbots can handle real-time actions as routine as a password change, all the way through a complex multi-step workflow spanning multiple applications. In addition, conversational analytics can analyze and extract insights from natural language conversations, typically between customers interacting with businesses through chatbots and virtual assistants.
Artificial intelligence can also be a powerful tool for developing conversational marketing strategies. AI chatbots are available to deliver customer care 24/7 and can discover insights into your customer’s engagement and buying patterns to drive more compelling conversations, and deliver more consistent and personalized digital experiences across your web and messaging channels.
How chatbots work
The earliest chatbots were essentially interactive FAQ programs, which relied on a limited set of common questions with pre-written answers. Unable to interpret natural language, these FAQs generally required users to select from simple keywords and phrases to move the conversation forward. Such rudimentary, traditional chatbots are unable to process complex questions, nor answer simple questions that haven’t been predicted by developers.
Over time, chatbot algorithms became capable of more complex rules-based programming and even natural language processing, enabling customer queries to be expressed in a conversational way. This gave rise to a new type of chatbot, contextually aware and armed with machine learning to continuously optimize its ability to correctly process and predict queries through exposure to more and more human language.
Modern AI chatbots now use natural language understanding (NLU) to discern the meaning of open-ended user input, overcoming anything from typos to translation issues. Advanced AI tools then map that meaning to the specific “intent” the user wants the chatbot to act upon and use conversational AI to formulate an appropriate response. These AI technologies leverage both machine learning and deep learning—different elements of AI, with some nuanced differences—to develop an increasingly granular knowledge base of questions and responses informed by user interactions. This sophistication, drawing upon recent advancements in large language models (LLMs), has led to increased customer satisfaction and more versatile chatbot applications.
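To make the progression concrete, here is a small plain-Python sketch of intent matching with simple typo tolerance. The intent names and example phrasings are made up for illustration; a production NLU system would use learned language models and embeddings rather than string similarity.

```python
import difflib

# Hypothetical intents mapped to example phrasings; a production NLU system
# would use learned embeddings rather than string similarity.
INTENTS = {
    "get_weather": ["what is the weather", "weather forecast", "will it rain"],
    "reset_password": ["reset my password", "forgot password", "change password"],
}

def classify_intent(user_input: str, threshold: float = 0.6) -> str:
    """Return the best-matching intent, or 'fallback' if nothing is close."""
    best_intent, best_score = "fallback", 0.0
    for intent, examples in INTENTS.items():
        for example in examples:
            score = difflib.SequenceMatcher(None, user_input.lower(), example).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else "fallback"

print(classify_intent("whats the weather forcast"))  # typo-tolerant: get_weather
print(classify_intent("please reset my password"))   # reset_password
```

The threshold keeps wildly unrelated input from being forced into an intent, which is the same role a confidence score plays in real NLU pipelines.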
The time it takes to build an AI chatbot can vary based on the technology stack and development tools being used, the complexity of the chatbot, the desired features, data availability and whether it needs to be integrated with other systems, databases or platforms. With a user-friendly, no-code/low-code platform, AI chatbots can be built even faster.
Chatbots vs. AI chatbots vs. virtual agents
The terms chatbot, AI chatbot and virtual agent are often used interchangeably, which can cause confusion. While the technologies these terms refer to are closely related, subtle distinctions yield important differences in their respective capabilities.
Chatbot is the most inclusive, catch-all term. Any software simulating human conversation, whether powered by traditional, rigid decision tree-style menu navigation or cutting-edge conversational AI, is a chatbot. Chatbots can be found across nearly any communication channel, from phone trees to social media to specific apps and websites.
AI chatbots are chatbots that employ a variety of AI technologies, from machine learning—composed of algorithms, features, and data sets—that optimizes responses over time, to natural language processing (NLP) and natural language understanding (NLU) that accurately interpret user questions and match them to specific intents. Deep learning capabilities enable AI chatbots to become more accurate over time, which in turn enables humans to interact with AI chatbots in a more natural, free-flowing way without being misunderstood.
Virtual agents are a further evolution of AI chatbot software that not only use conversational AI to conduct dialogue and deep learning to self-improve over time, but often pair those AI technologies with robotic process automation (RPA) in a single interface to act directly upon the user’s intent without further human intervention.
To help illustrate the distinctions, imagine that a user is curious about tomorrow’s weather. With a traditional chatbot, the user can use the specific phrase “tell me the weather forecast.” The chatbot says it will rain. With an AI chatbot, the user can ask, “What’s tomorrow’s weather lookin’ like?” The chatbot, correctly interpreting the question, says it will rain. With a virtual agent, the user can ask, “What’s tomorrow’s weather lookin’ like?”—and the virtual agent not only predicts tomorrow’s rain, but also offers to set an earlier alarm to account for rain delays in the morning commute.
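The virtual-agent step (recognizing an intent, then acting on it) can be sketched in a few lines of plain Python. The intent name, the fake forecast and the set_alarm helper are illustrative stand-ins for real NLU and RPA integrations.

```python
# Illustrative stand-ins: a real virtual agent would call a weather API and
# an RPA workflow here.
def get_forecast() -> str:
    return "rain"

def set_alarm(offset_minutes: int) -> str:
    return f"Alarm moved {offset_minutes} minutes earlier."

def handle(intent: str) -> str:
    if intent == "get_weather":
        forecast = get_forecast()
        reply = f"Tomorrow's forecast: {forecast}."
        # Acting on the intent's implication is what sets a virtual agent
        # apart from a chatbot that only answers.
        if forecast == "rain":
            reply += " " + set_alarm(15)
        return reply
    return "Sorry, I can't help with that yet."

print(handle("get_weather"))
# → Tomorrow's forecast: rain. Alarm moved 15 minutes earlier.
```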
Generative AI-powered chatbots
The next generation of chatbots with generative AI capabilities offers even more enhanced functionality: understanding of everyday language and complex queries, the ability to adapt to a user's style of conversation, and the use of empathy when answering users' questions. Business leaders can clearly see this future: 85% of executives say generative AI will be interacting directly with customers in the next two years, as reported in The CEO's guide to generative AI study from the IBM Institute for Business Value (IBV). An enterprise-grade artificial intelligence solution can empower companies to automate self-service and accelerate the development of exceptional user experiences.
FAQ chatbots no longer need to be pre-programmed with answers to set questions: It's easier and faster to use generative AI in combination with an organization's knowledge base to automatically generate answers in response to a wider range of questions.
While conversational AI chatbots can digest a user's questions or comments and generate a human-like response, generative AI chatbots can take this a step further by generating new content as the output. This new content can include high-quality text, images and sound based on the LLMs they are trained on. Chatbot interfaces with generative AI can recognize, summarize, translate, predict and create content in response to a user's query without the need for human interaction.
Enterprise-grade, self-learning generative AI chatbots built on a conversational AI platform are continually and automatically improving. They employ algorithms that automatically learn from past interactions how best to answer questions and improve conversation flow routing.
Common chatbot use cases
Consumers use AI chatbots for many kinds of tasks, from engaging with mobile apps to using purpose-built devices such as intelligent thermostats and smart kitchen appliances. Business uses are equally varied: Marketers use AI-powered chatbots to personalize customer experiences and streamline e-commerce operations; IT and HR teams use them to enable employee self-service; contact centers rely on chatbots to streamline incoming communications and direct customers to resources.
Conversational AI chatbots can remember conversations with users and incorporate this context into their interactions. When combined with automation capabilities including robotic process automation (RPA), users can accomplish complex tasks through the chatbot experience. And if a user is unhappy and needs to speak to a real person, the transfer can happen seamlessly. Upon transfer, the live support agent can get the full chatbot conversation history.
Conversational interfaces can vary, too. AI chatbots are commonly used in social media messaging apps, standalone messaging platforms, proprietary websites and apps, and even on phone calls (where they are also known as interactive voice response, or IVR).
Typical use cases include:
Timely, always-on assistance for customer service or human resource issues.
Personalized recommendations in an e-commerce context.
Promoting products and services with chatbot marketing.
Filling in fields within forms and financial applications.
Patient intake and appointment scheduling for healthcare offices.
Automated reminders for time- or location-based tasks.
Benefits of chatbots
The ability of AI chatbots to accurately process natural human language and automate personalized service in return creates clear benefits for businesses and customers alike.
Improve customer engagement and brand loyalty
Before the advent of chatbots, any customer questions, concerns or complaints—big or small—required a human response. Naturally, timely or even urgent customer issues sometimes arise off-hours, over the weekend or during a holiday. But staffing customer service departments to meet unpredictable demand, day or night, is a costly and difficult endeavor.
Today, chatbots can consistently manage customer interactions 24x7 while continuously improving the quality of the responses and keeping costs down. Chatbots automate workflows and free up employees from repetitive tasks. A chatbot can also eliminate long wait times for phone-based customer support, or even longer wait times for email, chat and web-based support, because they are available immediately to any number of users at once. That’s a great user experience—and satisfied customers are more likely to exhibit brand loyalty.
Reduce costs and boost operational efficiency
Staffing a customer support center day and night is expensive. Likewise, time spent answering repetitive queries (and the training required to keep those answers uniformly consistent) is also costly. Many enterprises outsource these functions overseas, but doing so carries its own significant cost and reduces control over a brand's interaction with its customers.
A chatbot, however, can answer questions 24 hours a day, seven days a week. It can provide a new first line of support, supplement support during peak periods, or offload tedious repetitive questions so human agents can focus on more complex issues. Chatbots can help reduce the number of users requiring human assistance, helping businesses scale up staffing more efficiently to meet increased demand or off-hours requests.
Generate leads and satisfy customers
Chatbots can help with sales lead generation and improve conversion rates. For example, a customer browsing a website for a product or service might have questions about different features, attributes or plans. A chatbot can provide these answers in situ, helping to progress the customer toward purchase. For more complex purchases with a multistep sales funnel, a chatbot can ask lead qualification questions and even connect the customer directly with a trained sales agent.
Risks and limitations of chatbots
Any advantage of a chatbot can be a disadvantage if the wrong platform, programming, or data are used. Traditional AI chatbots can provide quick customer service, but have limitations. Many rely on rule-based systems that automate tasks and provide predefined responses to customer inquiries.
Newer, generative AI chatbots can bring security risks, with the threat of data leakage, sub-standard confidentiality and liability concerns, intellectual property complexities, incomplete licensing of source data, and uncertain privacy and compliance with international laws. With a lack of proper input data, there is the ongoing risk of “hallucinations,” delivering inaccurate or irrelevant answers that require the customer to escalate the conversation to another channel.
Security and data leakage are a risk if sensitive third-party or internal company information entered into a generative AI chatbot becomes part of the chatbot's data model, where it might be surfaced to other users who ask related questions, violating an organization's security policies.
Best practices and tips for selecting chatbots
Selecting the right chatbot platform can have a significant payoff for both businesses and users. Users benefit from immediate, always-on support while businesses can better meet expectations without costly staff overhauls.
For example, an e-commerce company could deploy a chatbot to provide browsing customers with more detailed information about the products they’re viewing. The HR department of an enterprise organization might ask a developer to find a chatbot that can give employees integrated access to all of their self-service benefits. Software engineers might want to integrate an AI chatbot directly into their complex product.
Whatever the case or project, here are five best practices and tips for selecting a chatbot platform.
Pick a solution that can accomplish immediate goals but won’t limit future expansion. Why does a team want its own chatbot? How is this goal currently addressed, and what are the challenges that are driving the need for a chatbot? Does it offer templates to help organizations scale up and diversify chatbot offerings in the future, or will other teams need to develop something else from scratch? Does the interface enable superior chatbot design? Does the pricing allow for efficient internal expansion?
Understand the impact AI has on the customer experience. Chatbots are an expression of brand. The right AI can not only accurately understand what customers need and how those needs are being articulated, but be able to respond in a non-robotic way that reflects well on a business. Without the right AI tools, a chatbot is just a glorified FAQ.
Ask what it takes to build, train and improve chatbots over time. Does the organization need something simple and ready-made, or sophisticated API access for custom implementation? AI doesn’t train itself. Organizations need a clear sense of what content will arrive pre-built and what will need to be created in-house. Some chatbots offer the ability to use historical chatlogs and transcripts to create these intents, saving time. Those using machine learning can also automatically adjust and improve responses over time.
Look for ways to connect to, not replace, existing investments. Often, emerging channels or technologies seem like they will replace established ones. But instead, they become just another medium for an organization to manage. A chatbot that connects to these channels and customer case systems can provide the best of both worlds: Modernizing the customer experience while more accurately routing users to the information and individuals that can solve their problems.
Determine if the chatbot meets deployment, scalability and security requirements. Every organization and industry has its own unique compliance requirements and needs, so it’s important to have those criteria clearly defined. Many chatbots are delivered via the cloud to draw on the learnings and outcomes from other customer conversations, so if this requires an on-premises solution or a single tenant environment, the list of available providers is much shorter. It’s also important to understand if and how data is used, as it can have major impacts in highly regulated industries.
Local Chatbot
📚 Requirements.txt: LangChain, Llama2
This adventure dives into two powerful open-source tools: LangChain, your LLM orchestrator, and Llama 2, a state-of-the-art LLM. Together, they empower you to create a basic chatbot right on your own computer, unleashing the magic of LLMs in a local environment.
from langchain_community.llms import LlamaCpp
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# Load the LlamaCpp language model, adjust GPU usage based on your hardware
llm = LlamaCpp(
    model_path="e:/models/llama/llama-2-7b-chat.Q6_K.gguf",
    n_gpu_layers=40,  # Number of layers offloaded to the GPU
    n_batch=512,  # Batch size for model processing
)
# Define the prompt template with a placeholder for the question
template = """
Question: {question}
Answer:
"""
prompt = PromptTemplate(template=template, input_variables=["question"])
# Create an LLMChain to manage interactions with the prompt and model
llm_chain = LLMChain(prompt=prompt, llm=llm)
print("Chatbot initialized, ready to chat...")
while True:
    question = input("> ")
    answer = llm_chain.run(question)
    print(answer, '\n')
This code demonstrates a basic chatbot built with LangChain and Llama-2. LangChain is a framework for building LLM-powered applications, while Llama-2 is a powerful large language model (LLM) capable of generating text. Here's a breakdown of the steps involved:
1. Import Libraries: We start by importing the necessary libraries:
LlamaCpp from langchain_community.llms: This provides access to the Llama-2 LLM through a C++ interface.
PromptTemplate and LLMChain from langchain.prompts and langchain.chains respectively: These components help define the interaction flow with the LLM using prompts.
2. Load Llama-2 Model: The LlamaCpp object is created, specifying the path to the pre-trained Llama-2 model (e:/models/llama/llama-2-7b-chat.Q6_K.gguf) and adjusting the number of GPU layers (n_gpu_layers) and batch size (n_batch) based on your hardware capabilities.
3. Define Prompt Template: A PromptTemplate object is created. This template defines the structure of the prompt presented to the LLM, including placeholders for user input. Here, the template uses "Question:" followed by a placeholder {question} and an "Answer:" section.
4. Create LLMChain: An LLMChain object is created. This chain connects the prompt template and the Llama-2 model, essentially defining the workflow of how user questions will be processed by the LLM and how the answers will be generated.
5. Chatbot Loop: The code enters a loop where it:
- Prompts the user for a question with >.
- Uses the LLMChain.run(question) method to pass the user's question to the LLM chain.
- Prints the generated answer from the LLM.
This basic structure allows users to interact with the Llama-2 model through a question-and-answer format. You can further customize this code to explore different functionalities of LangChain and Llama-2.
Chatting with Your Own Data with RAG
📚 Resources: Llama2 + LangChain
For years, data has been trapped in text files, spreadsheets and databases, and we've analyzed it through traditional means: text editors, scripts, charts and graphs. With a technique called Retrieval-Augmented Generation (RAG), we can connect an LLM to our own data.
Chatbot with your own PDF
Should I Invest in Google Today?
📚 Resources: Llama2 + LangChain + Pinecone
How do you ask ChatGPT or other LLMs for investment advice?
Chat for Investment Advice
Collecting Data
📚 Resources: Llama2 + LangChain + Pinecone
Download five years of Google annual reports to the data folder.
We can now load those documents into memory with LangChain using just a few lines of code:
from langchain.document_loaders import DirectoryLoader
loader = DirectoryLoader(
    './Langchain/data/',  # my local directory
    glob='**/*.pdf',  # we only get pdfs
    show_progress=True
)
docs = loader.load()
Split them into chunks.
📚 Each chunk corresponds to an embedding vector.
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=0
)
docs_split = text_splitter.split_documents(docs)
docs_split
Storing the data chunk to Pinecone
📚 We upload the data to the vector database.
The default OpenAI embedding model used in LangChain is 'text-embedding-ada-002'. It converts data into embedding vectors. We can also use other embedding tools, such as Hugging Face embeddings.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_pinecone import PineconeVectorStore

# we use the free embedding model
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",
    model_kwargs={'device': 'cpu'}
)
doc_db = PineconeVectorStore.from_documents(
    docs_split,
    embeddings,
    index_name='d384'
)
Chat with PDF directly
📚 Resources: Annual Report
We can now search that database for relevant documents using the cosine similarity metric:
query = "What were the most important events for Google in 2021?"
search_docs = doc_db.similarity_search(query)
search_docs
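Cosine similarity itself is simple. The following plain-Python sketch, with made-up three-dimensional vectors standing in for real embedding vectors, shows how a vector store ranks chunks against a query.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up 3-d vectors; real sentence embeddings have hundreds of dimensions
query_vec = [0.9, 0.1, 0.0]
chunk_vecs = {
    "revenue section": [0.8, 0.2, 0.1],
    "risk section": [0.0, 0.1, 0.9],
}
ranked = sorted(chunk_vecs,
                key=lambda k: cosine_similarity(query_vec, chunk_vecs[k]),
                reverse=True)
print(ranked)  # → ['revenue section', 'risk section']
```

Chunks whose vectors point in nearly the same direction as the query vector score close to 1.0 and are returned first.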
Using RetrievalQA to get help from the LLM
📚 Resources: LLM
RetrievalQA is essentially a wrapper around a retrieval step plus a question-answering prompt. The "stuff" chain type inserts all retrieved documents into a single prompt, assuming the combined text fits into the model's context window:
from langchain.chains import RetrievalQA
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type='stuff',
    retriever=doc_db.as_retriever(),
)
query = "What were the earnings in 2022?"
result = qa.run(query)
result
> 'The total revenues for the full year 2022 were $282,836 million, with operating income and operating margin information not provided in the given context.'
Here the context is populated with the user's question and the retrieved documents found in the database. If the text is longer than the context window, you can use other chain types: "map_reduce", "refine" and "map_rerank".
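To illustrate the difference, here is a plain-Python sketch (not LangChain's internals) of the "stuff" versus "map_reduce" strategies. CONTEXT_WINDOW and fake_llm are illustrative stand-ins for a real token limit and model call.

```python
CONTEXT_WINDOW = 50  # characters, for illustration; real limits are in tokens

def fake_llm(prompt: str) -> str:
    return prompt[:20]  # stand-in: pretend the model condenses its input

def answer(docs, question):
    combined = " ".join(docs) + " " + question
    if len(combined) <= CONTEXT_WINDOW:  # "stuff": everything in one prompt
        return fake_llm(combined)
    # "map" step: condense each chunk separately against the question
    partials = [fake_llm(d + " " + question) for d in docs]
    # "reduce" step: combine the partial answers into a final prompt
    return fake_llm(" ".join(partials) + " " + question)

print(answer(["Google revenue grew."], "What happened?"))
# → Google revenue grew.
```

When the documents fit, "stuff" is cheapest (one model call); when they do not, "map_reduce" trades extra calls for staying under the window.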
Online Chatbot
📚 Resources: Llama-2 + Chainlit + Replicate
Chainlit is an open-source Python library designed to streamline the creation of production-ready chatbot applications. It focuses on managing user sessions and the events within each session, such as message exchanges and user queries. In Chainlit, each time a user connects to the application, a new session is initiated. This session comprises a series of events managed through the library's event-driven decorators. These decorators act as triggers that carry out specific actions based on user interactions.
The Chainlit application has decorators for several events (chat start, user message, session resume, session stop, etc.). For our chatbot, we’ll concentrate on writing code for two key events: starting a chat session and receiving user messages.
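A minimal sketch of those two decorators might look like the following; the greeting and the echo reply are placeholders for real model initialization and LLM calls. Save it as app.py and run it with `chainlit run app.py`.

```python
import chainlit as cl

@cl.on_chat_start
async def on_chat_start():
    # Runs once per new user session: initialize models, memory, etc.
    await cl.Message(content="Hi! Ask me anything.").send()

@cl.on_message
async def on_message(message: cl.Message):
    # Runs on every user message; here we echo instead of calling an LLM.
    await cl.Message(content=f"You said: {message.content}").send()
```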
Initializing the Chat
📚 Resources: LLM
Previously, our chatbot lacked context for any previous interactions. While this limitation can work in standalone question-answer applications, a conversational application typically requires the chatbot to have some understanding of the previous conversation. To overcome this limitation, we can create a memory object from one of LangChain’s memory modules, and add that to our chatbot code. LangChain offers several memory modules. The simplest is the ConversationBufferMemory, where we pass previous messages between the user and model in their raw form alongside the current query.
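Conceptually, a conversation buffer memory just prepends the raw transcript to each new query. The following plain-Python sketch illustrates the idea; it is not LangChain's actual implementation.

```python
class ConversationBuffer:
    """Toy stand-in for LangChain's ConversationBufferMemory."""

    def __init__(self):
        self.turns = []  # list of (speaker, text) pairs in raw form

    def add(self, speaker, text):
        self.turns.append((speaker, text))

    def as_prompt(self, question):
        # Prepend the full transcript so the model "remembers" prior turns
        history = "\n".join(f"{s}: {t}" for s, t in self.turns)
        return f"{history}\nHuman: {question}\nAI:"

memory = ConversationBuffer()
memory.add("Human", "My name is Ada.")
memory.add("AI", "Nice to meet you, Ada.")
prompt = memory.as_prompt("What is my name?")
print(prompt)
```

Because the whole transcript rides along with every query, the buffer grows with the conversation, which is why LangChain also offers windowed and summarizing memory variants.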