Building a Simple Conversational Chat Application Using Gemini-Pro
In this blog post, we’ll walk you through how to create a simple conversational chat application using Gemini-Pro, an advanced AI model provided by Google’s Generative AI platform. Whether you’re new to AI or experienced in building apps, this step-by-step guide will help you understand the basics and build your own chatbot in no time.
By the end of this guide, you'll have a fully functional chat app that lets users interact with Gemini-Pro to ask questions and get AI-powered responses.
What is Gemini-Pro?
Before we dive into the code, let's talk about what Gemini-Pro is. Google Gemini is an AI model designed for natural language understanding and generation. It can help with tasks like answering questions, providing recommendations, and carrying out conversations in a human-like manner.
With Gemini-Pro, you can create powerful chatbots that can have meaningful conversations with users. Whether it’s for customer support, virtual assistants, or just a fun chatbot, Gemini-Pro is a great tool to get started with.
Why Use Streamlit?
We’re using Streamlit to build the chat application because it's an easy-to-use framework for building web applications with Python. Streamlit makes it simple to build and share beautiful apps for machine learning and data science. You don’t need any frontend development skills like HTML or JavaScript — just Python!
Let’s Get Started!
Now that you know the basics, let's move on to building our chat app. We’ll break it down into small, easy steps so you can follow along.
Step 1 - Set Up Your Environment
Before we start coding, we need to set up the environment. First, make sure you have Python installed on your machine.
Then, install the required libraries by running this command:
```shell
pip install streamlit python-dotenv google-generativeai streamlit-extras
```

Here’s what each package does:
- `streamlit` - Helps us build the web interface.
- `python-dotenv` - Allows us to load environment variables from a `.env` file.
- `google-generativeai` - Provides the API for interacting with Gemini-Pro.
- `streamlit-extras` - Provides extra UI features like colored headers and spacing.
Step 2 - Configure Your API Key
To use Google’s Generative AI (Gemini), you’ll need an API key. If you don’t have one yet, refer to this blog post for instructions on getting an API key.
Once you have your API key, create a `.env` file in your project directory and add the following:

```
GOOGLE_API_KEY="your_google_api_key"
```
This will securely store your API key and make it accessible in your application without hardcoding it into the source code.
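To see what `load_dotenv()` is doing for us behind the scenes, here is a minimal stand-in written with only the standard library. The `load_env_file` function below is purely illustrative (it is not python-dotenv’s actual implementation, and it only handles simple `KEY="value"` lines):

```python
import os
import tempfile

def load_env_file(path):
    """Illustrative stand-in for python-dotenv's load_dotenv():
    read simple KEY="value" lines and put them into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Demo with a temporary .env file
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write('GOOGLE_API_KEY="your_google_api_key"\n')
    path = f.name

load_env_file(path)
print(os.getenv("GOOGLE_API_KEY"))  # -> your_google_api_key
os.unlink(path)  # clean up the demo file
```

In the real app we simply call `load_dotenv()` once at startup and read the key with `os.getenv("GOOGLE_API_KEY")`.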
Step 3 - Writing the Code
Here’s the full code for our chat app. Don't worry — I’ll explain it step by step afterward.
```python
import streamlit as st
import os
from dotenv import load_dotenv
import google.generativeai as genai
from streamlit_extras.colored_header import colored_header
from streamlit_extras.add_vertical_space import add_vertical_space
import uuid

# Load environment variables
load_dotenv()

# Configure Gemini AI
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))

# Function to get a streaming Gemini response
def get_gemini_response(question, chat_model):
    response = chat_model.send_message(question, stream=True)
    return response

# Set page config
st.set_page_config(page_title="Gemini Chat", layout="wide")

# Custom CSS for styling
st.markdown("""
<style>
.main {
    background-color: #1E1E1E;
    color: #FFFFFF;
}
.stTextInput > div > div > input {
    background-color: #2D2D2D;
    color: #FFFFFF;
}
.stButton > button {
    background-color: #4CAF50;
    color: #FFFFFF;
}
</style>
""", unsafe_allow_html=True)

# Initialize session state for conversations
if 'conversations' not in st.session_state:
    st.session_state['conversations'] = []
if 'current_conversation' not in st.session_state:
    st.session_state['current_conversation'] = None

# Sidebar for conversations
with st.sidebar:
    st.title("Conversations")
    if st.button("New Chat", key="new_chat"):
        new_id = str(uuid.uuid4())
        st.session_state['conversations'].append({
            'id': new_id,
            'title': f"New Chat {len(st.session_state['conversations']) + 1}",
            'messages': []
        })
        st.session_state['current_conversation'] = new_id
    for convo in st.session_state['conversations'][-5:]:
        if st.button(convo['title'], key=f"convo_{convo['id']}"):
            st.session_state['current_conversation'] = convo['id']

# Main chat interface
col1, col2 = st.columns([3, 1])

with col1:
    colored_header(label="Gemini Chat", description="Chat with Gemini AI", color_name="green-70")
    add_vertical_space(2)

    if st.session_state['current_conversation']:
        current_convo = next(
            (c for c in st.session_state['conversations']
             if c['id'] == st.session_state['current_conversation']),
            None
        )
        if current_convo:
            # Replay the stored messages for this conversation
            for message in current_convo['messages']:
                with st.chat_message(message['role']):
                    st.write(message['content'])

            user_input = st.chat_input("Type your message here...")
            if user_input:
                current_convo['messages'].append({"role": "user", "content": user_input})
                with st.chat_message("user"):
                    st.write(user_input)
                with st.chat_message("assistant"):
                    message_placeholder = st.empty()
                    full_response = ""
                    # Rebuild the Gemini chat history from the stored messages
                    # (everything before the message we just appended), so the
                    # model remembers earlier turns in this conversation.
                    history = [
                        {"role": "user" if m["role"] == "user" else "model",
                         "parts": [m["content"]]}
                        for m in current_convo['messages'][:-1]
                    ]
                    chat_model = genai.GenerativeModel('gemini-pro').start_chat(history=history)
                    # Stream the reply chunk by chunk, showing a cursor while typing
                    for chunk in get_gemini_response(user_input, chat_model):
                        full_response += chunk.text
                        message_placeholder.markdown(full_response + "▌")
                    message_placeholder.markdown(full_response)
                current_convo['messages'].append({"role": "assistant", "content": full_response})

with col2:
    st.subheader("About")
    st.write("This is a modern chat interface for Gemini AI. Ask any question and get AI-powered responses.")
```
Step 4 - Running the App
Now that we’ve written the code, let’s run the app! In your terminal, navigate to the project directory and run:

```shell
streamlit run app.py
```
This command will launch your Streamlit app in the browser. You’ll see a clean and simple interface where you can start a new chat or continue an existing conversation.
Understanding the Code
Let’s go over the key parts of the code:
- Loading Environment Variables - We use `load_dotenv()` to load our API key from the `.env` file, keeping it secure.
- Configuring Gemini-Pro - `genai.configure(api_key=...)` sets up the API key to authenticate with the Gemini-Pro model.
- Session State - We use `st.session_state` to store chat history and current conversations, ensuring that the chat persists as users navigate through the app.
- Sidebar - In the sidebar, users can start a new chat or continue an existing one. The chat history for each conversation is stored and displayed.
- Chat Interface - The main interface is where the user types messages and sees responses from Gemini-Pro. When a user sends a message, it gets added to the conversation, and we call the `get_gemini_response()` function to get the AI's reply.
- Custom Styling - We added some custom CSS to give the chat interface a dark background and styled text input fields.
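The streaming loop deserves a closer look: each chunk returned by `send_message(..., stream=True)` carries a `.text` attribute, and the app accumulates those pieces into `full_response`, re-rendering the placeholder with a cursor character after each one. The pattern can be sketched without calling the API at all, using a stand-in chunk object (`FakeChunk` is hypothetical, just for illustration):

```python
class FakeChunk:
    """Stand-in for a streamed Gemini chunk, which exposes a .text attribute."""
    def __init__(self, text):
        self.text = text

def accumulate(chunks):
    """Accumulate streamed chunks the way the app builds full_response."""
    full = ""
    for chunk in chunks:
        full += chunk.text
        # In the app, this is where we re-render the placeholder:
        # message_placeholder.markdown(full + "▌")
    return full

chunks = [FakeChunk("Hello"), FakeChunk(", "), FakeChunk("world!")]
print(accumulate(chunks))  # Hello, world!
```

This incremental re-rendering is what makes the reply appear to "type itself out" in the browser instead of arriving all at once.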
Congratulations! You’ve just built a simple conversational chat application using Gemini-Pro. This project can be the starting point for more complex AI-powered chatbots. Whether you're building customer support bots, virtual assistants, or just experimenting with AI, this tutorial gives you the tools to get started.
Feel free to modify the app to add more features, like saving chat histories, supporting multiple languages, or even connecting it to external APIs for more functionality.
Happy coding!