Streamlit
To set up Streamlit in a Docker container, create a Dockerfile with these steps:
Start by copying the requirements.txt file into the container and installing the required Python packages:
# Copy requirements.txt into the container
COPY requirements.txt /app/requirements.txt
# Install Python packages from requirements.txt
RUN mamba install --yes --file requirements.txt && mamba clean --all -f -y
Set up environment variables for running the Streamlit server, customizing the URL path and port as needed:
ENV STREAMLIT_SERVER_BASEURLPATH=/team1
ENV STREAMLIT_SERVER_PORT=5001
Expose the port that Streamlit will run on:
# Streamlit port
EXPOSE 5001
Define the command to run the Streamlit app on startup:
ENV PATH=/opt/miniforge/envs/team1_env/bin:$PATH
ENTRYPOINT ["python"]
CMD ["app.py"]
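Putting the steps together, a complete Dockerfile might look like the following. This is a sketch: the base image (condaforge/miniforge3) and the WORKDIR and COPY lines for the application code are assumptions based on the snippets above, and may differ in your setup.

```dockerfile
# Sketch of a complete Dockerfile assembling the steps above.
# Base image and application-copy steps are assumptions; adjust to your setup.
FROM condaforge/miniforge3

WORKDIR /app

# Copy requirements.txt into the container
COPY requirements.txt /app/requirements.txt

# Install Python packages from requirements.txt
RUN mamba install --yes --file requirements.txt && mamba clean --all -f -y

# Configure the Streamlit server
ENV STREAMLIT_SERVER_BASEURLPATH=/team1
ENV STREAMLIT_SERVER_PORT=5001

# Streamlit port
EXPOSE 5001

# Copy the application code into the container
COPY . /app

# Make the environment's binaries available and run the app on startup
ENV PATH=/opt/miniforge/envs/team1_env/bin:$PATH
ENTRYPOINT ["python"]
CMD ["app.py"]
```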
Once running, the app will be accessible at http://localhost:5001/team1.
To add custom styling, load the style.css file within app.py:
def load_css(file_name):
    """
    Load a CSS file to style the app.

    Args:
        file_name (str): css file path
    """
    try:
        with open(file_name) as f:
            st.markdown(f'<style>{f.read()}</style>', unsafe_allow_html=True)
    except FileNotFoundError:
        st.error(f"CSS file '{file_name}' not found.")
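The Streamlit-independent part of this pattern is easy to test in isolation: the helper below (a hypothetical name, not part of the app) just wraps the file contents in a `<style>` tag, which is exactly the string load_css passes to st.markdown.

```python
def build_style_tag(css_path):
    """Read a CSS file and wrap its contents in a <style> tag.

    Hypothetical helper illustrating the string load_css hands to st.markdown.
    """
    with open(css_path) as f:
        return f"<style>{f.read()}</style>"
```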
Import Streamlit and other necessary libraries in app.py:
import streamlit as st
import os
import subprocess
from uuid import uuid4
from RAG import *
The main application logic is encapsulated within the main() function, which sets up the Streamlit interface, loads custom CSS, handles user input, and displays responses.
The load_css() function loads an external CSS file (assets/style.css) to style the chatbot's appearance. This enhances UI elements and customizes button styles.
Streamlit containers are used to define the chatbot’s title and layout. Additionally, custom CSS is added for styling assistant and feedback messages:
header = st.container()
header.write("""<div class='chat-title'>Team 1 Support Chatbot</div>""", unsafe_allow_html=True)
header.write("""<div class='fixed-header'/>""", unsafe_allow_html=True)
The confusion matrix shows the counts of True Positives, False Positives, True Negatives, and False Negatives. It updates as users interact with the chatbot:
confusion_matrix = {
    "True Positives": 0,
    "False Positives": 0,
    "True Negatives": 0,
    "False Negatives": 0
}

def update_confusion_matrix(key, increment=1):
    """
    Updates the confusion matrix based on feedback.

    Args:
        key (str): Metric to update (e.g., 'True Positives').
        increment (int): Value to add to the metric (default is 1).
    """
    confusion_matrix[key] += increment

st.write("### Confusion Matrix")
for key, value in confusion_matrix.items():
    st.write(f"{key}: {value}")
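Once the four counts are tracked, summary metrics can be derived from the same dictionary. The helpers below are an illustrative sketch (not part of the app) showing how precision and recall would be computed from it:

```python
def precision(cm):
    """Fraction of positive answers that were correct: TP / (TP + FP)."""
    tp, fp = cm["True Positives"], cm["False Positives"]
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(cm):
    """Fraction of answerable questions answered correctly: TP / (TP + FN)."""
    tp, fn = cm["True Positives"], cm["False Negatives"]
    return tp / (tp + fn) if (tp + fn) else 0.0
```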
User messages are input using the Streamlit st.chat_input widget, which stores messages in st.session_state to enable persistent storage across reruns. The chatbot retrieves the response through the query_rag function and displays it alongside user messages:
## 3.5 Handle user input
if prompt := st.chat_input("Message Team1 support chatbot"):
    # creating user_message_id and assistant_message_id with the same unique "id" because they are related
    unique_id = str(uuid4())
    user_message_id = f"user_message_{unique_id}"
    assistant_message_id = f"assistant_message_{unique_id}"
    # save the user message in the session state
    st.session_state.messages[user_message_id] = {"role": "user", "content": prompt}
    st.markdown(f"<div class='user-message'>{prompt}</div>", unsafe_allow_html=True)
    response_placeholder = st.empty()
    with response_placeholder.container():
        with st.spinner('Generating Response...'):
            # generate response from RAG model
            answer, sources = query_rag(prompt)
            # removing the sources from the answer for keyword extraction
            # main_answer = answer.split("\n\nSources:")[0].strip()
            # total_text = prompt + " " + main_answer
            # extract_keywords(total_text)
            if sources == []:
                st.error(f"{answer}")
            else:
                st.session_state.messages[assistant_message_id] = {"role": "assistant", "content": answer, "sources": sources}
                st.rerun()
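The paired-ID scheme above lets the feedback handler recover a user message from its assistant counterpart with a single string replacement. A minimal sketch of that round trip (the helper names are illustrative, not from the app):

```python
from uuid import uuid4

def make_message_ids():
    """Create paired user/assistant message IDs sharing one UUID."""
    unique_id = str(uuid4())
    return f"user_message_{unique_id}", f"assistant_message_{unique_id}"

def user_id_for(assistant_message_id):
    """Recover the user message ID paired with an assistant message ID."""
    return assistant_message_id.replace("assistant_message", "user_message", 1)
```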
Each response includes "like" and "dislike" feedback buttons to gather user sentiment on assistant messages. This feature uses handle_feedback() to update message feedback in st.session_state:
def handle_feedback(assistant_message_id):
    """
    Handle feedback for a message.

    Args:
        assistant_message_id (str): The unique ID of the assistant message
    """
    previous_feedback = st.session_state.messages[assistant_message_id].get("feedback", None)
    feedback = st.session_state.get(f"feedback_{assistant_message_id}", None)
    user_message_id = assistant_message_id.replace("assistant_message", "user_message", 1)
    question = st.session_state.messages[user_message_id]["content"]
    if question.lower().strip() in answerable_questions:
        if feedback == 1:
            if previous_feedback is None:
                db_client.increment_performance_metric("true_positive")
            elif previous_feedback == "dislike":
                db_client.increment_performance_metric("false_negative", -1)
                db_client.increment_performance_metric("true_positive")
            st.session_state.messages[assistant_message_id]["feedback"] = "like"
        elif feedback == 0:
            if previous_feedback is None:
                db_client.increment_performance_metric("false_negative")
            elif previous_feedback == "like":
                db_client.increment_performance_metric("true_positive", -1)
                db_client.increment_performance_metric("false_negative")
            st.session_state.messages[assistant_message_id]["feedback"] = "dislike"
        else:
            if previous_feedback == "like":
                db_client.increment_performance_metric("true_positive", -1)
            elif previous_feedback == "dislike":
                db_client.increment_performance_metric("false_negative", -1)
            st.session_state.messages[assistant_message_id]["feedback"] = None
    elif question.lower().strip() in unanswerable_questions:
        if feedback == 1:
            if previous_feedback is None:
                db_client.increment_performance_metric("true_negative")
            elif previous_feedback == "dislike":
                db_client.increment_performance_metric("false_positive", -1)
                db_client.increment_performance_metric("true_negative")
            st.session_state.messages[assistant_message_id]["feedback"] = "like"
        elif feedback == 0:
            if previous_feedback is None:
                db_client.increment_performance_metric("false_positive")
            elif previous_feedback == "like":
                db_client.increment_performance_metric("true_negative", -1)
                db_client.increment_performance_metric("false_positive")
            st.session_state.messages[assistant_message_id]["feedback"] = "dislike"
        else:
            if previous_feedback == "like":
                db_client.increment_performance_metric("true_negative", -1)
            elif previous_feedback == "dislike":
                db_client.increment_performance_metric("false_positive", -1)
            st.session_state.messages[assistant_message_id]["feedback"] = None
    db_client.update_performance_metrics()
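The branching above can be distilled into a pure function that returns the metric deltas for a feedback transition, which makes the logic easy to unit-test without a database client. This is an illustrative refactor, not code from the app; the metric names match the db_client calls above.

```python
def feedback_deltas(answerable, feedback, previous_feedback):
    """Return {metric: delta} for one feedback transition.

    answerable: whether the question has a known answer.
    feedback: 1 (like), 0 (dislike), or None (feedback cleared).
    previous_feedback: "like", "dislike", or None.
    """
    # A "like" records a correct outcome and a "dislike" an incorrect one;
    # which metric that maps to depends on whether the question is answerable.
    good = "true_positive" if answerable else "true_negative"
    bad = "false_negative" if answerable else "false_positive"
    deltas = {}
    # Undo whatever the previous feedback recorded
    if previous_feedback == "like":
        deltas[good] = deltas.get(good, 0) - 1
    elif previous_feedback == "dislike":
        deltas[bad] = deltas.get(bad, 0) - 1
    # Record the new feedback
    if feedback == 1:
        deltas[good] = deltas.get(good, 0) + 1
    elif feedback == 0:
        deltas[bad] = deltas.get(bad, 0) + 1
    # Drop no-op entries (e.g., a repeated "like")
    return {k: v for k, v in deltas.items() if v != 0}
```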
Finally, the main function runs the Streamlit app with a specified server configuration. The environment variable STREAMLIT_RUNNING ensures that only a single instance is initiated, and a specified port and server address are set for deployment:
if __name__ == "__main__":
    # If a Streamlit instance is already running
    if os.environ.get("STREAMLIT_RUNNING") == "1":
        main()
    else:
        os.environ["STREAMLIT_RUNNING"] = "1"  # Set the environment variable to indicate Streamlit is running
        # if multiple processes are being started, you must use Popen followed by run subprocess!
        subprocess.Popen(["streamlit", "run", __file__, "--server.port=5001", "--server.address=0.0.0.0", "--server.baseUrlPath=/team1"])
        subprocess.run(["jupyter", "notebook", "--ip=0.0.0.0", "--port=6001", "--no-browser", "--allow-root", "--NotebookApp.base_url=/team1/jupyter"])
Run the Docker container containing your Streamlit app, and it should automatically launch the application on the specified port (e.g., 5001). You can access it via:
http://localhost:5001/team1
http://127.0.0.1:5001/team1
The application dynamically displays a confusion matrix summarizing chatbot performance. This is shown in the main app body:
# Initialize and display the confusion matrix
confusion_matrix = {
    "True Positives": 0,
    "False Positives": 0,
    "True Negatives": 0,
    "False Negatives": 0
}

st.write("### Confusion Matrix")
for key, value in confusion_matrix.items():
    st.write(f"{key}: {value}")
The chatbot uses the st.chat_input widget for user inputs, creating a conversational experience:
user_input = st.chat_input("Message Team1 support chatbot")
The conversation is displayed sequentially using the st.chat_message widget, ensuring a smooth flow of communication between the user and the chatbot:
if "messages" not in st.session_state:
    st.session_state.messages = {}

# Messages are stored in a dict keyed by message ID, so iterate over the values
for message in st.session_state.messages.values():
    with st.chat_message(message["role"]):
        st.markdown(message["content"])
User feedback on chatbot responses is collected using interactive buttons. These buttons allow users to indicate whether they found a response helpful or not:
if st.button("Like", key=f"like_{assistant_message_id}"):
    # Record the widget value where handle_feedback expects it (1 = like)
    st.session_state[f"feedback_{assistant_message_id}"] = 1
    handle_feedback(assistant_message_id)
if st.button("Dislike", key=f"dislike_{assistant_message_id}"):
    # 0 = dislike
    st.session_state[f"feedback_{assistant_message_id}"] = 0
    handle_feedback(assistant_message_id)
Here are some common issues and solutions:
- Issue: You encounter an error indicating that the port is already in use.
- Solution: Change the port by updating the Dockerfile’s port setting, or use the following command when starting the container:
docker run -p 5001:5001 <your_image>
This ensures the application runs on port 5001, or another port of your choosing.
- Issue: Streamlit can’t find installed libraries, even though they’re in the container.
- Solution: Ensure the environment’s bin directory is correctly added to the PATH in your Dockerfile:
ENV PATH=/opt/miniforge/envs/team1_env/bin:$PATH
This ensures that the libraries installed in the container are accessible to the Streamlit app.
- Issue: The app crashes immediately upon running.
- Solution: Check the following:
  - Ensure requirements.txt includes all necessary packages, including Streamlit.
  - Verify there are no syntax errors in app.py.
  - Use docker logs <container_id> to view logs and identify the issue.
- Issue: You can’t access the app at the specified URL.
- Solution:
  - Confirm that Docker Desktop or Docker Engine is running.
  - Verify that you’re accessing the correct port (e.g., 5001) as configured in the Dockerfile.
  - Check that no other applications are using the same port.
- Issue: Modifications made to app.py or other files are not reflected when you run the app.
- Solution: Rebuild the Docker container to ensure changes are applied:
docker build -t <your_image_name> .
docker run -p 5001:5001 <your_image_name>
If developing locally without Docker, clear the Streamlit cache by running:
streamlit cache clear
- Issue: CSS styling or themes are not applying as expected.
- Solution: Ensure that the CSS file is correctly located and loaded using st.markdown. If needed, use absolute paths for referencing the file.
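One way to make the CSS path robust to the working directory is to resolve it relative to the script itself. A sketch (the assets/style.css location is the one assumed earlier in this guide, and the helper name is illustrative):

```python
import os

def css_path_for(script_file):
    """Resolve assets/style.css relative to the given script file."""
    base_dir = os.path.dirname(os.path.abspath(script_file))
    return os.path.join(base_dir, "assets", "style.css")
```

In app.py this would be used as load_css(css_path_for(__file__)), so the stylesheet is found regardless of where the container starts the process.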