Quickly build a simple frontend for an LLM agent

After hand-coding an LLM agent, how do you put a frontend on it to show what it can do? Writing your own frontend from scratch, or wiring the agent into dify just to see it run, is too much work. Instead, you can use streamlit to quickly stand up a frontend service that connects to the agent.

The result first

Below is the frontend built with streamlit: send "你好" ("Hello") to deepseek, and deepseek streams both its reasoning process and the final response back to the frontend.

The code

The code below sends the user's request through the OpenAI-compatible API to deepseek hosted on Aliyun (DashScope), then uses streamlit to display deepseek's reasoning process and final response.

Note that where streamlit renders an element depends on the order in which it was added: elements added earlier appear higher on the page. For example, the code creates the reasoning placeholder reasoning_placeholder before the response placeholder response_placeholder, so when you open the streamlit page in a browser, the reasoning process appears above the final response.
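The per-chunk handling inside the streaming loop of the code below can be sketched on its own: each streamed delta may carry answer text (`content`), reasoning text (`reasoning_content`), both, or neither, and the two are accumulated separately. The mock chunks here are hypothetical stand-ins for the real API stream, just to make the accumulation logic concrete:

```python
from types import SimpleNamespace

def accumulate(chunks):
    """Accumulate reasoning and answer text from a stream of deltas."""
    full_reasoning, full_response = "", ""
    for chunk in chunks:
        delta = chunk.choices[0].delta
        # A delta may carry reasoning text, answer text, both, or neither;
        # getattr with a default tolerates chunks missing either field.
        reasoning = getattr(delta, "reasoning_content", None)
        content = getattr(delta, "content", None)
        if reasoning:
            full_reasoning += reasoning
        if content:
            full_response += content
    return full_reasoning, full_response

def mock_chunk(reasoning=None, content=None):
    """Build a fake chunk shaped like an OpenAI streaming chunk."""
    delta = SimpleNamespace(reasoning_content=reasoning, content=content)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

stream = [mock_chunk(reasoning="thinking... "),
          mock_chunk(reasoning="done."),
          mock_chunk(content="Hello"),
          mock_chunk(content="!")]
print(accumulate(stream))  # ('thinking... done.', 'Hello!')
```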

import streamlit as st
from openai import OpenAI
import os

# Page configuration
st.set_page_config(
    page_title="DeepSeek-R1 Chat Interface",
    page_icon="",
    layout="wide"
)

# OpenAI-compatible client for the DashScope (Aliyun) endpoint
api_base = "https://dashscope.aliyuncs.com/compatible-mode/v1"
client = OpenAI(
    api_key=os.getenv("ALI_API_KEY"),
    base_url=api_base
)


def get_response(prompt):
    """Send the prompt to deepseek-r1 and return the streaming response."""
    try:
        response = client.chat.completions.create(
            model="deepseek-r1",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            stream=True
        )
        return response
    except Exception as e:
        st.error(f"Error occurred: {str(e)}")
        return None


# Page title
st.title("DeepSeek-R1 Chat Interface")
st.markdown("---")

# User input
user_input = st.text_area("Enter your prompt:", height=100)

# Send button
if st.button("Send"):
    if user_input:
        full_response = ""
        full_reasoning = ""

        # Fetch the streaming response
        response = get_response(user_input)

        if response:
            # Create the placeholders inside the expander so the streamed
            # reasoning trace and final answer actually render within it
            with st.expander("Show detailed inference process", expanded=True):
                # Placeholder for the reasoning trace (added first, shown on top)
                reasoning_placeholder = st.empty()
                # Placeholder for the final answer
                response_placeholder = st.empty()
                for chunk in response:
                    delta = chunk.choices[0].delta
                    # Answer text for this chunk (may be None or absent)
                    content = getattr(delta, 'content', None)
                    # Reasoning text for this chunk (may be None or absent)
                    reasoning = getattr(delta, 'reasoning_content', None)

                    # Append and re-render the answer so far
                    if content:
                        full_response += content
                        response_placeholder.markdown(
                            f"**Response:**\n{full_response}")

                    # Append and re-render the reasoning so far
                    if reasoning:
                        full_reasoning += reasoning
                        reasoning_placeholder.markdown(
                            f"**Reasoning Process:**\n{full_reasoning}")

    else:
        st.warning("Please enter a prompt first.")

# Sidebar settings
with st.sidebar:
    st.title("Chat Settings")
    st.markdown("---")
    if st.button("Clear Chat History"):
        st.session_state.messages = []
        st.rerun()

# Footer
st.markdown("---")
st.markdown("*Powered by DeepSeek-R1 · Built with Streamlit*")

Run streamlit run app.py, then open http://localhost:8502 in your local browser to see the interface shown in the section above.

# streamlit run app.py
Collecting usage statistics. To deactivate, set browser.gatherUsageStats to false.


  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8502
  Network URL: http://10.36.10.121:8502
  External URL: http://51.140.101.125:8502
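The startup log mentions usage statistics and shows the port streamlit picked. If you want to pin the port and disable telemetry, streamlit reads a .streamlit/config.toml file next to your app; a minimal sketch (the keys below are the browser.gatherUsageStats setting named in the log itself, plus the standard server.port option):

```toml
# .streamlit/config.toml
[browser]
# Disable the usage-statistics collection mentioned in the startup log
gatherUsageStats = false

[server]
# Pin the app to the port shown in the log instead of letting streamlit choose
port = 8502
```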



streamlit is very handy for locally validating the LLM applications you have built; it is well worth playing with.
