nicolad/email-finder

AI SDK, Next.js, and FastAPI Examples

These examples show you how to use the AI SDK with Next.js and FastAPI.

How to use

Execute create-next-app with npm, Yarn, or pnpm to bootstrap the example:

npx create-next-app --example https://github.com/vercel/ai/tree/main/examples/next-fastapi next-fastapi-app
yarn create next-app --example https://github.com/vercel/ai/tree/main/examples/next-fastapi next-fastapi-app
pnpm create next-app --example https://github.com/vercel/ai/tree/main/examples/next-fastapi next-fastapi-app

You will also need Python 3.6+ and virtualenv installed to run the FastAPI server.

To run the example locally you need to:

  1. Sign up at OpenAI's Developer Platform.
  2. Go to OpenAI's dashboard and create an API key.
  3. Copy the required environment variables from the example env file into a new file called .env.local.
  4. Run virtualenv venv to create a Python virtual environment.
  5. Run source venv/bin/activate to activate the virtual environment.
  6. Run pip install -r requirements.txt to install the required Python dependencies.
  7. Run pnpm install to install the required Node.js dependencies.
  8. Run pnpm dev to launch the development server.
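
For orientation, here is a minimal sketch of what the FastAPI side of such a setup can look like. This is illustrative only, not the example's actual code: the file name, the /api/chat route, and the gpt-4o-mini model are assumptions, and it expects OPENAI_API_KEY to be set (for example via .env.local).

    # api/index.py -- hypothetical minimal FastAPI backend (not the example's exact code)
    import os

    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse
    from openai import OpenAI
    from pydantic import BaseModel

    app = FastAPI()
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    class ChatRequest(BaseModel):
        prompt: str

    @app.post("/api/chat")
    def chat(request: ChatRequest):
        # Ask OpenAI for a streamed completion and forward the chunks
        # to the Next.js frontend as plain text.
        stream = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; swap in any chat model you can access
            messages=[{"role": "user", "content": request.prompt}],
            stream=True,
        )

        def generate():
            for chunk in stream:
                delta = chunk.choices[0].delta.content
                if delta:
                    yield delta

        return StreamingResponse(generate(), media_type="text/plain")

Depending on your setup, the FastAPI app can be started on its own with uvicorn (e.g. uvicorn api.index:app --reload) while pnpm dev serves the Next.js frontend.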

Learn More

To learn more about the AI SDK, Next.js, and FastAPI, take a look at the following resources:

  • AI SDK documentation: https://sdk.vercel.ai/docs
  • Next.js documentation: https://nextjs.org/docs
  • FastAPI documentation: https://fastapi.tiangolo.com

email-finder

Below is a brief README-style guide to running DeepSeek-R1 locally with Ollama.


DeepSeek-R1: Local Setup Guide

This guide explains how to run DeepSeek-R1 on your local machine using Ollama.

1. Install Ollama

  1. Download: Visit the Ollama website (https://ollama.com) and download the installer for your operating system.
  2. Install: Install Ollama as you would any other application.

2. Download and Test DeepSeek-R1

  1. Open Terminal: Launch your terminal or command prompt.

  2. Run the Model:

    ollama run deepseek-r1

    This command downloads the DeepSeek-R1 model (default size) on first run and starts an interactive chat session.

  3. Alternate Model Sizes (optional):

    ollama run deepseek-r1:<size>b

    Replace <size> with 1.5, 7, 8, 14, 32, 70, or 671 to download/run smaller or larger versions.

3. Run DeepSeek-R1 as a Service

To keep DeepSeek-R1 running in the background and serve requests via an API:

ollama serve

This exposes DeepSeek-R1 at http://localhost:11434/api/chat for integration with other applications.

4. Test via CLI and API

  • CLI: Once the server is running, type:
    ollama run deepseek-r1
  • API: Use curl to chat with DeepSeek-R1 via the local server (a Python version follows below):
    curl http://localhost:11434/api/chat -d '{
      "model": "deepseek-r1",
      "messages": [{ "role": "user", "content": "Hello DeepSeek, how are you?" }],
      "stream": false
    }'
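
The same request can be issued from Python with the requests library; a small sketch, assuming the server from step 3 is running:

    import requests

    # POST the same payload as the curl example to the local Ollama server.
    response = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "deepseek-r1",
            "messages": [{"role": "user", "content": "Hello DeepSeek, how are you?"}],
            "stream": False,
        },
        timeout=120,
    )
    response.raise_for_status()

    # With "stream": false, Ollama returns one JSON object whose
    # "message" field holds the assistant's reply.
    print(response.json()["message"]["content"])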

5. Next Steps

  • Python Integration: Use the ollama Python package to integrate DeepSeek-R1 into applications:

    import ollama

    # Send one chat message to the locally served DeepSeek-R1 model
    # and print the assistant's reply.
    response = ollama.chat(
        model="deepseek-r1",
        messages=[{"role": "user", "content": "Hi DeepSeek!"}],
    )
    print(response["message"]["content"])
  • Gradio App: Build a simple web interface (e.g., for RAG tasks) using Gradio; see the sketch below.
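
As a starting point for that Gradio app, a minimal chat interface wired to the local model might look like this (a sketch; the function name is illustrative, and both gradio and the ollama package must be installed):

    import gradio as gr
    import ollama

    def answer(message, history):
        # Forward the user's message to the local DeepSeek-R1 model.
        response = ollama.chat(
            model="deepseek-r1",
            messages=[{"role": "user", "content": message}],
        )
        return response["message"]["content"]

    # ChatInterface wraps the function above in a ready-made chat UI.
    gr.ChatInterface(answer).launch()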

For more details on prompt construction, chunk splitting, or building retrieval-based applications (RAG), refer to the official documentation and tutorials.
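
As a taste of the chunk-splitting step mentioned above, a naive fixed-size splitter with overlap might look like this (illustrative only; real pipelines usually split on sentence or token boundaries):

    def split_into_chunks(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
        # Step through the text in fixed-size windows, overlapping each
        # window with the previous one so boundary context is preserved.
        step = chunk_size - overlap
        return [text[i:i + chunk_size] for i in range(0, len(text), step)]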

