
Intel Arc support #38

Open · Atinoda opened this issue Feb 11, 2024 · 6 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@Atinoda
Owner

Atinoda commented Feb 11, 2024

Intel Arc GPUs have their own images now, according to the developments in the upstream project. I do not have the hardware to test them - so please give them a go! Reports are welcomed.

Atinoda added the enhancement and help wanted labels on Feb 11, 2024
Atinoda pinned this issue on Feb 26, 2024
@ksullivan86

ksullivan86 commented Mar 30, 2024

I was testing this out but I couldn't get it to work with my A770 16GB. I was able to run CPU-only (on the Arc docker image), but even with CPU unchecked and 10 GPU layers added, only the CPU would be active during use. I am guessing I am doing something wrong. I am happy to help if I can, just let me know what I can do.

I am wondering if the problem is the integrated GPU in the 14700K causing the A770 not to be used; I have seen this mentioned a few times.

@Atinoda
Owner Author

Atinoda commented Mar 30, 2024

@ksullivan86 - thank you for testing it out and reporting back your experiences! I'll check into what is required to make the card available to the container. Would you mind sharing what OS you are using, and any changes that you made to the docker-compose.yml? I would be keen to help you get this up and running, then we can share the results here for other people too.

@ksullivan86

ksullivan86 commented Apr 1, 2024

I am using Unraid 6.12.8, but with the custom 6.7 kernel (thor) for Arc support.
Here is my compose file; I don't think there is anything special. I do have `- /dev/dri:/dev/dri` set up for access to the Arc, but I think that will also expose the iGPU from my 14700K.
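One way to avoid exposing the iGPU alongside the Arc would be to map only the discrete card's DRM nodes instead of all of `/dev/dri`. This is a hypothetical sketch, not from this thread; the device indices are assumptions and vary per host, so check which nodes belong to the A770 first:

```yaml
# Sketch: pass only the A770's DRM nodes to the container.
# Find the right indices on the host first, e.g.:
#   ls -l /dev/dri/by-path   # match against the PCI address from `lspci | grep -i arc`
services:
  text-generation-webui:
    devices:
      - /dev/dri/card1:/dev/dri/card1            # assumed: A770 is the second card
      - /dev/dri/renderD129:/dev/dri/renderD129  # assumed: its matching render node
```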

Compose file: https://gist.github.com/ksullivan86/2d6f43b9341a77f87594c1be3532f929

@Atinoda
Owner Author

Atinoda commented Apr 9, 2024

Sorry for my delayed reply, and thank you for sharing the information! Although I don't have experience with Unraid, there has been a successful deployment story at #27 with AMD hardware, and you have already got it running with CPU so we'll proceed on the assumption that everything is working well. Do you know if docker runs with root privileges on Unraid? It may be necessary to grant additional group membership if the runner account is limited in privileges.

Can you please try using the following docker-compose.yml? I have added parameters that blow the doors off with regards to security and container isolation, but the idea is to see if this works first, then pare back afterwards:

version: "3"
services:
  text-generation-webui:
    image: atinoda/text-generation-webui:default-arc  # Specify variant as the :tag
    container_name: text-generation-webui-arc
    network_mode: docker_network
    environment:
      - TZ=America/Los_Angeles
      - EXTRA_LAUNCH_ARGS="--listen --verbose" # Custom launch args (e.g., --model MODEL_NAME)
#      - BUILD_EXTENSIONS_LIVE="silero_tts whisper_stt" # Install named extensions during every container launch. THIS WILL SIGNIFICANTLY SLOW LAUNCH TIME.
      - PUID=99
      - PGID=100
    ports:
      - 7867:7860  # Default web port
      - 5200:5000  # Default API port
      - 5205:5005  # Default streaming port
      - 5201:5001  # Default OpenAI API extension port
    # labels:
    #   traefik.enable: true  # allows traefik reverse proxy to see the app
    #   traefik.http.routers.text-generation-webui-cpu.entryPoints: https
    #   traefik.http.services.text-generation-webui-cpu.loadbalancer.server.port: 7860 # specifies port for traefik to route
    volumes:
      - /mnt/user/ai/text_generation_webui/models:/app/models
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/characters:/app/characters
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/loras:/app/loras
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/presets:/app/presets
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/prompts:/app/prompts
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/training:/app/training
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/extensions:/app/extensions  # Persist all extensions
#      - ./config/extensions/silero_tts:/app/extensions/silero_tts  # Persist a single extension
    logging:
      driver:  json-file
      options:
        max-file: "3"   # number of rotated log files to keep
        max-size: '10m'
    restart: unless-stopped
    # NEW PARAMS:
    group_add:
      - video
    tty: true
    ipc: host
    devices:
      - /dev/kfd  # AMD compute device; may not exist on Intel-only hosts
      - /dev/dri
    cap_add: 
      - SYS_PTRACE
    security_opt:
      - seccomp=unconfined
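Once the container is up, it may be worth confirming that the devices actually made it inside. These commands assume the container name from the compose file above, and `sycl-ls` is only present if the image ships Intel's oneAPI runtime:

```shell
# Check that the DRM nodes are visible inside the container
docker exec -it text-generation-webui-arc ls -l /dev/dri

# If the oneAPI runtime is installed in the image, the Arc should appear
# as a Level Zero / OpenCL device here (clinfo is an alternative check)
docker exec -it text-generation-webui-arc sycl-ls
```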

@FirestarDrive

Just got the WebUI to work on an Arc A770M and to load GGUF models onto the GPU. It churns out 12 tok/s on OpenOrca 7B Q5 (pretty good for a laptop card, huh?)

Turns out, llama-cpp-python needs to be built using Intel's compiler to recognise the GPU as per NineMeowICT's excellent findings.

Here's my build; perhaps it can be integrated with yours?
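For reference, a build along those lines might look like the following. This is a sketch based on llama.cpp's SYCL backend documentation, not the exact recipe from the linked build; the oneAPI path is an assumption, and older releases used `-DLLAMA_SYCL=on` instead of `-DGGML_SYCL=on`:

```shell
# 1) Put Intel's oneAPI compilers (icx/icpx) on the PATH
source /opt/intel/oneapi/setvars.sh

# 2) Rebuild llama-cpp-python against the SYCL backend with the Intel compiler
CMAKE_ARGS="-DGGML_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" \
  pip install --no-cache-dir --force-reinstall llama-cpp-python
```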

@Atinoda
Owner Author

Atinoda commented Jun 8, 2024

Hi @FirestarDrive, thank you for the report and the links to the working build! I will use this information to develop the arc variant to get it up and running.

3 participants