
AdvancedLivePortrait-WebUI

A dedicated Gradio-based WebUI, started from ComfyUI-AdvancedLivePortrait.
You can edit facial expressions in an image.

Demo video: Demo.mov

Notebook

You can try it in Colab

  • colab

Installation And Running

Prerequisites

  1. 3.9 <= python <= 3.12 : https://www.python.org/downloads/release/python-3110/
  2. (Opitonal, only if you're using Nvidia GPU) CUDA 12.4 : https://developer.nvidia.com/cuda-12-4-0-download-archive?target_os=Windows
  3. (Optional, only needed if you use Video Driven) FFmpeg: https://ffmpeg.org/download.html
    After installing FFmpeg, make sure to add the FFmpeg/bin folder to your system PATH!
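
If you want to verify the prerequisites before continuing, the following commands print the installed versions (nvcc and ffmpeg are only relevant if you installed the optional CUDA and FFmpeg components):

python --version
nvcc --version
ffmpeg -version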

Run Locally

  1. Clone this repository
git clone https://github.com/jhj0517/AdvancedLivePortrait-WebUI.git
  2. Install dependencies (use requirements-cpu.txt if you're not using an Nvidia GPU)
pip install -r requirements.txt
  3. Run the app
python app.py
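
If you prefer to keep the dependencies isolated, the same steps can be run inside a virtual environment (this is roughly what the PowerShell scripts below do for you). A minimal sketch, assuming a Linux/macOS shell and an arbitrary venv folder name:

git clone https://github.com/jhj0517/AdvancedLivePortrait-WebUI.git
cd AdvancedLivePortrait-WebUI
python -m venv venv
source venv/bin/activate    # on Windows: venv\Scripts\activate
pip install -r requirements.txt    # or requirements-cpu.txt without an Nvidia GPU
python app.py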

Run with PowerShell

There are PowerShell scripts for each purpose: Install.ps1, Start-WebUI.ps1 and Update.ps1.
They perform the same steps as above inside a venv: creating and activating the venv, installing the dependencies, and running the app.

If you're using Windows, right-click the script and then click on Run with PowerShell.
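
If script execution is blocked by your PowerShell execution policy, you can also run the scripts from a PowerShell prompt and bypass the policy for that single invocation, for example (adjust the paths if the scripts live in a subfolder of the repository):

powershell -ExecutionPolicy Bypass -File .\Install.ps1
powershell -ExecutionPolicy Bypass -File .\Start-WebUI.ps1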

Run with Docker

  1. Clone this repository
git clone https://github.com/jhj0517/AdvancedLivePortrait-WebUI.git
  2. Build the image
docker compose -f docker/docker-compose.yaml build
  3. Run the container
docker compose -f docker/docker-compose.yaml up
  4. Connect to http://localhost:7860/ in your browser.

If you're not using an Nvidia GPU, update docker/docker-compose.yaml to match your environment.
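
If you do want GPU acceleration inside Docker, the container needs access to your Nvidia GPU, which typically means installing the NVIDIA Container Toolkit on the host. An optional sanity check that Docker can see the GPU (the CUDA image tag here is only an example):

docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi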

🌐 Translation

Any PRs that add language translations to translation.yaml would be greatly appreciated!

❤️ Acknowledgement

  1. The LivePortrait paper:
@article{guo2024liveportrait,
  title   = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
  author  = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
  journal = {arXiv preprint arXiv:2407.03168},
  year    = {2024}
}
  2. The models are safetensors converted by kijai: https://github.com/kijai/ComfyUI-LivePortraitKJ
  3. ultralytics is used to detect the face.
  4. This WebUI started from ComfyUI-AdvancedLivePortrait; the various facial expressions such as AAA, EEE, Eyebrow and Wink were found by PowerHouseMan.
  5. RealESRGAN is used for image restoration.