I have previously installed VALL-E X for voice cloning on my own PC in a Python virtual environment.
Today we will do this with Docker, running on WSL.
WSL (Windows Subsystem for Linux) is a feature that allows you to run a Linux operating system directly on Windows 10 or Windows 11. It lets Windows users use Linux command-line tools, utilities, and applications within the Windows environment. The following points should be kept in mind:
- No dual-booting required: With WSL, there is no need to reboot the PC to install Linux as a separate operating system; Linux runs directly within Windows.
- Access to developer tools: The WSL is especially useful for developers, allowing them to use programming languages, tools, and applications that run in a Linux environment on Windows.
- Easy setup: Since WSL is built into Windows, it is relatively easy to set up: download a Linux distribution (e.g., Ubuntu or Debian) from the Microsoft Store and you are up and running in a few steps (see the example below).
- File system sharing: The file system can be shared between Windows and Linux, so you can access and work with the same files in both environments.
- Performance: WSL provides high performance on Windows. In particular, WSL 2 offers significantly improved performance because it runs a real Linux kernel in a lightweight virtual machine.
Simply put, WSL is a powerful tool that enables Windows users to take advantage of Linux functionality. This makes it easy to work in both Windows and Linux environments, a feature especially useful for developers.
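For example, on a recent build of Windows 10 or 11, installing a distribution takes just two commands in PowerShell (a minimal sketch; the distribution name is one example):
wsl --install -d Ubuntu
wsl -l -v
The second command lists the installed distributions and shows whether each one runs under WSL 1 or WSL 2.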
Docker can be used to solve project dependency issues and provide a more consistent development and deployment environment. The main benefits are described below.
- Environment consistency and predictability: Docker provides an isolated environment called a “container.” This allows the environment I develop in to run as-is on someone else’s machine, reducing problems caused by dependencies or specific library versions.
- Independence from the host system: The Docker container is isolated from the host system. This allows applications to run independently of the host CUDA version and other system-specific settings. The result is better compatibility between users with different environments and different CUDA versions.
- Simplified Setup: Users simply install Docker and launch the container to obtain an environment with all necessary dependencies and settings. This eliminates the need to perform complex setup procedures.
- Consistency between development and production environments: Docker reduces problems caused by differences between development and production environments. The same container images can be used in development, test, and production environments to minimize differences between environments.
- Scalability and Ease of Management: Docker containers are lightweight and can be easily managed and scaled up to multiple containers. This makes it easy to manage large applications and microservices.
With Docker, developers can focus on their projects in a more consistent environment without having to worry about specific system configurations and dependencies. Users can also easily set up and run projects.
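As a small illustration of this consistency, running the same image yields the same environment on any machine with Docker (the image tag here is just an example):
docker run --rm python:3.11-slim python3 --version
Whoever runs this pulls the identical image and gets the identical Python, regardless of what is installed on the host.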
Prerequisites:
- Docker is already installed on WSL.
- The NVIDIA Container Toolkit is installed on WSL.
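To confirm the Docker prerequisite, a quick smoke test is:
docker --version
docker run --rm hello-world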
The NVIDIA Container Toolkit is very important for Docker and GPU integration. Its main points are described below:
- GPU support: The NVIDIA Container Toolkit provides a mechanism to utilize NVIDIA GPUs directly within Docker containers. This allows applications and machine learning models that require GPUs to run efficiently within containers.
- CUDA Integration: The toolkit is also tightly integrated with CUDA, allowing access to the CUDA libraries from within a Docker container. This makes it easy to develop and deploy CUDA-enabled applications.
- Flexibility and portability: Using containers, applications can run consistently across different systems and different CUDA versions. This is especially important for applications that require GPU compute resources.
- Easy setup: The NVIDIA Container Toolkit eliminates the need for users to include GPU drivers in the container image. As long as the driver is properly set up on the host system, the container can use it.
- End-to-end GPU support: NVIDIA provides holistic GPU support for Docker containers, helping you through the process from development to deployment to scaling.
The NVIDIA Container Toolkit makes it significantly easier to develop and deploy GPU-enabled applications, letting you efficiently perform machine learning, data science, and other GPU-intensive tasks.
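A common way to verify that the toolkit is wired up correctly is to run nvidia-smi from inside a CUDA container (the image tag below is one example):
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
If this prints your GPU table, containers can see the GPU.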
We will work from the following GitHub page.
https://github.com/Plachtaa/VALL-E-X
For clarity, create an appropriate directory and navigate to it. Create a Dockerfile and script in that directory.
mkdir vall
cd vall
sudo nano Dockerfile
The contents of the Dockerfile are as follows. After several rounds of improvement, this version finally works.
# Use the official PyTorch image (CUDA 11.8 wheels are installed explicitly below)
FROM pytorch/pytorch:latest
# Set environment variable to disable timezone prompt
ENV DEBIAN_FRONTEND=noninteractive
# Install necessary packages (including FFmpeg and git)
RUN apt-get update && \
    apt-get install -y python3-pip ffmpeg git && \
    pip3 install --upgrade pip
# Install PyTorch, torchvision, torchaudio with CUDA 11.8
RUN pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
# Upgrade Gradio
RUN pip install --upgrade gradio
# Set working directory
WORKDIR /usr/src/app
# Clone the VALL-E-X repository
RUN git clone https://github.com/Plachtaa/VALL-E-X.git
# Change working directory to the cloned repository
WORKDIR /usr/src/app/VALL-E-X
# Install required packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# FFmpeg installed via apt lives in /usr/bin, which is already on PATH, so no PATH change is needed
# Copy the locally modified launch-ui.py into the container
COPY launch-ui.py .
# Create checkpoints and whisper directories
RUN mkdir -p ./checkpoints ./whisper
# Define an environment variable (not used by the app)
ENV NAME=World
# Execute launch-ui.py when the container starts
CMD ["python3", "launch-ui.py"]
The script is a modified version of launch-ui.py from GitHub, saved in the same location as the Dockerfile. I needed to add code to allow access from non-local machines, and I made some other modifications as well.
sudo nano launch-ui.py
It is long, so I made it available for download at the following page.
https://minokamo.tokyo/youtube
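For reference, the essential change is usually a single line at the bottom of the script, in Gradio's launch call. Here is a minimal sketch; app stands for the Gradio interface object the script builds, and the actual variable name in launch-ui.py may differ:
# Listen on all interfaces so the UI is reachable from outside the container,
# and also request a temporary public gradio.live link.
app.launch(server_name="0.0.0.0", server_port=7860, share=True)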
Build an image based on the Dockerfile.
docker build -t valle .
Launch the container using the created image.
docker run -d --name vall --gpus all -p 7860:7860 valle
One thing to note: the container appears to be up immediately, but if you access it right away with a browser, you may get an error, because the models are still being downloaded. You can check the progress with the following command:
docker logs -f vall
The docker logs -f <container name> command displays the logs of a specific container in real time. Here is what each part means:
- docker logs: This command instructs Docker to retrieve the logs for a given container. All standard output (stdout) and standard error output (stderr) produced by applications and processes run by the container will be displayed.
- -f: This option stands for “follow,” which means to track logs in real time. This means that as new logs are output from the container, they are immediately displayed on the command line. This is useful for monitoring the current behavior of your application.
- Container Name: This is the name of the Docker container for which you want to view logs; in Docker, each container is assigned a unique name. By specifying this name, you can view the logs for that particular container.
Simply put, docker logs -f <container name> displays the logs of a given Docker container in real time so you can monitor what is happening inside it. This is very useful for debugging and monitoring your system.
Once enough time has passed, you can access it from a browser. The WSL IP address can be found with, for example, the ip a command.
http://172.25.209.126:7860
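For example, run either of the following inside WSL (eth0 is the typical interface name under WSL 2, but it may differ):
ip -4 addr show eth0
hostname -I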
Environments running on Windows Subsystem for Linux (WSL) have different network settings than the host Windows environment. This has several important aspects:
- Private IP address: A WSL instance is typically assigned a private IP address that differs from the host Windows system, because WSL has its own network interface and resides on a separate subnet from the host OS.
- Network isolation: Because WSL has its own IP address, it is network-isolated; to access services and applications running within WSL, you must use that IP address.
- Communication and port forwarding: Communication between WSL and the host Windows is possible, but port forwarding and certain network settings may need to be configured properly. This is especially important for external access to servers running within WSL (an example follows below).
- Development environment implications: Development setups, such as Python servers, need to account for differences in network access between programs running on WSL and programs running on the host OS. This is especially important when setting up a local development environment.
While WSL is a useful tool for running a Linux environment on Windows, differences in network settings and IP addresses may require additional configuration and adjustments in certain situations.
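If other devices on your LAN need to reach the service running inside WSL, one common approach is a port proxy on the Windows side. This is a sketch; run it in an elevated PowerShell and substitute your own WSL IP address:
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=7860 connectaddress=172.25.209.126 connectport=7860
You may also need to allow TCP port 7860 through the Windows firewall.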
If you run the Python script manually inside the container, you will see the messages below. To enter the container, type the following command:
docker exec -ti vall bash
root@709a765560ce:/usr/src/app/VALL-E-X# python3 launch-ui.py
default encoding is utf-8, file system encoding is utf-8
You are using Python version 3.10.13
Use 20 cpu cores for computing
/opt/conda/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
  warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
Running on local URL: http://0.0.0.0:7861
Running on public URL: https://a8d8a69c61a01967a3.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
This message means that the application (the Gradio-based UI) has generated a temporary public URL.
Specifically,
- Local URL: http://0.0.0.0:7861 is the address the application listens on inside the container; 0.0.0.0 means all network interfaces, so to reach it from another device on the same network, use the host's IP address with this port.
- Public URL: https://a8d8a69c61a01967a3.gradio.live is a temporary public URL that can be accessed from anywhere on the Internet. It is provided by the Gradio service and offers a way to share your application with external users.
- 72-hour expiration: This public URL expires after 72 hours. If you need permanent hosting or GPU upgrades, consider deploying to Hugging Face's Spaces platform.
In summary, the displayed message indicates that the application running in the container has generated a temporary public URL that can now be accessed from anywhere on the Internet. This is a common way to easily share applications using Gradio or similar tools.
The next time you want to use VALL-E X, for example after the computer has been shut down, start the container with the following command:
docker start vall
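When you are finished, stop it with:
docker stop vall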