Deployment Guide
Getting Started
After receiving your license file and gaining access to our container registry, you can follow the steps below to get Redact-Edge up and running.
Step 1: Pull the Docker Image
Once you've been granted access to the container registry, you will need to pull the Docker image that corresponds to the targeted version of Redact-Edge.
- Log in to our container registry. We provide you with a key for this purpose. Use the following command to log in:
docker login docker.brighter.ai -u enterprise -p <key>
- Use the following command to pull the image, replacing <redact_version> with the specific version of the image you are targeting:
docker pull docker.brighter.ai/redact-edge:<redact_version>
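To confirm the pull succeeded, you can list the local copies of the image with the standard Docker CLI (an optional quick check, not a required step):
docker images docker.brighter.ai/redact-edge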
Step 2: Run the Docker Container
After the Docker image is pulled, you can run the container using the following command:
docker run -it -d \
-v <input_folder_path>:/input \
-v <output_folder_path>:/output \
-v <license_file_path>:/root/license.json \
-p <rtsp_output_stream_host_port>:<rtsp_output_stream_container_port> \
docker.brighter.ai/redact-edge:<redact_version>
Here’s a description of the placeholders in the command:
- <input_folder_path>: The local directory on your host system that will be mounted to /input inside the container. This is where the container will access the input data for processing.
- <output_folder_path>: The local directory on your host system that will be mounted to /output inside the container. This is where the container will store the processed output data.
- <license_file_path>: The location of the license file you received from brighterAI. It should be mounted to /root/license.json inside the container to activate Redact-Edge.
- <rtsp_output_stream_host_port>: The port on the host machine where the RTSP stream will be exposed.
- <rtsp_output_stream_container_port>: The port inside the container used to run the anonymization pipeline. This port must be exposed by the container so that the host machine (and potentially external clients) can access the anonymized RTSP stream.
- <redact_version>: The version of the Redact-Edge Docker image.
This will start the container and launch Redact-Edge. Since DeepStream requires optimized models, the container will begin optimizing the models upon startup.
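Before running the command, it can help to make sure the paths you plan to mount exist on the host. The snippet below is only a sketch using the same placeholders as above; adjust it to wherever your data and license file actually live:
# create the host folders that will be mounted into the container
mkdir -p <input_folder_path> <output_folder_path>
# the license file must already exist at <license_file_path>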
Example Usage:
If your input data is in /home/user/data/input, your output data should be saved in /home/user/data/output, and your license file is located at /home/user/license/license.json, you would run:
docker run -d -it \
-v /home/user/data/input:/input \
-v /home/user/data/output:/output \
-v /home/user/license/license.json:/root/license.json \
-p 8555:8555 \
docker.brighter.ai/redact-edge:0.9.0-aarch64
Make sure to replace the image tag (0.9.0-aarch64 in this example) with the version you are using. This command mounts the input and output directories and the license file as specified, publishes port 8555 for the RTSP stream, and starts the container in detached mode.
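To verify that the container came up, you can list running containers started from the image; the filter below assumes the example tag 0.9.0-aarch64:
docker ps --filter ancestor=docker.brighter.ai/redact-edge:0.9.0-aarch64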
Step 3: Wait for Model Optimization
The optimization process can take some time, depending on your hardware. On Jetson devices, the process can take up to 30 minutes.
Important: During the optimization process, do not interfere with the container. This means no restarting, stopping, or modifying the container. You can monitor the progress by viewing the Docker logs:
docker logs -f <container_id>
Wait until you see the message "Container started" in the logs, which indicates that the optimization is complete and the container is fully operational.
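If you want a script to wait for this message rather than watching the logs manually, one simple approach (a convenience sketch, not part of Redact-Edge) is to poll the logs until the message appears:
# poll the container logs every 10 seconds until the startup message is printed
until docker logs <container_id> 2>&1 | grep -q "Container started"; do
sleep 10
done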
Once the container has started, Redact-Edge is ready for use!
Troubleshooting
If the container does not start and the logs do not include the message "Optimization for model finished.", this likely indicates an issue with the TensorRT (TRT) model optimization. Follow these steps to resolve the issue:
- Check GPU Accessibility: Ensure the GPU is accessible within the container by verifying that the NVIDIA runtime is set. You can do this either by configuring it as the default runtime in daemon.json or by adding --runtime=nvidia and --gpus all to the docker run command (see the example after this list).
- Check Logs for Driver Issues: Access the logs in /tmp/logs/*.log to identify any driver-related issues. These logs can provide insights into potential compatibility or driver problems impacting container startup.
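For reference, the sketch below shows how the GPU flags can be added to the run command from Step 2 and how the optimization logs could be inspected from the host; it assumes a POSIX shell and tail are available inside the image:
# explicitly request the NVIDIA runtime and all GPUs when starting the container
docker run -it -d --runtime=nvidia --gpus all \
-v <input_folder_path>:/input \
-v <output_folder_path>:/output \
-v <license_file_path>:/root/license.json \
-p <rtsp_output_stream_host_port>:<rtsp_output_stream_container_port> \
docker.brighter.ai/redact-edge:<redact_version>
# inspect the most recent optimization logs inside the running container
docker exec <container_id> sh -c 'tail -n 50 /tmp/logs/*.log'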