add test script #2199
Conversation
Signed-off-by: Ubuntu <azureuser@denvr-inf.kifxisxbiwme5gt4kkwqsfdjuh.dx.internal.cloudapp.net>
Dependency Review: ✅ No vulnerabilities or license issues found. Scanned files: none.
for more information, see https://pre-commit.ci
    fi
    }

    function stop_docker() {
add a line to stop the vllm-cpu docker container
    fi
    }

    function stop_docker() {
add a line to stop the vllm-cpu docker container
    "stream=False"
    }

    function stop_docker() {
add a line to stop the vllm-cpu docker container
    fi
    }

    function stop_docker() {
add a line to stop the vllm-cpu docker container
    # Start Docker Containers
    docker compose -f compose_remote.yaml -f compose.telemetry.yaml up -d --quiet-pull > ${LOG_PATH}/start_services_with_compose_remote.log
    }
add logic to wait for the longest docker container to spin up, usually the backend service. example: https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/tests/test_compose_on_xeon.sh#L47
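The linked wait logic can be generalized into a small polling helper. A minimal sketch, assuming the script checks container logs or a health endpoint; the function name and the example commands in the comments are illustrative, not taken from this PR:

```shell
#!/bin/bash
# Poll a command until it succeeds or a timeout expires, instead of a fixed
# sleep. In the test scripts the polled command would be something like
#   docker logs <backend-container> 2>&1 | grep -q "Application startup complete"
# (container name and log line are assumptions here).
wait_until_ready() {
  local max_seconds=$1; shift
  local waited=0
  until "$@"; do
    if [ "$waited" -ge "$max_seconds" ]; then
      echo "Timed out after ${max_seconds}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
    waited=$((waited + 1))
  done
}
```

Usage would look like `wait_until_ready 300 curl -sf http://localhost:8888/v1/health_check` (the port and path are placeholders for whatever the backend service exposes).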
    export API_KEY=$TEST_KEY
    export LLM_MODEL_ID=TinyLlama/TinyLlama-1.1B-Chat-v1.0
    # Start Docker Containers
    docker compose -f compose_remote.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
add logic to wait for the longest docker container to spin up, usually the backend service
    export API_KEY=$TEST_KEY
    export LLM_MODEL_ID=TinyLlama/TinyLlama-1.1B-Chat-v1.0
    docker compose -f compose_remote.yaml up -d > ${LOG_PATH}/start_services_with_compose_remote.log
    sleep 1m
From experience, sleeping for 1 minute is enough for the Docker containers to spin up. But if you want to make it a closed-loop check, add logic to wait for the longest-starting container, usually the backend service.
    # Start Docker Containers
    docker compose -f compose_remote.yaml up -d > ${LOG_PATH}/start_services_with_compose_remote.log
    sleep 30s
Sleeping for 30 seconds is ok, but if you want to make it a closed-loop check, add logic to wait for the longest-starting container, usually the backend service.
TEST_KEY needs to be set somewhere; the vllm-cpu Docker image used as a dummy remote endpoint uses a fixed-version image; and logic needs to be added to wait for the Docker containers to spin up before proceeding to validate the services.
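On the TEST_KEY point, a hedged sketch of a fail-fast guard the scripts could run before starting the dummy endpoint; the function name is illustrative, not from the PR:

```shell
#!/bin/bash
# Fail fast when TEST_KEY is missing rather than starting the dummy
# vllm-cpu endpoint with an empty --api-key. A sketch, not the PR's code.
check_test_key() {
  if [ -z "${TEST_KEY:-}" ]; then
    echo "TEST_KEY must be set before running this test script" >&2
    return 1
  fi
}
```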
    echo "Build all the images with --no-cache, check docker_image_build.log for details..."
    service_list="chatqna chatqna-ui dataprep retriever nginx"
    docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
    docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
Why is this docker run command inside the docker build function? You could move it to start_services and do the sleep in between.
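A sketch of the restructuring this comment suggests: build_docker_images() stays limited to building, and the dummy vllm-cpu endpoint starts inside start_services(). The function and variable names mirror the PR's scripts, but the exact bodies and the container name here are assumptions:

```shell
#!/bin/bash
# Hypothetical split, not the actual change in the PR.
build_docker_images() {
  docker compose -f build.yaml build ${service_list} --no-cache > "${LOG_PATH}/docker_image_build.log"
}

start_services() {
  # -d backgrounds the dummy endpoint; --name makes later cleanup deterministic.
  docker run -d --name vllm-dummy-endpoint --env "VLLM_SKIP_WARMUP=true" \
    -p 8000:8000 --ipc=host \
    public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 \
    --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key "$TEST_KEY"
  sleep 30  # or a closed-loop readiness check
}
```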
Pull Request Overview
This PR adds test scripts for remote endpoint testing across different OPEA services. The scripts are designed to validate remote LLM endpoint configurations for ChatQnA, CodeGen, DocSum, and ProductivitySuite services running on Intel Xeon processors.
Key changes include:
- Addition of comprehensive test scripts for four different OPEA services with remote endpoint support
- Implementation of microservice and megaservice validation functions
- Integration of frontend testing capabilities for UI components
Reviewed Changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 9 comments.
File | Description
---|---
ProductivitySuite/tests/test_compose_remote_on_xeon.sh | Test script for ProductivitySuite with remote endpoint validation, including ChatQnA and CodeGen services |
DocSum/tests/test_compose_remote_on_xeon.sh | Document summarization service test script supporting text, audio, and video input formats |
CodeGen/tests/test_compose_remote_on_xeon.sh | Code generation service test script with microservice validation and frontend testing |
ChatQnA/tests/test_compose_remote_on_xeon.sh | Chat Q&A service test script with embedding, retrieval, and reranking microservice validation |
    IMAGE_REPO=${IMAGE_REPO:-"opea"}
    IMAGE_TAG=${IMAGE_TAG:-"latest"}
    echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
    echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
The echo statements are displaying variable names instead of values. Should be echo "REGISTRY=${IMAGE_REPO}" and echo "TAG=${IMAGE_TAG}".
Suggested change:
    -    echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
    +    echo "REGISTRY=${IMAGE_REPO}"
    +    echo "TAG=${IMAGE_TAG}"
    WORKPATH=$(dirname "$PWD")
    LOG_PATH="$WORKPATH/tests"
    echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
    echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
The echo statements are displaying variable names instead of values. Should be echo "REGISTRY=${IMAGE_REPO}" and echo "TAG=${IMAGE_TAG}".
Suggested change:
    -    echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
    +    echo "REGISTRY=${IMAGE_REPO}"
    +    echo "TAG=${IMAGE_TAG}"
    IMAGE_REPO=${IMAGE_REPO:-"opea"}
    IMAGE_TAG=${IMAGE_TAG:-"latest"}
    echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
    echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
The echo statements are displaying variable names instead of values. Should be echo "REGISTRY=${IMAGE_REPO}" and echo "TAG=${IMAGE_TAG}".
Suggested change:
    -    echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
    +    echo "REGISTRY=${IMAGE_REPO}"
    +    echo "TAG=${IMAGE_TAG}"
    IMAGE_REPO=${IMAGE_REPO:-"opea"}
    IMAGE_TAG=${IMAGE_TAG:-"latest"}
    echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
    echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
The echo statements are displaying variable names instead of values. Should be echo "REGISTRY=${IMAGE_REPO}" and echo "TAG=${IMAGE_TAG}".
Suggested change:
    -    echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
    +    echo "REGISTRY=${IMAGE_REPO}"
    +    echo "TAG=${IMAGE_TAG}"
    echo "::group::validate_frontend"
    validate_frontend
    echo "::endgroup::"
The validate_frontend function is called but never defined in this script, which will cause a runtime error.
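For reference, one possible shape for the missing function, modeled loosely on other OPEA example test scripts; the ui path and the npm/playwright steps are assumptions, since the PR does not define it:

```shell
#!/bin/bash
# Hedged sketch of a validate_frontend: install UI dependencies and run the
# end-to-end suite, propagating any failure to the caller. The directory
# layout and test runner are illustrative, not confirmed by this PR.
validate_frontend() {
  cd "$WORKPATH/ui/svelte" || return 1
  npm install >/dev/null || return 1
  npx playwright test || return 1
}
```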
    echo "Build all the images with --no-cache, check docker_image_build.log for details..."
    docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log
    docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
    docker images && sleep 1s
This Docker run command is executed without proper backgrounding or container cleanup. Consider running in detached mode with the -d flag and implementing proper container lifecycle management.
Suggested change:
    -    docker images && sleep 1s
    +    container_id=$(docker run -d --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY)
    +    docker images && sleep 1s
    +    # Cleanup the container after use
    +    docker stop $container_id
    +    docker rm $container_id
    service_list="docsum docsum-gradio-ui whisper llm-docsum"
    docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
    docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
    docker images && sleep 1s
This Docker run command is executed without proper backgrounding or container cleanup. Consider running in detached mode with the -d flag and implementing proper container lifecycle management.
Suggested change:
    -    docker images && sleep 1s
    +    VLLM_CONTAINER_ID=$(docker run -d --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY)
    +    docker images && sleep 1s
    +    # Cleanup: stop and remove the container
    +    docker stop $VLLM_CONTAINER_ID
    +    docker rm $VLLM_CONTAINER_ID
    docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
    docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
    docker images && sleep 1s
This Docker run command is executed without proper backgrounding or container cleanup. Consider running in detached mode with the -d flag and implementing proper container lifecycle management.
Suggested change:
    -    docker images && sleep 1s
    +    docker run -d --name vllm_test_container --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
    +    # Wait for the container to be ready (optional: implement healthcheck/wait logic here)
    +    docker images && sleep 1s
    +    # Cleanup: stop and remove the test container
    +    docker stop vllm_test_container && docker rm vllm_test_container
    service_list="chatqna chatqna-ui dataprep retriever nginx"
    docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
    docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
    docker images && sleep 1s
This Docker run command is executed without proper backgrounding or container cleanup. Consider running in detached mode with the -d flag and implementing proper container lifecycle management.
Suggested change:
    -    docker images && sleep 1s
    +    VLLM_CONTAINER_ID=$(docker run -d --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY)
    +    docker images && sleep 1s
    +    # Stop and remove the container after use
    +    docker stop $VLLM_CONTAINER_ID
    +    docker rm $VLLM_CONTAINER_ID
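A complementary pattern to explicit stop/rm calls is an EXIT trap, so the dummy container is removed even when a validation step fails mid-script. A sketch; the container name is illustrative, not taken from the PR:

```shell
#!/bin/bash
# Remove the dummy endpoint container on any script exit, success or failure.
# "vllm-dummy-endpoint" is an assumed --name for the docker run command.
cleanup_dummy_endpoint() {
  docker rm -f vllm-dummy-endpoint >/dev/null 2>&1 || true
}
trap cleanup_dummy_endpoint EXIT
```

With the trap in place, the script can exit early on a failed check without leaking the container or its port binding.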
This PR is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
This PR was closed because it has been stalled for 7 days with no activity.
Description
Test script for remote endpoint
Issues
List the issue or RFC link this PR is working on. If there is no such link, please mark it as n/a.
Type of change
List the type of change like below. Please delete options that are not relevant.
Dependencies
NA
Tests
Describe the tests that you ran to verify your changes.