Conversation

srinarayan-srikanthan (Collaborator)

Description

Test script for remote endpoint

Issues

List the issue or RFC link this PR is working on. If there is no such link, please mark it as n/a.

Type of change

List the type of change like below. Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • [x] New feature (non-breaking change which adds new functionality)
  • Breaking change (fix or feature that would break existing design and interface)
  • Others (enhancement, documentation, validation, etc.)

Dependencies

NA

Tests

Describe the tests that you ran to verify your changes.

Signed-off-by: Ubuntu <azureuser@denvr-inf.kifxisxbiwme5gt4kkwqsfdjuh.dx.internal.cloudapp.net>
@Copilot Copilot AI review requested due to automatic review settings August 14, 2025 03:48
@srinarayan-srikanthan srinarayan-srikanthan self-assigned this Aug 14, 2025

github-actions bot commented Aug 14, 2025

Dependency Review

✅ No vulnerabilities or license issues found.

Scanned Files

None

@louie-tsai louie-tsai requested review from louie-tsai and removed request for Copilot August 14, 2025 03:49
fi
}

function stop_docker() {
Collaborator:

add a line to stop the vllm-cpu docker container
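A minimal sketch of what that addition could look like. The image tag is copied from the `docker run` command elsewhere in this PR; the helper name and the use of an `ancestor` filter are assumptions, not the author's code:

```shell
# Hypothetical helper for stop_docker: find and remove any container
# started from the vllm-cpu image used as the dummy remote endpoint.
function stop_vllm_cpu() {
    local cid
    cid=$(docker ps -aq --filter "ancestor=public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2")
    if [[ -n "$cid" ]]; then
        docker stop $cid && docker rm $cid
    fi
}
```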

fi
}

function stop_docker() {
Collaborator:

add a line to stop the vllm-cpu docker container

"stream=False"
}

function stop_docker() {
Collaborator:

add a line to stop the vllm-cpu docker container

fi
}

function stop_docker() {
Collaborator:

add a line to stop the vllm-cpu docker container


# Start Docker Containers
docker compose -f compose_remote.yaml -f compose.telemetry.yaml up -d --quiet-pull > ${LOG_PATH}/start_services_with_compose_remote.log
}
Collaborator:

add logic to wait for the longest docker container to spin up, usually the backend service. example: https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/tests/test_compose_on_xeon.sh#L47
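Following the pattern of the linked ChatQnA example, the closed-loop check might look like this sketch; the container name in the usage comment, the log line grepped for, and the retry budget are all assumptions:

```shell
# Hypothetical readiness poll: retry until the container's logs show a
# ready marker, instead of relying on a fixed sleep.
function wait_for_service() {
    local container=$1 pattern=$2 max_retries=${3:-100}
    local n=0
    until [[ $n -ge $max_retries ]]; do
        if docker logs "$container" 2>&1 | grep -q "$pattern"; then
            return 0
        fi
        sleep 5s
        n=$((n + 1))
    done
    echo "Timed out waiting for $container" >&2
    return 1
}

# e.g. wait_for_service vllm-service "Application startup complete"
```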

export API_KEY=$TEST_KEY
export LLM_MODEL_ID=TinyLlama/TinyLlama-1.1B-Chat-v1.0
# Start Docker Containers
docker compose -f compose_remote.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
Collaborator:

add logic to wait for the longest docker container to spin up, usually the backend service

export API_KEY=$TEST_KEY
export LLM_MODEL_ID=TinyLlama/TinyLlama-1.1B-Chat-v1.0
docker compose -f compose_remote.yaml up -d > ${LOG_PATH}/start_services_with_compose_remote.log
sleep 1m
Collaborator:

Sleeping for 1 minute is ok from experience waiting for the docker containers to spin up. But if you want to make it a closed-loop check, add logic to wait for the longest docker container to spin up, usually the backend service


# Start Docker Containers
docker compose -f compose_remote.yaml up -d > ${LOG_PATH}/start_services_with_compose_remote.log
sleep 30s
Collaborator:

Sleeping for 30 seconds is ok, but if you want to make it a closed-loop check, add logic to wait for the longest docker container to spin up, usually the backend service

@alexsin368 alexsin368 (Collaborator) left a comment

TEST_KEY needs to be set somewhere; the vllm-cpu docker image used as a dummy remote endpoint uses a fixed-version image; and logic needs to be added to wait for the docker containers to spin up before proceeding to validate the services.
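For the first point, one way the script could set `TEST_KEY` is a default-with-override; the placeholder value below is hypothetical and not a real secret:

```shell
# Hypothetical default: let CI inject TEST_KEY, and fall back to a dummy
# value so the vllm-cpu endpoint and the test client agree on the key.
export TEST_KEY=${TEST_KEY:-"dummy-test-key"}
```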

echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="chatqna chatqna-ui dataprep retriever nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
Collaborator:

why this docker run command inside docker build function? you could move it to start services and do sleep in between.
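A sketch of the suggested move, with the `docker run` pulled out of the build step and started alongside the compose stack; the `-d` flag, the container name, and the function name are assumptions layered on the snippets quoted above:

```shell
# Hypothetical refactor: build_docker_images only builds, and the dummy
# remote endpoint is started as part of service startup instead.
function start_remote_endpoint() {
    docker run -d --name vllm-remote --env "VLLM_SKIP_WARMUP=true" \
        -p 8000:8000 --ipc=host \
        public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 \
        --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key "$TEST_KEY"
    sleep 30s  # or replace with a closed-loop readiness check
}
```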

@Copilot Copilot AI review requested due to automatic review settings August 19, 2025 20:57
@Copilot Copilot AI (Contributor) left a comment

Pull Request Overview

This PR adds test scripts for remote endpoint testing across different OPEA services. The scripts are designed to validate remote LLM endpoint configurations for ChatQnA, CodeGen, DocSum, and ProductivitySuite services running on Intel Xeon processors.

Key changes include:

  • Addition of comprehensive test scripts for four different OPEA services with remote endpoint support
  • Implementation of microservice and megaservice validation functions
  • Integration of frontend testing capabilities for UI components

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 9 comments.

File: Description
ProductivitySuite/tests/test_compose_remote_on_xeon.sh: Test script for ProductivitySuite with remote endpoint validation, including ChatQnA and CodeGen services
DocSum/tests/test_compose_remote_on_xeon.sh: Document summarization service test script supporting text, audio, and video input formats
CodeGen/tests/test_compose_remote_on_xeon.sh: Code generation service test script with microservice validation and frontend testing
ChatQnA/tests/test_compose_remote_on_xeon.sh: Chat Q&A service test script with embedding, retrieval, and reranking microservice validation


IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
Copilot AI, Aug 19, 2025:

The echo statements are displaying variable names instead of values. Should be echo "REGISTRY=${IMAGE_REPO}" and echo "TAG=${IMAGE_TAG}"

Suggested change:
- echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
+ echo "REGISTRY=${IMAGE_REPO}"
+ echo "TAG=${IMAGE_TAG}"


WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
Copilot AI, Aug 19, 2025:

The echo statements are displaying variable names instead of values. Should be echo "REGISTRY=${IMAGE_REPO}" and echo "TAG=${IMAGE_TAG}"

Suggested change:
- echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
+ echo "REGISTRY=${IMAGE_REPO}"
+ echo "TAG=${IMAGE_TAG}"


IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
Copilot AI, Aug 19, 2025:

The echo statements are displaying variable names instead of values. Should be echo "REGISTRY=${IMAGE_REPO}" and echo "TAG=${IMAGE_TAG}"

Suggested change:
- echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
+ echo "REGISTRY=${IMAGE_REPO}"
+ echo "TAG=${IMAGE_TAG}"


IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
Copilot AI, Aug 19, 2025:

The echo statements are displaying variable names instead of values. Should be echo "REGISTRY=${IMAGE_REPO}" and echo "TAG=${IMAGE_TAG}"

Suggested change:
- echo "TAG=IMAGE_TAG=${IMAGE_TAG}"
+ echo "REGISTRY=${IMAGE_REPO}"
+ echo "TAG=${IMAGE_TAG}"



echo "::group::validate_frontend"
validate_frontend
echo "::endgroup::"
Copilot AI, Aug 19, 2025:

The validate_frontend function is called but never defined in this script, which will cause a runtime error.


echo "Build all the images with --no-cache, check docker_image_build.log for details..."
docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log
docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
docker images && sleep 1s
Copilot AI, Aug 19, 2025:

This Docker run command is executed without proper backgrounding or container cleanup. Consider running in detached mode with -d flag and implementing proper container lifecycle management.

Suggested change:
- docker images && sleep 1s
+ container_id=$(docker run -d --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY)
+ docker images && sleep 1s
+ # Cleanup the container after use
+ docker stop $container_id
+ docker rm $container_id


service_list="docsum docsum-gradio-ui whisper llm-docsum"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
docker images && sleep 1s
Copilot AI, Aug 19, 2025:

This Docker run command is executed without proper backgrounding or container cleanup. Consider running in detached mode with -d flag and implementing proper container lifecycle management.

Suggested change:
- docker images && sleep 1s
+ VLLM_CONTAINER_ID=$(docker run -d --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY)
+ docker images && sleep 1s
+ # Cleanup: stop and remove the container
+ docker stop $VLLM_CONTAINER_ID
+ docker rm $VLLM_CONTAINER_ID



docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
docker images && sleep 1s
Copilot AI, Aug 19, 2025:

This Docker run command is executed without proper backgrounding or container cleanup. Consider running in detached mode with -d flag and implementing proper container lifecycle management.

Suggested change:
- docker images && sleep 1s
+ docker run -d --name vllm_test_container --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
+ # Wait for the container to be ready (optional: implement healthcheck/wait logic here)
+ docker images && sleep 1s
+ # Cleanup: stop and remove the test container
+ docker stop vllm_test_container && docker rm vllm_test_container


service_list="chatqna chatqna-ui dataprep retriever nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log
docker run --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY
docker images && sleep 1s
Copilot AI, Aug 19, 2025:

This Docker run command is executed without proper backgrounding or container cleanup. Consider running in detached mode with -d flag and implementing proper container lifecycle management.

Suggested change:
- docker images && sleep 1s
+ VLLM_CONTAINER_ID=$(docker run -d --env "VLLM_SKIP_WARMUP=true" -p 8000:8000 --ipc=host public.ecr.aws/q9t5s3a7/vllm-cpu-release-repo:v0.9.2 --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --api-key $TEST_KEY)
+ docker images && sleep 1s
+ # Stop and remove the container after use
+ docker stop $VLLM_CONTAINER_ID
+ docker rm $VLLM_CONTAINER_ID


@CICD-at-OPEA (Collaborator) commented:

This PR is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

@CICD-at-OPEA (Collaborator) commented:

This PR was closed because it has been stalled for 7 days with no activity.
