13 changes: 13 additions & 0 deletions .github/dependabot.yml
@@ -0,0 +1,13 @@
# Set update schedule for GitHub Actions

version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"

- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "monthly"
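The new Dependabot config can be sanity-checked before committing. A minimal sketch (the temp-file and grep approach is illustrative only, not part of this PR):

```shell
# Write the dependabot config from this PR to a temp file and assert that
# both ecosystems and the monthly interval are present.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "monthly"
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "monthly"
EOF
count=$(grep -c 'package-ecosystem:' "$cfg")
grep -q '"github-actions"' "$cfg"
grep -q '"docker"' "$cfg"
grep -q 'interval: "monthly"' "$cfg"
echo "ecosystems declared: $count"
rm -f "$cfg"
```

GitHub also validates `dependabot.yml` server-side on push, so this kind of local check is only a convenience.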
4 changes: 3 additions & 1 deletion CHANGELOG/CHANGELOG-2.1.md
@@ -21,4 +21,6 @@ When cutting a new release, update the `unreleased` heading to the tag being generated
* [#48](https://github.com/datastax/zdm-proxy/issues/48) Fix scheduler shutdown race condition
* [#69](https://github.com/datastax/zdm-proxy/issues/69) Client connection can be closed before proxy returns protocol error
* [#76](https://github.com/datastax/zdm-proxy/issues/76) Log error when closing connection
* [#74](https://github.com/datastax/zdm-proxy/issues/74) Handshakes with auth enabled can deadlock if multiple handshakes are happening concurrently
* [#74](https://github.com/datastax/zdm-proxy/issues/74) Handshakes with auth enabled can deadlock if multiple handshakes are happening concurrently

---
4 changes: 2 additions & 2 deletions README.md
@@ -121,7 +121,7 @@ The setup we described above is only for testing in a local environment. It is *
installation where the minimum number of proxy instances is 3.

For a comprehensive guide with the recommended production setup check the documentation available at
[Datastax Migration](https://docs.datastax.com/en/astra-serverless/docs/migrate/introduction.html).
[DataStax Migration](https://docs.datastax.com/en/data-migration/introduction.html).

There you'll find information about an Ansible-based tool that automates most of the process.

@@ -131,5 +131,5 @@ For information on the packaged dependencies of the Zero Downtime Migration (ZDM

## Frequently Asked Questions

For frequently asked questions, please refer to our separate [FAQ](https://docs.datastax.com/en/astra-serverless/docs/migrate/faqs.html) page.
For frequently asked questions, please refer to our separate [FAQ](https://docs.datastax.com/en/data-migration/faqs.html) page.

71 changes: 35 additions & 36 deletions compose/nosqlbench-entrypoint.sh
@@ -1,9 +1,13 @@
#!/bin/sh
apk add --no-cache netcat-openbsd
apk add py3-pip

apt -qq update -y
apt -qq install netcat-openbsd python3-pip -y

echo "deb https://downloads.datastax.com/deb stable main" | tee -a /etc/apt/sources.list.d/datastax.sources.list
curl -sL https://downloads.datastax.com/deb/doc/apt_key.gpg | apt-key add -
pip install cqlsh
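Since the entrypoint now installs its tooling at container start, a small guard (a sketch; `check_tool` is not part of this PR) can make a missing-dependency failure obvious before any job runs:

```shell
# Report whether each tool required later in the script landed on PATH
# after the apt/pip installs above.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 available"
  else
    echo "$1 missing"
  fi
}
check_tool nc
check_tool cqlsh
check_tool java
```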

function test_conn() {
test_conn() {
nc -z -v $1 9042;
while [ $? -ne 0 ];
do echo "CQL port not ready on $1";
@@ -17,62 +21,57 @@ test_conn zdm_tests_origin
test_conn zdm_tests_target
test_conn zdm_tests_proxy
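The wait-for-port pattern used by `test_conn` can be sketched as a standalone POSIX-sh function (generic host/port instead of the hard-coded CQL port, plus a retry cap so it cannot loop forever; the retry cap is an addition, not part of the PR; requires netcat):

```shell
# Block until a TCP port accepts connections, giving up after a bounded
# number of attempts instead of looping indefinitely.
wait_for_port() {
  host="$1"; port="$2"; tries="${3:-30}"
  i=0
  until nc -z "$host" "$port" 2>/dev/null; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      echo "gave up waiting for $host:$port" >&2
      return 1
    fi
    echo "port $port not ready on $host"
    sleep 1
  done
  echo "port $port ready on $host"
}
```

For example, `wait_for_port zdm_tests_proxy 9042` would reproduce the behavior of `test_conn zdm_tests_proxy` above.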

set -e
set -xe

echo "Creating schema"
cat /source/nb-tests/schema.cql | cqlsh zdm_tests_proxy
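For reference, a plausible shape for `/source/nb-tests/schema.cql`, inferred from the keyspace/table defaults in `cql_nb_activity.yaml` (`test.keyvalue` with an int key and text value); the actual file in the repo may differ:

```shell
# Print the inferred schema; in the real entrypoint this content would be
# piped to cqlsh, as in: cat /source/nb-tests/schema.cql | cqlsh zdm_tests_proxy
schema=$(cat <<'EOF'
CREATE KEYSPACE IF NOT EXISTS test
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS test.keyvalue (
  key int PRIMARY KEY,
  value text
);
EOF
)
echo "$schema"
```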

echo "Running NoSQLBench RAMPUP job"
java -jar /nb.jar \
java -jar /nb5.jar \
--show-stacktraces \
/source/nb-tests/cql-nb-activity.yaml \
/source/nb-tests/cql_nb_activity.yaml \
rampup \
driver=cqld3 \
hosts=zdm_tests_proxy \
localdc=datacenter1 \
errors=retry \
errors=counter,retry \
-v

echo "Running NoSQLBench WRITE job"
java -jar /nb.jar \
java -jar /nb5.jar \
--show-stacktraces \
/source/nb-tests/cql-nb-activity.yaml \
/source/nb-tests/cql_nb_activity.yaml \
write \
driver=cqld3 \
hosts=zdm_tests_proxy \
localdc=datacenter1 \
errors=retry \
errors=counter,retry \
-v

echo "Running NoSQLBench READ job"
java -jar /nb.jar \
java -jar /nb5.jar \
--show-stacktraces \
/source/nb-tests/cql-nb-activity.yaml \
/source/nb-tests/cql_nb_activity.yaml \
read \
driver=cqld3 \
hosts=zdm_tests_proxy \
localdc=datacenter1 \
errors=retry \
errors=counter,retry \
-v

echo "Running NoSQLBench VERIFY job on ORIGIN"
java -jar /nb.jar \
--show-stacktraces \
--report-csv-to /source/verify-origin \
/source/nb-tests/cql-nb-activity.yaml \
verify \
driver=cqld3 \
hosts=zdm_tests_origin \
localdc=datacenter1 \
-v
#echo "Running NoSQLBench VERIFY job on ORIGIN"
Review comment from @grighetto (Collaborator), Jul 18, 2024:

Without the VERIFY step, this test is pretty much useless, because that step is what actually compares the stored data against the pseudo-randomly generated data. I don't have the context for why the NB update is happening in this PR (whether it's for a new feature or for compatibility with newer C* versions), but if we could still run the old NB with the verify step in addition to this, until it's supported in the latest NB version, I think it would be ideal.

Reply from the Member Author:

Understood. NB 5.21 doesn't support the verify feature yet.

#java -jar /nb5.jar \
# --show-stacktraces \
# --report-csv-to /source/verify-origin \
# /source/nb-tests/cql_nb_activity.yaml \
# verify \
# hosts=zdm_tests_origin \
# localdc=datacenter1 \
# -v

echo "Running NoSQLBench VERIFY job on TARGET"
java -jar /nb.jar \
--show-stacktraces \
--report-csv-to /source/verify-target \
/source/nb-tests/cql-nb-activity.yaml \
verify \
driver=cqld3 \
hosts=zdm_tests_target \
localdc=datacenter1 \
-v
#echo "Running NoSQLBench VERIFY job on TARGET"
#java -jar /nb5.jar \
# --show-stacktraces \
# --report-csv-to /source/verify-target \
# /source/nb-tests/cql_nb_activity.yaml \
# verify \
# hosts=zdm_tests_target \
# localdc=datacenter1 \
# -v
4 changes: 2 additions & 2 deletions docker-compose-tests.yml
@@ -43,7 +43,7 @@ services:
ipv4_address: 192.168.100.103

nosqlbench:
image: nosqlbench/nosqlbench:4.15.101
image: nosqlbench/nosqlbench:5.21-latest
container_name: zdm_tests_nb
tty: true
volumes:
@@ -52,4 +52,4 @@
- /source/compose/nosqlbench-entrypoint.sh
networks:
proxy:
ipv4_address: 192.168.100.104
ipv4_address: 192.168.100.104
66 changes: 0 additions & 66 deletions nb-tests/cql-nb-activity.yaml

This file was deleted.

52 changes: 52 additions & 0 deletions nb-tests/cql_nb_activity.yaml
@@ -0,0 +1,52 @@
bindings:
seq_key: Mod(TEMPLATE(keycount,1000000000)); ToInt();
seq_value: Hash(); Mod(TEMPLATE(valuecount,1000000000)); ToString() -> String
# rw_key: TEMPLATE(keydist,Uniform(0,1000000000)->int);
rw_key: Uniform(0,1000000000)->int
# rw_value: Hash(); TEMPLATE(valdist,Uniform(0,1000000000)->int); ToString() -> String
rw_value: Hash(); Uniform(0,1000000000)->int; ToString() -> String

scenarios:
rampup: run driver=cqld4 tags=block:rampup cycles=20000
write: run driver=cqld4 tags=block:write cycles=20000
read: run driver=cqld4 tags=block:read cycles=20000
# verify: run driver=cqld4 tags=block:verify errors=warn,unverified->count compare=all cycles=20000

params:
driver: cql
prepared: true

blocks:
rampup:
params:
cl: TEMPLATE(write_cl,LOCAL_QUORUM)
ops:
rampup_insert: |
INSERT INTO TEMPLATE(keyspace,test).TEMPLATE(table,keyvalue)
(key, value)
VALUES ({seq_key},{seq_value});

# verify:
# params:
# cl: TEMPLATE(read_cl,LOCAL_QUORUM)
# ops:
# verify_select: |
# SELECT * FROM TEMPLATE(keyspace,test).TEMPLATE(table,keyvalue) WHERE key={rw_key};
# verify_fields: key->rw_key, value->rw_value

read:
params:
ratio: 1
cl: TEMPLATE(read_cl,LOCAL_QUORUM)
ops:
main_select: |
select * from TEMPLATE(keyspace,test).TEMPLATE(table,keyvalue) where key={rw_key};

write:
params:
ratio: 1
cl: TEMPLATE(write_cl,LOCAL_QUORUM)
ops:
main-insert: |
insert into TEMPLATE(keyspace,test).TEMPLATE(table,keyvalue)
(key, value) values ({rw_key}, {rw_value});
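As a sanity check on the sequential bindings above: `seq_key` maps each cycle number to `cycle mod keycount`, so the rampup block writes at most `keycount` distinct keys no matter how many cycles run. A quick illustration using plain shell arithmetic (not NoSQLBench itself; the sample keycount is arbitrary):

```shell
# Show how cycle numbers wrap around the key space for a small keycount.
keycount=1000
for cycle in 0 999 1000 2500; do
  echo "cycle=$cycle -> seq_key=$((cycle % keycount))"
done
```

This wrap-around is what lets the later read/write blocks, which draw `rw_key` uniformly from the same range, hit keys that rampup already populated.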