Releases: marcosborges/terraform-aws-loadtest-distribuited
Added support and a usage example for Locust distributed mode
Using Locust in distributed mode
For basic usage, you need to specify which network will be used, where your test plan scripts are located, and the number of nodes needed to generate the desired load.
module "loadtest-distribuited" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao-locust"
nodes_size = var.node_size
executor = "locust"
loadtest_dir_source = "../plan/"
# LEADER ENTRYPOINT
loadtest_entrypoint = <<-EOT
nohup locust \
-f ${var.locust_plan_filename} \
--web-port=8080 \
--expect-workers=${var.node_size} \
--master > locust-leader.out 2>&1 &
EOT
# NODES ENTRYPOINT
node_custom_entrypoint = <<-EOT
nohup locust \
-f ${var.locust_plan_filename} \
--worker \
--master-host={LEADER_IP} > locust-worker.out 2>&1 &
EOT
subnet_id = data.aws_subnet.current.id
locust_plan_filename = var.locust_plan_filename
}
variable "node_size" {
description = "Size of total nodes"
default = 2
}
variable "locust_plan_filename" {
default = "locust/basic.py"
}
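For illustration, the variables above can be overridden at apply time; a hypothetical run that scales the cluster to four worker nodes might look like this:

```sh
# Override the node count and plan file when applying
# (values are illustrative):
terraform apply \
  -var="node_size=4" \
  -var="locust_plan_filename=locust/basic.py"
```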
Example of usage with Taurus and some fixes
Taurus Basic Configuration
For basic usage, you need to specify which network will be used, where your test plan scripts are located, and the number of nodes needed to generate the desired load.
module "loadtest" {
source = "../../"
name = "nome-da-implantacao-taurus"
executor = "bzt"
loadtest_dir_source = "../plan/"
loadtest_entrypoint = "bzt -q -o execution.0.distributed=\"{NODES_IPS}\" taurus/*.yml"
nodes_size = 2
subnet_id = data.aws_subnet.current.id
}
data "aws_subnet" "current" {
filter {
name = "tag:Name"
values = ["subnet-prd-a"]
}
}
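For reference, the module replaces the {NODES_IPS} placeholder with the addresses of the load nodes before the entrypoint runs. Assuming it expands to a comma-separated list (the IPs below are made up), the command executed on the leader would look roughly like:

```sh
# Rendered Taurus entrypoint with two hypothetical node IPs:
bzt -q -o execution.0.distributed="10.0.1.10,10.0.1.11" taurus/*.yml
```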
Fixes
- Fixed an error that was generated when the splitter was disabled.
Automatic fragmentation of mass data files across load nodes
Split data between nodes
Splitting the data mass file between the load-executing nodes removes a common point of friction when running distributed load tests. Activating this option is simple: set the split_data_mass_between_nodes variable, enabling the feature and listing the data mass files to be distributed. See the example below.
Example
module "loadtest" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
executor = "jmeter"
loadtest_dir_source = "../plan/"
nodes_size = 2
loadtest_entrypoint = "jmeter -n -t jmeter/basic-with-data.jmx -R \"{NODES_IPS}\" -l /loadtest/logs -e -o /var/www/html/jmeter -Dnashorn.args=--no-deprecation-warning -Dserver.rmi.ssl.disable=true -LDEBUG "
split_data_mass_between_nodes = {
enable = true
data_mass_filenames = [
"data/users.csv"
]
}
subnet_id = data.aws_subnet.current.id
}
Behind the scenes:
- All data mass files are sent to the leader.
- The leader then divides each file into one fragment per node.
- Finally, each fragment is sent to its respective node.
Splitting uses the Linux split command, which divides the main file into one fragment per node; a sketch follows. More info: Split doc
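As a minimal sketch of what the leader does, assuming GNU split, two nodes, and the users.csv file from the example above:

```sh
# Split data/users.csv into 2 line-based fragments (l/2 keeps
# whole lines together), one fragment per node:
split -n l/2 data/users.csv users.csv.

# Produces users.csv.aa and users.csv.ab; each fragment is then
# shipped to its respective node.
ls users.csv.*
```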
Basic usage with JMeter
Basic Config:
For basic usage, you need to specify which network will be used, where your test plan scripts are located, and the number of nodes needed to generate the desired load.
module "loadtest-distribuited" {
source = "marcosborges/loadtest-distribuited/aws"
name = "nome-da-implantacao"
executor = "jmeter"
loadtest_dir_source = "../plan/"
nodes_size = 2
loadtest_entrypoint = "jmeter -n -t jmeter/*.jmx -R \"{NODES_IPS}\" -l /var/logs/loadtest -e -o /var/www/html -Dnashorn.args=--no-deprecation-warning -Dserver.rmi.ssl.disable=true "
subnet_id = data.aws_subnet.current.id
}
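As with the Taurus example, {NODES_IPS} is replaced with the node addresses before execution. With two hypothetical nodes and a single plan file, the rendered command would be roughly:

```sh
# Rendered JMeter entrypoint (plan name and IPs are illustrative):
jmeter -n -t jmeter/basic.jmx -R "10.0.1.10,10.0.1.11" \
  -l /var/logs/loadtest -e -o /var/www/html \
  -Dnashorn.args=--no-deprecation-warning \
  -Dserver.rmi.ssl.disable=true
```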