core: limit job GC batch size to match other GC batches #26974
Conversation
In the core scheduler we have several object types where we can delete them by ID. We batch up to 7281 UUIDs because this works out to be about 256 KiB per request, which is well below the maximum Raft log entry size we want to allow. When we GC jobs we use this same constant to size the batch, but the request body is not a list of UUIDs but instead a map of namespaced job IDs to a pointer to an options struct. This pushes the batch size to roughly 746 KiB (assuming UUID-sized job names), which is going to impact performance if GC happens during large volumes of short-lived dispatch work where users may be GC'ing jobs frequently.

Limit the batch size for `JobBatchDeregisterRequest` to roughly the same size as the requests that are lists of UUIDs.

Ref: https://hashicorp.atlassian.net/browse/NMD-1041
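(As a back-of-the-envelope check of those figures, here is an illustrative sketch, not part of the change; the 16-byte string headers and 1-byte options struct are approximations of the per-entry overhead.)

```go
// Rough arithmetic behind the sizes quoted above: a UUID string is 36 bytes,
// so 7281 of them is about 256 KiB. Each map entry of NamespacedID to
// *JobDeregisterOptions carries two UUID-sized strings (job ID and namespace),
// two string headers (~16 bytes each on 64-bit), and a small options struct,
// or roughly 105 bytes per job.
package main

import "fmt"

func main() {
	const uuidLen = 36
	const batch = 7281

	uuidListKiB := float64(batch*uuidLen) / 1024
	jobMapKiB := float64(batch*(2*uuidLen+2*16+1)) / 1024

	fmt.Printf("UUID list request:  ~%.1f KiB\n", uuidListKiB) // ~256 KiB
	fmt.Printf("job deregister map: ~%.1f KiB\n", jobMapKiB)   // ~746 KiB
}
```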
Force-pushed from 550dce4 to 1fed054
It doesn't make sense to have a test we run on every commit to determine the size of an object, but here's how I determined the sizes (which have a lot of reference objects like strings to account for):

nomad/core_sched_test.go

```go
package nomad

import (
	"fmt"
	"testing"
	"unsafe"

	"github.com/hashicorp/nomad/helper/uuid"
	"github.com/hashicorp/nomad/nomad/structs"
)

func TestMaximumGCRequestSizes(t *testing.T) {
	// check builds a JobBatchDeregisterRequest with batchSize jobs and prints
	// its approximate in-memory size, including the string data referenced by
	// the map entries.
	check := func(batchSize int) {
		largestSize := 0
		req := &structs.JobBatchDeregisterRequest{
			Jobs: map[structs.NamespacedID]*structs.JobDeregisterOptions{},
			WriteRequest: structs.WriteRequest{
				Region:    "global",
				AuthToken: uuid.Generate(),
			},
		}
		for range batchSize {
			id := structs.NewNamespacedID(uuid.Generate(), uuid.Generate())
			opts := structs.JobDeregisterOptions{
				Purge: true,
			}
			largestSize += len(id.ID)
			largestSize += len(id.Namespace)
			largestSize += int(unsafe.Sizeof(id))
			largestSize += int(unsafe.Sizeof(opts))
			req.Jobs[id] = &opts
		}
		largestSize += int(unsafe.Sizeof(*req))
		largestSize += int(unsafe.Sizeof(req.Jobs))
		largestSize += int(unsafe.Sizeof(req.WriteRequest.AuthToken))
		largestSize += int(unsafe.Sizeof(req.WriteRequest.Region))
		fmt.Printf("size of JobBatchDeregisterRequest with batch size of %d jobs: %d KiB\n", batchSize, largestSize/1024)
	}

	check(structs.MaxUUIDsPerWriteRequest)
	check(2048)

	// For comparison, size an EvalReapRequest holding MaxUUIDsPerWriteRequest
	// eval UUIDs, which is the shape the other GC batches use.
	largestSize := 0
	evalReq := &structs.EvalReapRequest{
		Evals:  []string{},
		Allocs: []string{},
		WriteRequest: structs.WriteRequest{
			Region:    "global",
			AuthToken: uuid.Generate(),
		},
	}
	for range structs.MaxUUIDsPerWriteRequest {
		id := uuid.Generate()
		largestSize += len(id)
		evalReq.Evals = append(evalReq.Evals, id)
	}
	largestSize += int(unsafe.Sizeof(*evalReq))
	largestSize += int(unsafe.Sizeof(evalReq.Evals))
	largestSize += int(unsafe.Sizeof(evalReq.Allocs))
	largestSize += int(unsafe.Sizeof(evalReq.WriteRequest.AuthToken))
	largestSize += int(unsafe.Sizeof(evalReq.WriteRequest.Region))
	fmt.Printf("size of EvalReapRequest with 7281 UUIDs: %d KiB\n", largestSize/1024)
}
```
```go
// Call to the leader to issue the reap with a batch size intended to be
// similar to the GC by batches of UUIDs for evals, allocs, and nodes
// (limited by structs.MaxUUIDsPerWriteRequest)
```
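(For anyone skimming the thread, here is a minimal sketch of the batching shape that comment refers to; the constant name, its value, and the helper function are assumptions for illustration, not the actual change.)

```go
package nomad

import "github.com/hashicorp/nomad/nomad/structs"

// maxJobsPerDeregisterBatch is an assumed name and value: around 2048 entries
// keeps a JobBatchDeregisterRequest near the ~256 KiB target that the
// UUID-list GC requests aim for.
const maxJobsPerDeregisterBatch = 2048

// batchJobIDs splits the GC-eligible jobs into bounded batches so each
// deregister request sent to the leader stays below the size target.
func batchJobIDs(ids []structs.NamespacedID) [][]structs.NamespacedID {
	var batches [][]structs.NamespacedID
	for len(ids) > maxJobsPerDeregisterBatch {
		batches = append(batches, ids[:maxJobsPerDeregisterBatch])
		ids = ids[maxJobsPerDeregisterBatch:]
	}
	if len(ids) > 0 {
		batches = append(batches, ids)
	}
	return batches
}
```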
By the way, I think we'll want to revisit this limit if we adopt raft-wal in Nomad 1.12.0, but let's cross that bridge when we get to it
I don't really have a suggestion here, just a note/question. UUIDs are a known length, but namespaces are variable. Your test uses UUID-length namespaces and job IDs; do we expect real namespace and job names to get that long?
That's a very reasonable question! But I've also been really surprised at how long namespace and job IDs get for some of our users who are running very large/busy clusters (who are also the only folks for whom this issue is really going to matter). And we add 29 characters to the length of every dispatch job ID!
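(For context on where those 29 characters come from, here is an illustrative snippet; the parent name and timestamp are made up, and the suffix format reflects my reading of the dispatch ID helper.)

```go
// Illustrative only: a dispatched child job ID is the parent ID plus
// "/dispatch-", a 10-digit unix timestamp, "-", and an 8-character UUID
// prefix, so 10 + 10 + 1 + 8 = 29 extra characters.
package main

import (
	"fmt"
	"time"
)

func main() {
	parent := "mybatch"
	childID := fmt.Sprintf("%s/dispatch-%d-%s", parent, time.Now().Unix(), "8a2f91cd")
	fmt.Println(childID) // e.g. mybatch/dispatch-1700000000-8a2f91cd
	fmt.Println(len(childID)-len(parent), "extra characters")
}
```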
LGTM!
LGTM!
Contributor Checklist
- Changelog: a changelog entry for user-facing changes is generated and added using the `make cl` command.
- Testing: tests are added to ensure regressions will be caught.
- Documentation: if the change impacts the CLI, API, or job configuration, please update the Nomad website documentation to reflect this. Refer to the website README for docs guidelines. Please also consider whether the change requires notes within the upgrade guide.
Reviewer Checklist
- Backport labels: applied as described in the backporting document.
- Merge method: squash-and-merge in the majority of situations. The main exceptions are long-lived feature branches or merges where history should be preserved.
- Enterprise PRs: any required changelog entries are added within the public repository.
Changes to Security Controls
Are there any changes to security controls (access controls, encryption, logging) in this pull request? If so, explain.