KAFKA-17747: [2/N] Add compute topic and group hash #19523
Conversation
Signed-off-by: PoAn Yang <[email protected]>
@FrankYang0529 Thanks for the patch. I have started diving into it. I left some questions to start with.
gradle/dependencies.gradle (outdated)

```
@@ -147,6 +148,7 @@ libs += [
    caffeine: "com.github.ben-manes.caffeine:caffeine:$versions.caffeine",
    classgraph: "io.github.classgraph:classgraph:$versions.classgraph",
    commonsValidator: "commons-validator:commons-validator:$versions.commonsValidator",
    guava: "com.google.guava:guava:$versions.guava",
```
This is something that we haven't really discussed in the KIP because it is an implementation detail but we should discuss whether we really want to take a dependency on Guava.
I updated the PR to remove Guava. I think we can put all the data into a byte array and hash it with Murmur3, so we don't need to rely on Guava.
Thanks!
While talking to @ijuma about it, he suggested looking into https://github.com/lz4/lz4-java/tree/master/src/java/net/jpountz/xxhash. We get it via lz4 and it is apparently much faster than Murmur3. It may be worth running a few benchmarks to compare them. What do you think?
I also wonder what is the impact of putting all the data to a byte array before hashing it. Do you have thoughts on this?
IIRC, lz4-java is not maintained anymore, so I'm not sure whether maintaining that code is a risk.

> I also wonder what is the impact of putting all the data to a byte array before hashing it. Do you have thoughts on this?

I suggest that EventProcessorThread can leverage GrowableBufferSupplier to reuse the buffer as much as possible. Additionally, Group#computeTopicHash should use ByteBufferOutputStream to generate the byte array, as ByteBufferOutputStream#buffer#array can avoid the extra array copy that ByteArrayOutputStream#toByteArray incurs.
This hash function is used by zstd too. It's pretty safe to rely on it given that lz4 and zstd are the most popular compression algorithms, and we will be supporting them for the foreseeable future.
Which particular implementation we use is a fair question.
> IIRC, lz4-java is not maintained anymore, so I'm not sure whether maintaining that code is a risk.
>
> I suggest that EventProcessorThread can leverage GrowableBufferSupplier to reuse the buffer as much as possible. Additionally, Group#computeTopicHash should use ByteBufferOutputStream to generate the byte array, as ByteBufferOutputStream#buffer#array can avoid the extra array copy that ByteArrayOutputStream#toByteArray incurs.

I agree with using a BufferSupplier in order to reuse buffers. However, EventProcessorThread may be too low level to hold it. Having it in Shard may be enough.
Thanks for the suggestion. Updated benchmark results:
```
Benchmark                   (numReplicasPerBroker)  (partitionsPerTopic)  (replicationFactor)  Mode  Cnt     Score    Error  Units
TopicHashBenchmark.testLz4                      10                    10                    3  avgt   15   166.389 ±  1.542  ns/op
TopicHashBenchmark.testLz4                      10                    50                    3  avgt   15   375.660 ±  2.771  ns/op
TopicHashBenchmark.testLz4                      10                   100                    3  avgt   15   636.176 ±  8.305  ns/op
TopicHashBenchmark.testMurmur                   10                    10                    3  avgt   15   238.242 ±  1.664  ns/op
TopicHashBenchmark.testMurmur                   10                    50                    3  avgt   15  1143.583 ±  5.981  ns/op
TopicHashBenchmark.testMurmur                   10                   100                    3  avgt   15  2278.680 ± 29.007  ns/op
```
TopicHashBenchmark.java

```java
package org.apache.kafka.jmh.metadata;

import org.apache.kafka.common.Uuid;
import org.apache.kafka.common.metadata.RegisterBrokerRecord;
import org.apache.kafka.image.ClusterDelta;
import org.apache.kafka.image.ClusterImage;
import org.apache.kafka.image.TopicsDelta;
import org.apache.kafka.image.TopicImage;
import org.apache.kafka.metadata.BrokerRegistration;
import org.apache.kafka.streams.state.internals.Murmur3;

import net.jpountz.xxhash.XXHash64;
import net.jpountz.xxhash.XXHashFactory;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import static org.apache.kafka.jmh.metadata.TopicsImageSnapshotLoadBenchmark.getInitialTopicsDelta;
import static org.apache.kafka.jmh.metadata.TopicsImageSnapshotLoadBenchmark.getNumBrokers;

@State(Scope.Benchmark)
@Fork(value = 1)
@Warmup(iterations = 5)
@Measurement(iterations = 15)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class TopicHashBenchmark {
    @Param({"10", "50", "100"})
    private int partitionsPerTopic;

    @Param({"3"})
    private int replicationFactor;

    @Param({"10"})
    private int numReplicasPerBroker;

    private byte[] topicBytes;

    @Setup(Level.Trial)
    public void setup() throws IOException {
        TopicsDelta topicsDelta = getInitialTopicsDelta(1, partitionsPerTopic, replicationFactor, numReplicasPerBroker);
        int numBrokers = getNumBrokers(1, partitionsPerTopic, replicationFactor, numReplicasPerBroker);
        ClusterDelta clusterDelta = new ClusterDelta(ClusterImage.EMPTY);
        for (int i = 0; i < numBrokers; i++) {
            clusterDelta.replay(new RegisterBrokerRecord()
                .setBrokerId(i)
                .setRack(Uuid.randomUuid().toString())
            );
        }

        TopicImage topicImage = topicsDelta.apply().topicsById().values().stream().findFirst().get();
        ClusterImage clusterImage = clusterDelta.apply();
        try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
             DataOutputStream dos = new DataOutputStream(baos)) {
            dos.writeByte(0); // magic byte
            dos.writeLong(topicImage.id().hashCode()); // topic ID
            dos.writeUTF(topicImage.name()); // topic name
            dos.writeInt(topicImage.partitions().size()); // number of partitions
            for (int i = 0; i < topicImage.partitions().size(); i++) {
                dos.writeInt(i); // partition id
                List<String> sortedRacksList = Arrays.stream(topicImage.partitions().get(i).replicas)
                    .mapToObj(clusterImage::broker)
                    .filter(Objects::nonNull)
                    .map(BrokerRegistration::rack)
                    .filter(Optional::isPresent)
                    .map(Optional::get)
                    .sorted()
                    .toList();
                String racks = IntStream.range(0, sortedRacksList.size())
                    .mapToObj(idx -> idx + ":" + sortedRacksList.get(idx)) // Format: "index:value"
                    .collect(Collectors.joining(",")); // Separator between "index:value" pairs
                dos.writeUTF(racks); // sorted racks
            }
            dos.flush();
            topicBytes = baos.toByteArray();
        }
    }

    @Benchmark
    public void testLz4() {
        XXHash64 hash = XXHashFactory.fastestInstance().hash64();
        hash.hash(topicBytes, 0, topicBytes.length, 0);
    }

    @Benchmark
    public void testMurmur() {
        Murmur3.hash64(topicBytes);
    }
}
```
> I agree with using a BufferSupplier in order to reuse buffers. However, EventProcessorThread may be too low level to hold it. Having it in Shard may be enough.

We can revisit this when the critical code is used in production :)

@FrankYang0529 thanks for the updates. The results LGTM.
> I also wonder what is the impact of putting all the data to a byte array before hashing it. Do you have thoughts on this?

Based on KIP-1101, it minimizes how many times a topic hash needs to be computed, since the result can be shared between groups. I think we can keep this function simple for now.

> I suggest that EventProcessorThread can leverage GrowableBufferSupplier to reuse buffer as much as possible.

With a BufferSupplier, the hash function would need to be thread safe in order to reuse the buffer. We can revisit this in the future.

> Additionally, Group#computeTopicHash should use ByteBufferOutputStream to generate the byte array, as ByteBufferOutputStream#buffer#array can avoid the extra array copy that ByteArrayOutputStream#toByteArray incurs.

The ByteBufferOutputStream needs a ByteBuffer with a capacity. I wonder whether we can calculate an accurate capacity. For example, a rack string can contain any character, and each character takes 1 to 4 bytes in UTF-8.
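The capacity concern can be seen with a small dependency-free sketch (illustration only, not code from the PR): strings with the same character count can have very different UTF-8 byte lengths, so an exact buffer capacity cannot be derived from character counts alone.

```java
import java.nio.charset.StandardCharsets;

public class Utf8LengthSketch {
    // Byte length of a string when encoded as UTF-8.
    static int utf8Len(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        // Three chars each, but different byte counts in UTF-8.
        System.out.println(utf8Len("abc")); // 3: ASCII is 1 byte per char
        System.out.println(utf8Len("äöü")); // 6: these take 2 bytes each
        // One emoji: a single code point, two Java chars, four UTF-8 bytes.
        System.out.println(utf8Len("\uD83D\uDE00")); // 4
    }
}
```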
> The ByteBufferOutputStream needs a ByteBuffer with a capacity. I wonder whether we can calculate an accurate capacity. For example, a rack string can contain any character, and each character takes 1 to 4 bytes in UTF-8.

The initial capacity can be discussed later. In fact, it may not be an issue if we adopt a growable buffer; the buffer eventually becomes big enough for each hash computation.
 * @param topicHashes The map of topic hashes. Key is topic name and value is the topic hash.
 * @return The hash of the group.
 */
static long computeGroupHash(Map<String, Long> topicHashes) {
I wonder whether it is worth inlining the implementation from Guava or something similar to combine the hashes. It would avoid the extra collections. I am not sure whether it makes a real difference though. What are your thoughts?
In the KIP, you also mentioned combining the index with the hash. Is this something done within combineOrdered?
> I wonder whether it is worth inlining the implementation from Guava or something similar to combine the hashes. It would avoid the extra collections.

Yes, I copied part of the implementation into this function.

> In the KIP, you also mentioned combining the index with the hash. Is this something done within combineOrdered?

No, computeGroupHash sorts topics by name and uses this order to merge the hashes. I also added the test cases testComputeGroupHashWithDifferentOrder and testComputeGroupHashWithSameKeyButDifferentValue to verify it.
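The order-sensitive merge described above can be sketched roughly as follows. This is a simplified stand-in for the Guava-style combination being discussed (Guava's combineOrdered mixes byte-wise; the `* 37 + h` mix here is only an illustration of the same properties the tests verify):

```java
import java.util.Map;
import java.util.TreeMap;

public class GroupHashSketch {
    // Combines per-topic hashes in topic-name order: the same topics with
    // the same hashes always yield the same group hash, while swapping
    // values between topics changes the result.
    static long combine(Map<String, Long> topicHashes) {
        long combined = 0;
        // TreeMap iterates keys (topic names) in sorted order.
        for (Map.Entry<String, Long> e : new TreeMap<>(topicHashes).entrySet()) {
            // Order-sensitive mixing step (simplified, not Guava's exact mix).
            combined = combined * 37 + e.getValue();
        }
        return combined;
    }

    public static void main(String[] args) {
        Map<String, Long> a = Map.of("foo", 1L, "bar", 2L);
        Map<String, Long> b = Map.of("bar", 2L, "foo", 1L);
        // Same topics and values => same hash regardless of map order.
        System.out.println(combine(a) == combine(b)); // true
        Map<String, Long> c = Map.of("foo", 2L, "bar", 1L);
        // Same keys but swapped values => different hash.
        System.out.println(combine(a) != combine(c)); // true
    }
}
```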
.filter(Optional::isPresent)
.map(Optional::get)
.sorted()
.collect(Collectors.joining(";"));
A ";" is allowed in the rack field too, so it does not really protect us.
It looks like any character can be valid. I changed the combination to the following format: 0:<rack 0>,1:<rack 1>, ...
I think this is fine for preventing accidental collisions. It's still possible to intentionally come up with rack names that create collisions, but I believe you'd only be impacting your own cluster.

To rule out any ambiguity, we could pretend this was a serialization format and either prefix strings with their length or null-terminate them. The same goes for variable-length lists of strings: these can either be length-prefixed or terminated with an invalid string that cannot occur ("", perhaps? but I'm not sure on this).
Thanks for taking the suggestion. I think it's fine now.

Small nit though: I was actually thinking of writing the length in binary, using writeInt, and dropping the ":" and "," separators entirely. Apologies if I wasn't clear enough earlier.
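The ambiguity the reviewers are guarding against can be demonstrated with a minimal sketch (hypothetical helpers, not the PR's actual code): joining with a separator lets different rack lists encode to the same bytes, while length-prefixing each string keeps them distinct.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class RackEncodingSketch {
    // Naive encoding: join racks with ",". The lists {"a,b"} and {"a","b"}
    // produce identical strings, so their hashes would collide.
    static String joinEncode(List<String> racks) {
        return String.join(",", racks);
    }

    // Length-prefixed encoding, as suggested in the review: write each
    // string's length in binary before the string; no separators needed.
    static byte[] lengthPrefixEncode(List<String> racks) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(baos)) {
            for (String rack : racks) {
                dos.writeInt(rack.length());
                dos.writeUTF(rack);
            }
        }
        return baos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        List<String> oneRack = List.of("a,b");
        List<String> twoRacks = List.of("a", "b");
        // Separator-joined forms collide:
        System.out.println(joinEncode(oneRack).equals(joinEncode(twoRacks))); // true
        // Length-prefixed forms do not:
        System.out.println(Arrays.equals(
            lengthPrefixEncode(oneRack), lengthPrefixEncode(twoRacks))); // false
    }
}
```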
.putString(topicImage.name(), StandardCharsets.UTF_8) // topic name
.putInt(topicImage.partitions().size()); // number of partitions

topicImage.partitions().entrySet().stream().sorted(Map.Entry.comparingByKey()).forEach(entry -> {
We know that partitions go from 0 to N. I wonder whether we should use a good old for loop instead of sorting the partitions. What do you think?
Good suggestion! Thanks. Updated it.
static long computeTopicHash(TopicImage topicImage, ClusterImage clusterImage) {
    HashFunction hf = Hashing.murmur3_128();
    Hasher topicHasher = hf.newHasher()
        .putByte((byte) 0) // magic byte
Should we define a constant for the magic byte?
Yes, added TOPIC_HASH_MAGIC_BYTE.
This reverts commit f4c9749.
This reverts commit aff55ce.
byte TOPIC_HASH_MAGIC_BYTE = 0x00;
XXHash64 LZ4_HASH_INSTANCE = XXHashFactory.fastestInstance().hash64();
Would it be possible to put those and the new methods into a separate class? Having them in Group is weird because they won't be used by all the group types.
Moved to org.apache.kafka.coordinator.group.Utils. Thanks.
@FrankYang0529 thanks for this patch.
 * @return The hash of the topic.
 */
static long computeTopicHash(TopicImage topicImage, ClusterImage clusterImage) throws IOException {
    try (ByteArrayOutputStream baos = new ByteArrayOutputStream();
We can do a small optimization by using ByteBufferOutputStream, for example:

```java
try (var baos = new ByteBufferOutputStream(100);
     var dos = new DataOutputStream(baos)) {
    ...
    dos.flush();
    var topicBytes = baos.buffer().flip();
    return LZ4_HASH_INSTANCE.hash(topicBytes, 0);
}
```

LZ4_HASH_INSTANCE.hash takes a ByteBuffer to compute the hash, which avoids an array copy.
Sorry, I misunderstood ByteBufferOutputStream. I thought it used a fixed capacity even when the buffer was not big enough. After checking the source code, I see it expands the buffer when needed. Updated it. Thanks.
}

/**
 * Computes the hash of the topic id, name, number of partitions, and partition racks by Murmur3.
Please update the docs.
Updated it. Thanks.
    }
});

// Convert the byte array to long. This is taken from guava BytesHashCode#asLong.
Why not use LZ4_HASH_INSTANCE?
Yes, we can use it. Thanks.
.sorted()
.toList();

String racks = IntStream.range(0, sortedRacksList.size())
The KIP does not mention the "index" for the rack. Could it be replaced by String.join(",", sortedRacksList)?
There is no limitation on the rack string, so any character can be part of it. I can update the KIP if needed.
Signed-off-by: PoAn Yang <[email protected]>
 * @param clusterImage The cluster image.
 * @return The hash of the topic.
 */
static long computeTopicHash(TopicImage topicImage, ClusterImage clusterImage) throws IOException {
Please add documentation to remind developers that the hash is stored as part of the state. Changing the implementation of the hashing function may break compatibility with existing states.
If the hashing function is ever changed, is there a version field that should be updated?
> If the hashing function is ever changed, is there a version field that should be updated?

Yes, there is a magic byte that serves as the version.
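A rough, dependency-free sketch of why the magic byte works as a version: because it is fed into the hash input, bumping it changes every computed hash, so hashes persisted under an old scheme can never accidentally match new ones. CRC32 stands in for XXHash64 here only to keep the sketch runnable; the constant name TOPIC_HASH_MAGIC_BYTE comes from the PR, the rest is illustrative.

```java
import java.util.zip.CRC32;

public class TopicHashVersionSketch {
    // Version 0 of the topic hash format (constant name from the PR).
    static final byte TOPIC_HASH_MAGIC_BYTE = 0x00;

    // CRC32 stands in for XXHash64 purely for this sketch.
    static long hashWithMagic(byte magic, byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(magic);   // the version byte is part of the hashed input
        crc.update(payload);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] topicBytes = "foo".getBytes();
        long v0 = hashWithMagic(TOPIC_HASH_MAGIC_BYTE, topicBytes);
        long v1 = hashWithMagic((byte) 0x01, topicBytes); // hypothetical future version
        // Bumping the magic byte changes the hash of identical payloads.
        System.out.println(v0 != v1); // true
    }
}
```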
.sorted(Map.Entry.comparingByKey()) // sort by topic name
.map(Map.Entry::getValue)
.map(longToBytes::apply)
.forEach(nextBytes -> {
I think we're adding a lot of unnecessary overhead for the hash computation (multiple map calls, etc.). We should probably just use an old school loop.
.addTopic(FOO_TOPIC_ID, FOO_TOPIC_NAME, FOO_NUM_PARTITIONS)
.addRacks()
.build();

private static final XXHash64 LZ4_HASH_INSTANCE = XXHashFactory.fastestInstance().hash64();
XXH3 seems to be the fastest implementation. Did we consider using that?
Thanks for the suggestion. I benchmarked streaming XXH3, streaming XXH64, non-streaming XXH3, and non-streaming XXH64. Streaming XXH3 gets the best result; however, it needs the new library com.dynatrace.hash4j. Do we want to add it?
```
Benchmark                                      (numReplicasPerBroker)  (partitionsPerTopic)  (replicationFactor)  Mode  Cnt      Score      Error  Units
TopicHashBenchmark.testDynatraceStreamingXXH3                      10                    10                    3  avgt    5    879.241 ±    6.788  ns/op
TopicHashBenchmark.testDynatraceStreamingXXH3                      10                    50                    3  avgt    5   4192.380 ±  195.424  ns/op
TopicHashBenchmark.testDynatraceStreamingXXH3                      10                   100                    3  avgt    5   8027.227 ±  210.403  ns/op
TopicHashBenchmark.testDynatraceXXH3                               10                    10                    3  avgt    5   1676.398 ±    2.249  ns/op
TopicHashBenchmark.testDynatraceXXH3                               10                    50                    3  avgt    5   9256.175 ±   45.298  ns/op
TopicHashBenchmark.testDynatraceXXH3                               10                   100                    3  avgt    5  20195.772 ±   37.651  ns/op
TopicHashBenchmark.testLz4StreamingXXHash64                        10                    10                    3  avgt    5   9739.833 ±  188.303  ns/op
TopicHashBenchmark.testLz4StreamingXXHash64                        10                    50                    3  avgt    5  45540.195 ±  455.747  ns/op
TopicHashBenchmark.testLz4StreamingXXHash64                        10                   100                    3  avgt    5  89084.689 ± 2164.862  ns/op
TopicHashBenchmark.testLz4XXHash64                                 10                    10                    3  avgt    5   1755.391 ±    6.436  ns/op
TopicHashBenchmark.testLz4XXHash64                                 10                    50                    3  avgt    5   9421.643 ±   79.838  ns/op
TopicHashBenchmark.testLz4XXHash64                                 10                   100                    3  avgt    5  19461.960 ±  425.881  ns/op
```

JMH benchmarks done
TopicHashBenchmark.java

```java
package org.apache.kafka.jmh.metadata;

import org.apache.kafka.common.Uuid;
import org.apache.kafka.common.metadata.RegisterBrokerRecord;
import org.apache.kafka.common.utils.ByteBufferOutputStream;
import org.apache.kafka.image.ClusterDelta;
import org.apache.kafka.image.ClusterImage;
import org.apache.kafka.image.TopicsDelta;
import org.apache.kafka.image.TopicImage;
import org.apache.kafka.metadata.BrokerRegistration;

import com.dynatrace.hash4j.hashing.HashStream64;
import com.dynatrace.hash4j.hashing.Hashing;
import net.jpountz.xxhash.StreamingXXHash64;
import net.jpountz.xxhash.XXHashFactory;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.TimeUnit;

import static org.apache.kafka.jmh.metadata.TopicsImageSnapshotLoadBenchmark.getInitialTopicsDelta;
import static org.apache.kafka.jmh.metadata.TopicsImageSnapshotLoadBenchmark.getNumBrokers;

@State(Scope.Benchmark)
@Fork(value = 1)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class TopicHashBenchmark {
    @Param({"10", "50", "100"})
    private int partitionsPerTopic;

    @Param({"3"})
    private int replicationFactor;

    @Param({"10"})
    private int numReplicasPerBroker;

    private TopicImage topicImage;
    private ClusterImage clusterImage;

    @Setup(Level.Trial)
    public void setup() throws IOException {
        TopicsDelta topicsDelta = getInitialTopicsDelta(1, partitionsPerTopic, replicationFactor, numReplicasPerBroker);
        int numBrokers = getNumBrokers(1, partitionsPerTopic, replicationFactor, numReplicasPerBroker);
        ClusterDelta clusterDelta = new ClusterDelta(ClusterImage.EMPTY);
        for (int i = 0; i < numBrokers; i++) {
            clusterDelta.replay(new RegisterBrokerRecord()
                .setBrokerId(i)
                .setRack(Uuid.randomUuid().toString())
            );
        }
        topicImage = topicsDelta.apply().topicsById().values().stream().findFirst().get();
        clusterImage = clusterDelta.apply();
    }

    @Benchmark
    public void testLz4StreamingXXHash64() {
        try (StreamingXXHash64 hash = XXHashFactory.fastestInstance().newStreamingHash64(0)) {
            hash.update(new byte[]{(byte) 0}, 0, 1); // magic byte
            // topic id (intToBytes produces a 4-byte array)
            hash.update(intToBytes(topicImage.id().hashCode()), 0, 4);
            // topic name
            byte[] topicNameBytes = topicImage.name().getBytes();
            hash.update(topicNameBytes, 0, topicNameBytes.length);
            // number of partitions
            hash.update(intToBytes(topicImage.partitions().size()), 0, 4);
            for (int i = 0; i < topicImage.partitions().size(); i++) {
                // partition id
                hash.update(intToBytes(i), 0, 4);
                // sorted racks
                List<String> racks = new ArrayList<>();
                for (int replicaId : topicImage.partitions().get(i).replicas) {
                    BrokerRegistration broker = clusterImage.broker(replicaId);
                    if (broker != null) {
                        Optional<String> rackOptional = broker.rack();
                        rackOptional.ifPresent(racks::add);
                    }
                }
                Collections.sort(racks);
                for (String rack : racks) {
                    // Format: "<length><value>"
                    byte[] rackBytes = rack.getBytes();
                    hash.update(intToBytes(rack.length()), 0, 4);
                    hash.update(rackBytes, 0, rackBytes.length);
                }
            }
            hash.getValue();
        }
    }

    @Benchmark
    public void testLz4XXHash64() throws IOException {
        try (ByteBufferOutputStream bbos = new ByteBufferOutputStream(512);
             DataOutputStream dos = new DataOutputStream(bbos)) {
            dos.writeByte(0); // magic byte
            dos.writeLong(topicImage.id().hashCode()); // topic ID
            dos.writeUTF(topicImage.name()); // topic name
            dos.writeInt(topicImage.partitions().size()); // number of partitions
            for (int i = 0; i < topicImage.partitions().size(); i++) {
                dos.writeInt(i); // partition id
                // The rack string combination cannot use a simple separator like ",",
                // because there is no limitation on rack characters. A simple separator
                // may hit edge cases like ",," vs ",,," colliding. Writing the length
                // before the rack string avoids this.
                List<String> racks = new ArrayList<>();
                for (int replicaId : topicImage.partitions().get(i).replicas) {
                    BrokerRegistration broker = clusterImage.broker(replicaId);
                    if (broker != null) {
                        Optional<String> rackOptional = broker.rack();
                        rackOptional.ifPresent(racks::add);
                    }
                }
                Collections.sort(racks);
                for (String rack : racks) {
                    // Format: "<length><value>"
                    dos.writeInt(rack.length());
                    dos.writeUTF(rack);
                }
            }
            dos.flush();
            ByteBuffer topicBytes = bbos.buffer().flip();
            XXHashFactory.fastestInstance().hash64().hash(topicBytes, 0);
        }
    }

    @Benchmark
    public void testDynatraceStreamingXXH3() {
        HashStream64 hash = Hashing.xxh3_64().hashStream();
        hash = hash.putByte((byte) 0)
            .putLong(topicImage.id().hashCode())
            .putString(topicImage.name())
            .putInt(topicImage.partitions().size());
        for (int i = 0; i < topicImage.partitions().size(); i++) {
            // partition id
            hash = hash.putInt(i);
            // sorted racks
            List<String> racks = new ArrayList<>();
            for (int replicaId : topicImage.partitions().get(i).replicas) {
                BrokerRegistration broker = clusterImage.broker(replicaId);
                if (broker != null) {
                    Optional<String> rackOptional = broker.rack();
                    rackOptional.ifPresent(racks::add);
                }
            }
            Collections.sort(racks);
            for (String rack : racks) {
                // Format: "<length><value>"
                hash.putInt(rack.length());
                hash.putString(rack);
            }
        }
        hash.getAsLong();
    }

    @Benchmark
    public void testDynatraceXXH3() throws IOException {
        try (ByteBufferOutputStream bbos = new ByteBufferOutputStream(512);
             DataOutputStream dos = new DataOutputStream(bbos)) {
            dos.writeByte(0); // magic byte
            dos.writeLong(topicImage.id().hashCode()); // topic ID
            dos.writeUTF(topicImage.name()); // topic name
            dos.writeInt(topicImage.partitions().size()); // number of partitions
            for (int i = 0; i < topicImage.partitions().size(); i++) {
                dos.writeInt(i); // partition id
                // Length-prefix each rack string; see comment in testLz4XXHash64.
                List<String> racks = new ArrayList<>();
                for (int replicaId : topicImage.partitions().get(i).replicas) {
                    BrokerRegistration broker = clusterImage.broker(replicaId);
                    if (broker != null) {
                        Optional<String> rackOptional = broker.rack();
                        rackOptional.ifPresent(racks::add);
                    }
                }
                Collections.sort(racks);
                for (String rack : racks) {
                    // Format: "<length><value>"
                    dos.writeInt(rack.length());
                    dos.writeUTF(rack);
                }
            }
            dos.flush();
            ByteBuffer topicBytes = bbos.buffer().flip();
            // Hash only the written bytes, not the buffer's full capacity.
            Hashing.xxh3_64().hashBytesToLong(topicBytes.array(), 0, topicBytes.limit());
        }
    }

    private byte[] intToBytes(int value) {
        return new byte[] {
            (byte) (value >>> 24),
            (byte) (value >>> 16),
            (byte) (value >>> 8),
            (byte) value
        };
    }
}
```
    }
    dos.flush();
    ByteBuffer topicBytes = bbos.buffer().flip();
    return LZ4_HASH_INSTANCE.hash(topicBytes, 0);
There is also a streaming hash class - would that be a better option instead of creating the complete byte buffer?
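The buffering-vs-streaming distinction can be illustrated dependency-free with CRC32 (standing in for XXHash64, which in lz4-java also comes in a StreamingXXHash64 variant, as shown in the benchmark above): both paths produce the same digest, but the streaming path never materializes the full serialized byte array at once.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.zip.CRC32;

public class StreamingVsBufferedSketch {
    // Buffered: serialize all fields into one array, then hash it once.
    static long buffered(byte magic, long topicId, String name, int partitions) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(baos)) {
            dos.writeByte(magic);
            dos.writeLong(topicId);
            dos.writeUTF(name);
            dos.writeInt(partitions);
        }
        CRC32 crc = new CRC32();
        crc.update(baos.toByteArray());
        return crc.getValue();
    }

    // Streaming: push each field into the hasher as it is produced; the
    // full serialized form never exists in memory at one time.
    static long streaming(byte magic, long topicId, String name, int partitions) throws IOException {
        CRC32 crc = new CRC32();
        ByteArrayOutputStream chunk = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(chunk);
        dos.writeByte(magic);
        crc.update(chunk.toByteArray()); chunk.reset();
        dos.writeLong(topicId);
        crc.update(chunk.toByteArray()); chunk.reset();
        dos.writeUTF(name);
        crc.update(chunk.toByteArray()); chunk.reset();
        dos.writeInt(partitions);
        crc.update(chunk.toByteArray());
        return crc.getValue();
    }

    public static void main(String[] args) throws IOException {
        long a = buffered((byte) 0, 42L, "foo", 10);
        long b = streaming((byte) 0, 42L, "foo", 10);
        System.out.println(a == b); // same digest either way
    }
}
```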
- Add guava to dependencies.
- Add computeTopicHash and computeGroupHash functions to the Group class.
- Move Murmur3 to org.apache.kafka.common.internals.