Within each round folder, you will find `detector`, `monitor`, and `replayer` folders containing all the code needed to create a streaming knowledge graph environment.
Inside the `replayer` folder, the functionality of `replay.py` can be customized to suit your requirements. You can:
* Load different data segments based on regions where anomalies occur.
* Adjust the time between knowledge graph events to expedite local evaluations.
Please note that the challenge organizers will use the default replayer functionality to score the different systems.
Two Docker Compose YAML files are also provided:
* A full-application file that builds and runs the detector, monitor, replayer, and Kafka together.
* A simpler Docker Compose file that runs only Kafka, for local development. With it running, participants can start the replayer, monitor, and detector as individual Python processes and debug them outside the Docker environment.
Participants can expand upon or create a new detector using the code within the detector folder. Participants are permitted to use languages other than Python for their detectors. For additional assistance, please contact the challenge organizers through Slack.
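To give a feel for what a detector's scoring core could look like, here is a minimal sliding-window z-score anomaly detector. The Kafka consumer/producer wiring is deliberately omitted, and the class name, window size, and threshold are illustrative choices, not part of the provided detector code.

```python
from collections import deque
import statistics

class ZScoreDetector:
    """Flag a value as anomalous when it lies more than `threshold`
    standard deviations from the mean of the recent window."""

    def __init__(self, window=10, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        is_anomaly = False
        if len(self.window) >= 2:
            mean = statistics.fmean(self.window)
            stdev = statistics.pstdev(self.window)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                is_anomaly = True
        self.window.append(value)  # update the window after scoring
        return is_anomaly

detector = ZScoreDetector(window=10, threshold=3.0)
stream = [9.0, 10.0, 11.0, 10.0, 9.0, 11.0, 10.0, 100.0]
flags = [detector.update(v) for v in stream]  # only the spike is flagged
```

In a real detector this `update` call would sit inside a loop consuming knowledge graph events from Kafka and publishing any flagged anomalies back to the monitor's topic.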
Evaluation will be performed against a new test set without labels. Precision and recall metrics and scores will be made available on a public leaderboard.