When talking about confusion matrices you should add Cohen's kappa. You can refer to this paper: https://www.nature.com/articles/s41467-020-20206-z.pdf, which uses it as a measure to check that two coders classified search queries the same way. The difference between Cohen's kappa and plain accuracy is that Cohen's kappa is a quantitative measure of reliability for two raters rating the same thing, corrected for how often the raters may agree by chance. See how they use it here: https://github.com/stefanjwojcik/ms_flu/blob/master/Survey%20and%20MRP/Main_Replication_Code.R#L98 and an explanation here: https://towardsdatascience.com/cohens-kappa-9786ceceab58
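
For reference, here is a minimal sketch of how kappa differs from plain accuracy (observed agreement corrected for chance agreement). The two coder label lists are made up purely for illustration, and scikit-learn's `cohen_kappa_score` is used only as an optional cross-check:

```python
# Minimal sketch: Cohen's kappa for two raters labelling the same items.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Observed agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically
    # (this is what "accuracy" between the two raters would report).
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement: product of each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))
    # Kappa rescales observed agreement by how much agreement is possible beyond chance.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten search queries by two coders (illustrative only).
coder_1 = ["flu", "flu", "other", "flu", "other", "flu", "other", "other", "flu", "flu"]
coder_2 = ["flu", "other", "other", "flu", "other", "flu", "other", "flu", "flu", "flu"]

print("Raw agreement (accuracy):", sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1))
print("Cohen's kappa:", cohens_kappa(coder_1, coder_2))

# Optional cross-check against scikit-learn, if installed.
try:
    from sklearn.metrics import cohen_kappa_score
    print("sklearn cross-check:", cohen_kappa_score(coder_1, coder_2))
except ImportError:
    pass
```

In this toy example the raw agreement is 0.80 but kappa is only about 0.58, because both coders use the "flu" label often enough that a fair amount of their agreement would be expected by chance alone.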