
mohamed-stifi/ML-Model-Evaluation-Statistical-Tests


ML-Model-Evaluation-Statistical-Tests

A Python-based toolkit for evaluating machine learning models with statistical tests. It includes implementations of the McNemar test, the paired (two matched samples) t-test, the Friedman test, and the post-hoc Nemenyi test, along with visualization tools such as boxplots. The toolkit is designed to help researchers and practitioners compare model performance rigorously.

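The repository's own API is not shown here, but the tests it names are also available in standard Python libraries. A minimal sketch using scipy and statsmodels (not this repository's modules) illustrates the kinds of comparisons the toolkit covers:

```python
# Sketch of the model comparisons listed above, using scipy/statsmodels
# equivalents rather than this repository's own API.
import numpy as np
from scipy.stats import ttest_rel, friedmanchisquare
from statsmodels.stats.contingency_tables import mcnemar

# McNemar test: 2x2 contingency table of two classifiers' per-sample
# correctness (hypothetical counts for illustration).
table = [[30, 5],   # both correct / only model A correct
         [10, 55]]  # only model B correct / both wrong
res = mcnemar(table, exact=True)

# Paired (two matched samples) t-test on per-fold accuracies.
acc_a = np.array([0.91, 0.88, 0.90, 0.93, 0.89])
acc_b = np.array([0.87, 0.85, 0.88, 0.90, 0.86])
t, p = ttest_rel(acc_a, acc_b)

# Friedman test for three or more models evaluated on the same folds.
acc_c = np.array([0.80, 0.82, 0.79, 0.84, 0.81])
stat, p_f = friedmanchisquare(acc_a, acc_b, acc_c)
```

If the Friedman test rejects, a post-hoc Nemenyi test is commonly obtained from the third-party scikit-posthocs package (`posthoc_nemenyi_friedman`); boxplots of the per-fold scores can be drawn with matplotlib.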
