---
title: "Philentropy: Information Theory and Distance Quantification with R"
tags:
  - R
  - information theory
  - distance metrics
  - probability functions
  - divergence quantification
  - jensen-shannon divergence
authors:
  - name: Hajk-Georg Drost
    orcid: 0000-0002-1567-306X
    affiliation: 1
affiliations:
  - name: The Sainsbury Laboratory, University of Cambridge, Bateman Street, Cambridge CB2 1LR, UK
    index: 1
date: 22 May 2018
bibliography: paper.bib
---

# Summary

Comparison is a fundamental method of scientific research, leading to insights about the processes that generate similarity or dissimilarity. In statistical terms, comparisons between probability functions are performed to infer connections, correlations, or relationships between objects or samples [@Cha2007]. Most quantification methods rely on distance or similarity measures, but the right choice for an individual application is not always clear and is sometimes poorly explored. One reason is that the diverse measures are either implemented in different R packages with very different notations or are not implemented at all. A comprehensive framework implementing the most common similarity and distance measures in a uniform notation has therefore been missing. The R [@R2018] package Philentropy aims to fill this gap by implementing forty-six fundamental distance and similarity measures [@Cha2007] for comparing probability functions. Such comparisons between probability functions have their foundations in a broad range of scientific disciplines, from mathematics to ecology. The aim of this package is to provide a comprehensive and computationally optimized base framework for clustering, classification, statistical inference, goodness-of-fit, non-parametric statistics, information theory, and machine learning tasks that are based on comparing univariate or multivariate probability functions. All functions are written in C++ and are integrated into the R package using the Rcpp Application Programming Interface (API) [@Eddelbuettel2013].

Together, this framework allows new similarity- or distance-based (statistical) models and algorithms to be built in R in a computationally efficient and scalable manner. The comprehensive availability of diverse metrics and measures furthermore enables a systematic assessment of which similarity or distance measure is most appropriate for an individual application in a given scientific discipline.
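As a minimal sketch of the interface (assuming the CRAN release of Philentropy; the probability vectors `P` and `Q` below are illustrative values only), the `distance()` function takes a matrix or data frame whose rows are the probability vectors to compare and a `method` argument naming the measure:

```r
# install.packages("philentropy")
library(philentropy)

# two illustrative probability vectors
P <- c(0.1, 0.2, 0.3, 0.4)
Q <- c(0.4, 0.3, 0.2, 0.1)

# rows of the input matrix are treated as the probability vectors to compare
x <- rbind(P, Q)

# Euclidean distance between P and Q
distance(x, method = "euclidean")

# Jensen-Shannon divergence between P and Q
distance(x, method = "jensen-shannon")
```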

The following probability distance/similarity and information theory measures are implemented in Philentropy.

## Distance and Similarity Measures

### $L_p$ Minkowski Family

- Euclidean : $d = \sqrt{\sum_{i=1}^N |P_i - Q_i|^2}$
- Manhattan : $d = \sum_{i=1}^N |P_i - Q_i|$
- Minkowski : $d = \left( \sum_{i=1}^N |P_i - Q_i|^p \right)^{1/p}$
- Chebyshev : $d = \max_i |P_i - Q_i|$
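As an illustration of this family, the sketch below (assuming the `p` argument of `distance()` selects the Minkowski power, as documented for the CRAN release, and using made-up vectors) compares the package result for $p = 3$ against the Minkowski definition given above:

```r
library(philentropy)

P <- c(0.1, 0.2, 0.3, 0.4)  # illustrative probability vectors
Q <- c(0.4, 0.3, 0.2, 0.1)

# Minkowski distance with p = 3 computed by Philentropy
distance(rbind(P, Q), method = "minkowski", p = 3)

# the same quantity evaluated directly from the definition above
sum(abs(P - Q)^3)^(1 / 3)
```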

### $L_1$ Family

- Sorensen : $d = \frac{\sum_{i=1}^N |P_i - Q_i|}{\sum_{i=1}^N (P_i + Q_i)}$
- Gower : $d = \frac{1}{N} \cdot \sum_{i=1}^N |P_i - Q_i|$, where $N$ is the total number of elements $i$ in $P_i$ and $Q_i$
- Soergel : $d = \frac{\sum_{i=1}^N |P_i - Q_i|}{\sum_{i=1}^N \max(P_i, Q_i)}$
- Kulczynski d : $d = \frac{\sum_{i=1}^N |P_i - Q_i|}{\sum_{i=1}^N \min(P_i, Q_i)}$
- Canberra : $d = \sum_{i=1}^N \frac{|P_i - Q_i|}{P_i + Q_i}$
- Lorentzian : $d = \sum_{i=1}^N \ln(1 + |P_i - Q_i|)$

### Intersection Family

- Intersection : $s = \sum_{i=1}^N \min(P_i, Q_i)$
- Non-Intersection : $d = 1 - \sum_{i=1}^N \min(P_i, Q_i)$
- Wave Hedges : $d = \sum_{i=1}^N \frac{|P_i - Q_i|}{\max(P_i, Q_i)}$
- Czekanowski : $d = \frac{\sum_{i=1}^N |P_i - Q_i|}{\sum_{i=1}^N |P_i + Q_i|}$
- Motyka : $d = \frac{\sum_{i=1}^N \min(P_i, Q_i)}{\sum_{i=1}^N (P_i + Q_i)}$
- Kulczynski s : $d = \frac{\sum_{i=1}^N \min(P_i, Q_i)}{\sum_{i=1}^N |P_i - Q_i|}$
- Tanimoto : $d = \frac{\sum_{i=1}^N \left( \max(P_i, Q_i) - \min(P_i, Q_i) \right)}{\sum_{i=1}^N \max(P_i, Q_i)}$ ; equivalent to Soergel
- Ruzicka : $s = \frac{\sum_{i=1}^N \min(P_i, Q_i)}{\sum_{i=1}^N \max(P_i, Q_i)}$ ; equivalent to 1 - Tanimoto = 1 - Soergel

### Inner Product Family

- Inner Product : $s = \sum_{i=1}^N P_i \cdot Q_i$
- Harmonic mean : $s = 2 \cdot \sum_{i=1}^N \frac{P_i \cdot Q_i}{P_i + Q_i}$
- Cosine : $s = \frac{\sum_{i=1}^N P_i \cdot Q_i}{\sqrt{\sum_{i=1}^N P_i^2} \cdot \sqrt{\sum_{i=1}^N Q_i^2}}$
- Kumar-Hassebrook (PCE) : $s = \frac{\sum_{i=1}^N (P_i \cdot Q_i)}{\sum_{i=1}^N P_i^2 + \sum_{i=1}^N Q_i^2 - \sum_{i=1}^N (P_i \cdot Q_i)}$
- Jaccard : $d = 1 - \frac{\sum_{i=1}^N P_i \cdot Q_i}{\sum_{i=1}^N P_i^2 + \sum_{i=1}^N Q_i^2 - \sum_{i=1}^N P_i \cdot Q_i}$ ; equivalent to 1 - Kumar-Hassebrook
- Dice : $d = \frac{\sum_{i=1}^N (P_i - Q_i)^2}{\sum_{i=1}^N P_i^2 + \sum_{i=1}^N Q_i^2}$

### Squared-chord Family

- Fidelity : $s = \sum_{i=1}^N \sqrt{P_i \cdot Q_i}$
- Bhattacharyya : $d = -\ln \sum_{i=1}^N \sqrt{P_i \cdot Q_i}$
- Hellinger : $d = 2 \cdot \sqrt{1 - \sum_{i=1}^N \sqrt{P_i \cdot Q_i}}$
- Matusita : $d = \sqrt{2 - 2 \cdot \sum_{i=1}^N \sqrt{P_i \cdot Q_i}}$
- Squared-chord : $d = \sum_{i=1}^N \left( \sqrt{P_i} - \sqrt{Q_i} \right)^2$

### Squared $L_2$ Family ($X^2$ Squared Family)

- Squared Euclidean : $d = \sum_{i=1}^N (P_i - Q_i)^2$
- Pearson $X^2$ : $d = \sum_{i=1}^N \frac{(P_i - Q_i)^2}{Q_i}$
- Neyman $X^2$ : $d = \sum_{i=1}^N \frac{(P_i - Q_i)^2}{P_i}$
- Squared $X^2$ : $d = \sum_{i=1}^N \frac{(P_i - Q_i)^2}{P_i + Q_i}$
- Probabilistic Symmetric $X^2$ : $d = 2 \cdot \sum_{i=1}^N \frac{(P_i - Q_i)^2}{P_i + Q_i}$
- Divergence : $d = 2 \cdot \sum_{i=1}^N \frac{(P_i - Q_i)^2}{(P_i + Q_i)^2}$
- Clark : $d = \sqrt{\sum_{i=1}^N \left( \frac{|P_i - Q_i|}{P_i + Q_i} \right)^2}$
- Additive Symmetric $X^2$ : $d = \sum_{i=1}^N \frac{(P_i - Q_i)^2 \cdot (P_i + Q_i)}{P_i \cdot Q_i}$

### Shannon's Entropy Family

- Kullback-Leibler : $d = \sum_{i=1}^N P_i \cdot \log\left( \frac{P_i}{Q_i} \right)$
- Jeffreys : $d = \sum_{i=1}^N (P_i - Q_i) \cdot \log\left( \frac{P_i}{Q_i} \right)$
- K divergence : $d = \sum_{i=1}^N P_i \cdot \log\left( \frac{2 \cdot P_i}{P_i + Q_i} \right)$
- Topsoe : $d = \sum_{i=1}^N \left( P_i \cdot \log\left( \frac{2 \cdot P_i}{P_i + Q_i} \right) + Q_i \cdot \log\left( \frac{2 \cdot Q_i}{P_i + Q_i} \right) \right)$
- Jensen-Shannon : $d = 0.5 \cdot \left( \sum_{i=1}^N P_i \cdot \log\left( \frac{2 \cdot P_i}{P_i + Q_i} \right) + \sum_{i=1}^N Q_i \cdot \log\left( \frac{2 \cdot Q_i}{P_i + Q_i} \right) \right)$
- Jensen difference : $d = \sum_{i=1}^N \left( \frac{P_i \cdot \log(P_i) + Q_i \cdot \log(Q_i)}{2} - \frac{P_i + Q_i}{2} \cdot \log\left( \frac{P_i + Q_i}{2} \right) \right)$

### Combinations

- Taneja : $d = \sum_{i=1}^N \frac{P_i + Q_i}{2} \cdot \log\left( \frac{P_i + Q_i}{2 \cdot \sqrt{P_i \cdot Q_i}} \right)$
- Kumar-Johnson : $d = \sum_{i=1}^N \frac{(P_i^2 - Q_i^2)^2}{2 \cdot (P_i \cdot Q_i)^{3/2}}$
- Avg($L_1$, $L_\infty$) : $d = \frac{\sum_{i=1}^N |P_i - Q_i| + \max_i |P_i - Q_i|}{2}$

Note: $d$ refers to distance measures, whereas $s$ denotes similarity measures.
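Because all measures share a single interface, candidate measures can be screened systematically for a given application. Below is a sketch, assuming `getDistMethods()` returns the names of all implemented measures (as documented for the CRAN release); `p = 2` is supplied for the Minkowski case and failures are caught defensively:

```r
library(philentropy)

P <- c(0.1, 0.2, 0.3, 0.4)  # illustrative probability vectors
Q <- c(0.4, 0.3, 0.2, 0.1)
x <- rbind(P, Q)

# names of all implemented distance and similarity measures
methods <- getDistMethods()

# compute every available measure for the same pair of vectors
res <- sapply(methods, function(m)
  tryCatch(distance(x, method = m, p = 2),
           error = function(e) NA_real_))
res
```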

## Information Theory Measures

- Shannon's Entropy $H(X)$ : $H(X) = -\sum_{i=1}^n P(x_i) \cdot \log_b(P(x_i))$
- Shannon's Joint-Entropy $H(X,Y)$ : $H(X,Y) = -\sum_{i=1}^n \sum_{j=1}^m P(x_i, y_j) \cdot \log_b(P(x_i, y_j))$
- Shannon's Conditional-Entropy $H(Y \mid X)$ : $H(Y \mid X) = \sum_{i=1}^n \sum_{j=1}^m P(x_i, y_j) \cdot \log_b\left( \frac{P(x_i)}{P(x_i, y_j)} \right)$
- Mutual Information $I(X,Y)$ : $MI(X,Y) = \sum_{i=1}^n \sum_{j=1}^m P(x_i, y_j) \cdot \log_b\left( \frac{P(x_i, y_j)}{P(x_i) \cdot P(y_j)} \right)$
- Kullback-Leibler Divergence : $KL(P || Q) = \sum_{i=1}^n P(p_i) \cdot \log_2\left( \frac{P(p_i)}{P(q_i)} \right) = H(P, Q) - H(P)$
- Jensen-Shannon Divergence : $JSD(P || Q) = 0.5 \cdot \left( KL(P || R) + KL(Q || R) \right)$, where $R = 0.5 \cdot (P + Q)$
- Generalized Jensen-Shannon Divergence : $gJSD_{\pi_1, ..., \pi_n}(P_1, ..., P_n) = H\left( \sum_{i=1}^n \pi_i \cdot P_i \right) - \sum_{i=1}^n \pi_i \cdot H(P_i)$
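A few of these quantities as exposed by the package, in a sketch assuming the `H()`, `KL()`, `JSD()`, and `gJSD()` functions of the CRAN release (the vectors are illustrative):

```r
library(philentropy)

P <- c(0.1, 0.2, 0.3, 0.4)  # illustrative probability vectors
Q <- c(0.4, 0.3, 0.2, 0.1)

# Shannon entropy of P
H(P)

# Kullback-Leibler divergence KL(P || Q); rows of the input are P and Q
KL(rbind(P, Q))

# Jensen-Shannon divergence JSD(P || Q)
JSD(rbind(P, Q))

# generalized Jensen-Shannon divergence over more than two distributions
R <- c(0.25, 0.25, 0.25, 0.25)
gJSD(rbind(P, Q, R))
```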

Philentropy has already enabled robust comparisons of similarity measures in analogy-based software effort estimation [@Phannachitta2017] as well as in evolutionary transcriptomics applications [@Drost2018]. The package aims to assist efforts to determine optimal similarity or distance measures when developing new (statistical) models or algorithms. In addition, Philentropy is designed to handle large-scale datasets that were previously inaccessible with other R packages. The software is open source and currently available on GitHub (https://github.com/HajkD/philentropy) and CRAN (https://cran.r-project.org/web/packages/philentropy/index.html). A comprehensive documentation of Philentropy can be found at https://hajkd.github.io/philentropy/.
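For larger inputs, passing a matrix with many rows to `distance()` returns the full pairwise distance matrix in a single call; the sketch below uses simulated data (random vectors normalized to probabilities, purely for illustration) to outline such a workflow:

```r
library(philentropy)

set.seed(2018)

# 100 simulated probability vectors of length 1,000
mat <- matrix(runif(100 * 1000), nrow = 100)
mat <- mat / rowSums(mat)

# 100 x 100 matrix of pairwise Jensen-Shannon divergences
jsd_mat <- distance(mat, method = "jensen-shannon")
dim(jsd_mat)
```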

# Acknowledgements

I would like to thank Jerzy Paszkowski for providing me with an inspiring scientific environment and for supporting my projects.

# References