Relative performance of mutual information estimation methods for quantifying the dependence among short and noisy data

Shiraj Khan, Sharba Bandyopadhyay, Auroop R. Ganguly, Sunil Saigal, David J. Erickson, III, Vladimir Protopopescu, and George Ostrouchov
Phys. Rev. E 76, 026209 – Published 14 August 2007

Abstract

Commonly used dependence measures, such as linear correlation, cross-correlogram, or Kendall’s τ, cannot capture the complete dependence structure in data unless the structure is restricted to linear, periodic, or monotonic. Mutual information (MI) has been frequently utilized for capturing the complete dependence structure, including nonlinear dependence. Recently, several methods have been proposed for MI estimation, such as kernel density estimators (KDEs), k-nearest neighbors (KNNs), the Edgeworth approximation of differential entropy, and adaptive partitioning of the XY plane. However, outstanding gaps in the current literature have precluded the ability to effectively automate these methods, which, in turn, has limited their adoption by the application communities. This study attempts to address a key gap in the literature: the evaluation of the above methods in order to choose the best method, particularly in terms of robustness for short and noisy data, based on comparisons with theoretical MI values, which can be computed analytically, as well as with linear correlation and Kendall’s τ. Here we consider smaller data sizes, such as 50, 100, and 1000; within this study we characterize 50 and 100 data points as very short and 1000 as short. We consider a broad class of functions, specifically linear, quadratic, periodic, and chaotic, contaminated with artificial noise at varying noise-to-signal ratios. Our results indicate that KDEs are the best choice for very short data at relatively high noise-to-signal levels, whereas KNNs perform best for very short data at relatively low noise levels as well as for short data consistently across noise levels. In addition, the optimal smoothing parameter of a Gaussian kernel appears to be the best choice for KDEs, while three nearest neighbors appear optimal for KNNs. Thus, in situations where the approximate data sizes are known in advance and exploratory data analysis and/or domain knowledge can provide a priori insight into the noise-to-signal ratios, the results in the paper point to a way forward for automating the process of MI estimation.
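For readers who want to reproduce the kind of comparison described above, the following is a minimal sketch, not the authors' implementation. It estimates MI, I(X;Y) = ∬ p(x,y) log[p(x,y) / (p(x) p(y))] dx dy, for a "very short" sample of n = 100 points from a correlated bivariate Gaussian, using a k-nearest-neighbor estimator with k = 3 (scikit-learn's mutual_info_regression, a KSG-style KNN estimator), and compares it against the analytic MI of the Gaussian pair, -½ ln(1 − ρ²), alongside linear correlation and Kendall's τ. The sample size, correlation ρ, and random seed are illustrative choices, not values taken from the paper.

```python
# Minimal sketch: KNN-based MI estimate (k = 3) vs. the analytic MI of a
# correlated Gaussian pair, with Pearson r and Kendall's tau for contrast.
# Not the authors' code; n, rho, and the seed are arbitrary illustrations.
import numpy as np
from scipy import stats
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n, rho = 100, 0.8                      # "very short" sample, known dependence

# Correlated Gaussian pair: analytic MI = -0.5 * ln(1 - rho^2) nats.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

mi_true = -0.5 * np.log(1 - rho**2)
mi_knn = mutual_info_regression(x.reshape(-1, 1), y,
                                n_neighbors=3, random_state=0)[0]
r, _ = stats.pearsonr(x, y)
tau, _ = stats.kendalltau(x, y)

print(f"analytic MI  : {mi_true:.3f} nats")
print(f"KNN (k=3) MI : {mi_knn:.3f} nats")
print(f"Pearson r    : {r:.3f}, Kendall tau: {tau:.3f}")
```

Repeating this over many noise realizations and noise-to-signal ratios, and swapping in a KDE-based estimator, would mirror the robustness comparison the paper performs.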

Received 6 February 2007

DOI: https://doi.org/10.1103/PhysRevE.76.026209

©2007 American Physical Society

Authors & Affiliations

Shiraj Khan (1,2), Sharba Bandyopadhyay (3), Auroop R. Ganguly (1,*), Sunil Saigal (2), David J. Erickson, III (4), Vladimir Protopopescu (1), and George Ostrouchov (4)

  • (1) Computational Sciences and Engineering, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
  • (2) Civil and Environmental Engineering, University of South Florida, Tampa, Florida 33620, USA
  • (3) Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21218, USA
  • (4) Computer Science and Mathematics, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA

  • (*) Corresponding author: gangulyar@ornl.gov

Article Text (Subscription Required)

References (Subscription Required)

Issue

Vol. 76, Iss. 2 — August 2007
