Min-entropy

In probability theory and information theory, the min-entropy of a discrete random variable X with possible states (or outcomes) x_1, ..., x_n and corresponding probabilities p_1, ..., p_n is

H_\infty(X) = \min_{i=1}^n (-\log p_i) = -(\max_i \log p_i) = -\log \max_i p_i

The base of the logarithm is just a scaling constant; for a result in bits, use a base-2 logarithm. Thus, a distribution has a min-entropy of at least b bits if no possible state has a probability greater than 2^{-b}.
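As a concrete illustration, here is a minimal Python sketch (the function name min_entropy and the example distribution are our own, not from the source) that computes the min-entropy of a probability vector in bits:

import math

def min_entropy(probs):
    # Min-entropy in bits: -log2 of the largest probability.
    return -math.log2(max(probs))

# A biased coin with P(heads) = 0.75 has
# H_inf = -log2(0.75) ~ 0.415 bits, i.e. no outcome is more
# likely than 2^(-0.415) = 0.75.
print(min_entropy([0.75, 0.25]))  # 0.4150374992788437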

The min-entropy is always less than or equal to the Shannon entropy; the two are equal when all the probabilities p_i are equal. Min-entropy is important in the theory of randomness extractors.
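To see the inequality concretely, one can compare the two quantities on a skewed and on a uniform distribution (a small self-contained sketch; shannon_entropy is our own helper):

import math

def shannon_entropy(probs):
    # Shannon entropy in bits: -sum of p_i * log2(p_i), skipping zero entries.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    # Min-entropy in bits: -log2 of the largest probability.
    return -math.log2(max(probs))

skewed = [0.5, 0.25, 0.25]
uniform = [0.25] * 4

print(shannon_entropy(skewed), min_entropy(skewed))    # 1.5  1.0
print(shannon_entropy(uniform), min_entropy(uniform))  # 2.0  2.0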

The notation H_\infty(X) derives from a parameterized family of Shannon-like entropy measures, the Rényi entropy:

H_k(X) = -\log \sqrt[k-1]{\sum_i (p_i)^k} = \frac{1}{1-k} \log \sum_i (p_i)^k

The case k = 1 is the Shannon entropy (obtained as the limit k → 1, since the formula above is undefined at k = 1). As k is increased, more weight is given to the larger probabilities, and in the limit as k → ∞, only the largest p_i has any effect on the result, which recovers the min-entropy.
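A short numerical sketch (renyi_entropy is our own helper, valid for k ≠ 1, and the example distribution is ours) shows the convergence toward the min-entropy as k grows:

import math

def renyi_entropy(probs, k):
    # Rényi entropy of order k in bits, for k != 1:
    # H_k = (1 / (1 - k)) * log2(sum of p_i^k).
    return math.log2(sum(p ** k for p in probs)) / (1 - k)

probs = [0.5, 0.3, 0.2]
for k in (2, 5, 20, 100):
    print(k, renyi_entropy(probs, k))
# The printed values decrease toward -log2(max(probs)) = 1.0 bit,
# the min-entropy of this distribution.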

This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Min-entropy". A list of authors is available in Wikipedia.