According to Shannon, the information content of a message is a function of how surprised we are by it. The less probable a message is, the more information it contains:

$$ H = -\log_2 p $$
where $H$ is the information content, also called entropy, and $p$ is the probability of occurrence of the message.
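For example, a message with probability $p = \tfrac{1}{2}$, such as the outcome of a fair coin toss, carries $-\log_2 \tfrac{1}{2} = 1$ bit of information.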
The mean information content of an ensemble of messages is obtained by weighting each message by its probability of occurrence:

$$ H = -\sum_i p_i \log_2 p_i $$
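As a sketch of how these two formulas translate into code, the snippet below computes the information content of a single message and the mean information content of an ensemble. The function names self_information and entropy are illustrative choices, not taken from the text:

```python
import math

def self_information(p: float) -> float:
    """Information content, in bits, of a message with probability p."""
    return -math.log2(p)

def entropy(probabilities: list[float]) -> float:
    """Mean information content (Shannon entropy, in bits) of an ensemble.

    Messages with p == 0 contribute nothing, by the usual
    convention that 0 * log2(0) = 0.
    """
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(self_information(0.5))   # 1.0 bit: a fair coin toss
print(entropy([0.5, 0.5]))     # 1.0 bit on average
print(entropy([0.9, 0.1]))     # ~0.469 bits: a biased coin is less surprising
print(entropy([1.0]))          # -0.0, i.e. a certain message carries no information
```

Note how the biased ensemble carries less mean information than the fair one: the frequent message is rarely surprising, and the rare, highly informative message seldom occurs.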