# Created by Octave 3.2.4, Tue Nov 23 12:52:16 2010 EST <mockbuild@jetta.math.Princeton.EDU.private>
# name: cache
# type: cell
# rows: 3
# columns: 27
# name: <cell-element>
# type: string
# elements: 1
# length: 17
arithmetic_decode
# name: <cell-element>
# type: string
# elements: 1
# length: 637
 -- Function File:  arithmetic_decode (TAG_MESSAGE,
          SYMBOL_PROBABILITES_LIST)
     Computes the message from the given arithmetic code tag and the
     symbol probabilities. The arithmetic decoding procedure assumes
     that the encoded message is a list of numbers and that the symbol
     probabilities correspond to the symbol indices. The decoded
     message is returned. For example

                   symbols=[1,2,3,4]; sym_prob=[0.5 0.25 0.15 0.10];
                   message=[1, 1, 2, 3, 4];
                   arithmetic_encode(message,sym_prob) ans=0.18078
                   arithmetic_decode(0.18078,sym_prob) ans=[1 1 2 3 4];

   See also: arithmetic_encode


# name: <cell-element>
# type: string
# elements: 1
# length: 75
Computes the message from the given arithmetic code tag and the symbol
probabilities.

# name: <cell-element>
# type: string
# elements: 1
# length: 17
arithmetic_encode
# name: <cell-element>
# type: string
# elements: 1
# length: 540
 -- Function File:  arithmetic_encode (MESSAGE,
          SYMBOL_PROBABILITES_LIST)
     Computes the arithmetic code for the message, given the symbol
     probabilities. The arithmetic coding procedure assumes that
     MESSAGE is a list of numbers and that the symbol probabilities
     correspond to the symbol indices. For example

                   symbols=[1,2,3,4]; sym_prob=[0.5 0.25 0.15 0.10];
                   message=[1, 1, 2, 3, 4];
                   arithmetic_encode(message,sym_prob) ans=0.18078

   See also: arithmetic_decode


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Computes the arithmetic code for the message, given the symbol
probabilities.

# name: <cell-element>
# type: string
# elements: 1
# length: 10
bscchannel
# name: <cell-element>
# type: string
# elements: 1
# length: 156
 -- Function File:  bscchannel (P)
     Returns the transition matrix for a Binary Symmetric Channel with
     error probability, P.
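     The same 2x2 matrix can be cross-checked outside Octave; a
     minimal Python sketch (the name `bsc_channel` is illustrative,
     not part of this package):

```python
def bsc_channel(p):
    # Rows are channel inputs, columns are outputs; the
    # off-diagonal entries carry the crossover probability p.
    return [[1 - p, p],
            [p, 1 - p]]
```

     Each row sums to 1, as required of a transition matrix.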

   See also: entropy


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Returns the transition matrix for a Binary Symmetric Channel with error
probabil

# name: <cell-element>
# type: string
# elements: 1
# length: 12
condentr_seq
# name: <cell-element>
# type: string
# elements: 1
# length: 307
 -- Function File:  condentr_seq (SEQ_X, SEQ_Y)
     Calculates the information entropy of the sequence x conditional
     on the sequence y:      H(X|Y) = H(X,Y) - H(Y)
                   X=[1, 1, 2, 1, 1];
                   Y=[2, 2, 1, 1, 2];
                   condentr_seq(X,Y)
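     The identity H(X|Y) = H(X,Y) - H(Y), estimated from empirical
     counts, can be sketched in Python (illustrative names, not part
     of this package):

```python
from collections import Counter
from math import log2

def cond_entropy_seq(x, y):
    # H(X|Y) = H(X,Y) - H(Y), with both entropies estimated
    # from the empirical distributions of the two sequences.
    n = len(x)
    def h(counts):
        return -sum(c / n * log2(c / n) for c in counts.values())
    return h(Counter(zip(x, y))) - h(Counter(y))
```

     For the X and Y above, this yields 0.4 bits.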

   See also: infoentr_seq


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Calculates information entropy of the sequence x conditional on the
sequence y: 

# name: <cell-element>
# type: string
# elements: 1
# length: 21
conditionalentropy_XY
# name: <cell-element>
# type: string
# elements: 1
# length: 356
 -- Function File:  conditionalentropy_XY (XY)
     Computes H(X|Y) = SUM( P(Yi)*H(X|Yi) ), where H(X|Yi) = SUM(
     -P(Xk|Yi)*log(P(Xk|Yi)) ) and P(Xk|Yi) = P(Xk,Yi)/P(Yi).  The
     matrix XY must have Y along rows and X along columns, so that
     P(Xi) = SUM( COLi ) and P(Yi) = SUM( ROWi ).  Equivalently,
     H(X|Y) = H(X,Y) - H(Y).
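     The definition via H(X|Y) = H(X,Y) - H(Y) over a joint matrix can
     be sketched in Python (names illustrative, not part of this
     package):

```python
from math import log2

def cond_entropy_x_given_y(xy):
    # xy: joint probability matrix with Y along rows and X
    # along columns, entries summing to 1.
    # H(X|Y) = H(X,Y) - H(Y); H(Y) uses the row sums.
    h_xy = -sum(p * log2(p) for row in xy for p in row if p > 0)
    h_y = -sum(s * log2(s) for s in (sum(row) for row in xy) if s > 0)
    return h_xy - h_y
```

     An independent uniform joint matrix gives H(X|Y) = H(X), and a
     deterministic one gives 0.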

     See also: entropy, conditionalentropy



# name: <cell-element>
# type: string
# elements: 1
# length: 80
Computes the H(X/Y) = SUM( P(Yi)*H(X/Yi) ) , where H(X/Yi) = SUM(
-P(Xk/Yi)log(P

# name: <cell-element>
# type: string
# elements: 1
# length: 21
conditionalentropy_YX
# name: <cell-element>
# type: string
# elements: 1
# length: 318
 -- Function File:  conditionalentropy_YX (XY)
     Computes H(Y|X) = SUM( P(Xi)*H(Y|Xi) ), where H(Y|Xi) = SUM(
     -P(Yk|Xi)*log(P(Yk|Xi)) ).  The matrix XY must have Y along rows
     and X along columns, so that P(Xi) = SUM( COLi ) and P(Yi) =
     SUM( ROWi ).  Equivalently, H(Y|X) = H(X,Y) - H(X).

   See also: entropy, conditionalentropy_XY


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Computes the H(Y/X) = SUM( P(Xi)*H(Y/Xi) ), where H(Y/Xi) = SUM(
-P(Yk/Xi)log(P(

# name: <cell-element>
# type: string
# elements: 1
# length: 7
entropy
# name: <cell-element>
# type: string
# elements: 1
# length: 492
 -- Function File:  entropy (SYMBOL_PROBABILITES, BASE)
     Computes the Shannon entropy of a discrete source whose
     probabilities are given by SYMBOL_PROBABILITIES; optionally the
     logarithm BASE can be specified. The base defaults to 2, in which
     case the entropy can be thought of as the number of bits needed
     to represent any message of the source. For example

                   entropy([0.25 0.25 0.25 0.25]) => ans = 2
                   entropy([0.25 0.25 0.25 0.25],4) => ans = 1
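     The same two results can be reproduced in a short Python sketch
     (the name `shannon_entropy` is illustrative, not part of this
     package):

```python
from math import log

def shannon_entropy(probs, base=2):
    # H = -sum p_i * log_base(p_i); zero-probability symbols
    # contribute nothing and are skipped.
    return -sum(p * log(p, base) for p in probs if p > 0)
```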


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Computes the Shannon entropy of a discrete source whose probabilities
are by SYM

# name: <cell-element>
# type: string
# elements: 1
# length: 7
graydec
# name: <cell-element>
# type: string
# elements: 1
# length: 121
 -- Function File:  graydec (P)
     Decodes the binary gray code P to the original binary code.


   See also: grayenc


# name: <cell-element>
# type: string
# elements: 1
# length: 59
Decodes the binary gray code P to the original binary code.

# name: <cell-element>
# type: string
# elements: 1
# length: 7
grayenc
# name: <cell-element>
# type: string
# elements: 1
# length: 185
 -- Function File:  grayenc (P)
     Encodes the binary code P to the gray code. Also P can be a
     decimal number which is automatically converted to binary.
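     The standard binary-reflected Gray code pair of operations can be
     sketched in Python (illustrative names; this package works on bit
     vectors rather than integers):

```python
def gray_encode(n):
    # Binary-reflected Gray code of a non-negative integer.
    return n ^ (n >> 1)

def gray_decode(g):
    # Invert by XOR-ing together all right shifts of g.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

     Consecutive code words differ in exactly one bit, which is the
     defining property of the Gray code.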


   See also: graydec


# name: <cell-element>
# type: string
# elements: 1
# length: 43
Encodes the binary code P to the gray code.

# name: <cell-element>
# type: string
# elements: 1
# length: 15
hartley_entropy
# name: <cell-element>
# type: string
# elements: 1
# length: 485
 -- Function File:  hartley_entropy (P)
     Compute the Hartley entropy, i.e. the Renyi entropy of order 0,
     for the given probability distribution.

     Halpha(P(x)) = log( \sum_i P(x_i)^alpha ) / (1 - alpha)

     Among the special cases, when alpha=0 this reduces to the Hartley
     entropy

     H0(X) = log|X|, where |X| = n(P) is the cardinality of P, the
     pdf of the random variable x.

                   hartley_entropy([0.2 0.3 0.5])
                   =>   ans = 1.0986


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Compute the Hartley entropy using Renyi entropy of order 0, for the
given probab

# name: <cell-element>
# type: string
# elements: 1
# length: 12
infoentr_seq
# name: <cell-element>
# type: string
# elements: 1
# length: 501
 -- Function File:  infoentr_seq (SEQ_X, SEQ_Y)
     With just one input, calculates the Shannon information entropy
     of the sequence x:      H(X) = \sum_{x \in X} p(x) log2(1/p(x))

     With two inputs, calculates the joint entropy of the concurrent
     sequences x and y:   H(X,Y) = \sum_{x \in X, y \in Y} p(x,y)
     log2(1/p(x,y))

                   X=[1, 1, 2, 1, 1];
                   infoentr_seq(X)
                   infoentr_seq([1,2,2,2,1,1,1,1,1],[1,2,2,2,2,2,1,1,1])

   See also: infogain_seq


# name: <cell-element>
# type: string
# elements: 1
# length: 80
If just one input, calculates Shannon Information Entropy of the
sequence x:    

# name: <cell-element>
# type: string
# elements: 1
# length: 12
infogain_seq
# name: <cell-element>
# type: string
# elements: 1
# length: 348
 -- Function File:  infogain_seq (SEQ_X, SEQ_Y)
     Gives the information gain ratio (also known as the `uncertainty
     coefficient') of the sequence x conditional on y:        I(X|Y) =
     I(X;Y)/H(X)

                   X=[1, 1, 2, 1, 1];
                   Y=[2, 2, 1, 1, 2];
                   infogain_seq(X,Y)

   See also: infoentr_seq


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Gives the information gain ratio (also known as the `uncertainty
coefficient') o

# name: <cell-element>
# type: string
# elements: 1
# length: 12
jointentropy
# name: <cell-element>
# type: string
# elements: 1
# length: 251
 -- Function File:  jointentropy (XY)
     Computes the joint entropy of the given channel transition matrix.
     By definition we have `H(X, Y)' given as `H(X,Y) = -SUMx(SUMy(P(X,
     Y) * log2(P(X, Y))))'
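     The negated double sum over the joint matrix can be sketched in
     Python (the name `joint_entropy` is illustrative, not part of
     this package):

```python
from math import log2

def joint_entropy(xy):
    # H(X,Y) = -sum over all cells of p(x,y) * log2 p(x,y),
    # skipping zero entries; the entries of xy must sum to 1.
    return -sum(p * log2(p) for row in xy for p in row if p > 0)
```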

   See also: entropy, conditionalentropy


# name: <cell-element>
# type: string
# elements: 1
# length: 66
Computes the joint entropy of the given channel transition matrix.

# name: <cell-element>
# type: string
# elements: 1
# length: 25
kullback_leibler_distance
# name: <cell-element>
# type: string
# elements: 1
# length: 421
 -- Function File:  kullback_leibler_distance (P, Q)
     Computes the Kullback-Leibler distance of the two given
     probability distributions P and Q:

     Dkl(P,Q) = \sum_x -P(x).log(Q(x)) + P(x).log(P(x))
              = \sum_x P(x).log(P(x)/Q(x))
                   kullback_leibler_distance([0.2 0.3 0.5],[0.1 0.8 0.1])
                   =>   ans = 0.64910
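     The sum can be sketched in Python (illustrative name; the natural
     logarithm reproduces the 0.64910 figure of the example above):

```python
from math import log

def kl_distance(p, q):
    # D(P||Q) = sum_x P(x) * log(P(x)/Q(x)), skipping terms
    # with P(x) = 0; natural log, matching the example.
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

     Note D(P||P) = 0 and that the distance is not symmetric in P
     and Q.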


# name: <cell-element>
# type: string
# elements: 1
# length: 78
P and Q are probability distribution functions of the Dkl(P,Q) = \sum_x
-P(x).

# name: <cell-element>
# type: string
# elements: 1
# length: 8
laverage
# name: <cell-element>
# type: string
# elements: 1
# length: 364
 -- Function File: AVGBITS = laverage (CODEBOOK, PROBLIST)
     Compute the average codeword length `SUM(i = 1:N) Li * Pi', where
     CODEBOOK is a cell array of strings, each string representing a
     codeword, and PROBLIST gives the probability values. For example

               x = {"0","111","1110"}; p=[0.1 0.5 0.4];
               laverage(x, p) => ans = 3.2000
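     The weighted sum of codeword lengths is a one-liner in Python
     (illustrative sketch; a plain list stands in for the Octave cell
     array):

```python
def laverage(codebook, probs):
    # Average codeword length: sum over i of len(c_i) * p_i,
    # where codebook holds the codeword strings and probs the
    # matching probabilities.
    return sum(len(cw) * p for cw, p in zip(codebook, probs))
```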


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Compute the average word length `SUM(I = 1:N)* Li * Pi' where codebook
is a stru

# name: <cell-element>
# type: string
# elements: 1
# length: 9
marginalc
# name: <cell-element>
# type: string
# elements: 1
# length: 152
 -- Function File:  marginalc (XY)
     Computes the marginal probabilities along the columns of the
     transition matrix XY.

   See also: marginalr


# name: <cell-element>
# type: string
# elements: 1
# length: 47
Computes marginal  probabilities along columns.

# name: <cell-element>
# type: string
# elements: 1
# length: 9
marginalr
# name: <cell-element>
# type: string
# elements: 1
# length: 149
 -- Function File:  marginalr (XY)
     Computes the marginal probabilities along the rows of the
     transition matrix XY.

   See also: marginalc


# name: <cell-element>
# type: string
# elements: 1
# length: 44
Computes marginal  probabilities along rows.

# name: <cell-element>
# type: string
# elements: 1
# length: 14
mutualinfo_seq
# name: <cell-element>
# type: string
# elements: 1
# length: 308
 -- Function File:  mutualinfo_seq (SEQ_X, SEQ_Y)
     Calculates mutual information of the sequences x and y:
     I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) = I(Y;X)

                   X=[1, 1, 2, 1, 1];
                   Y=[2, 2, 1, 1, 2];
                   mutualinfo_seq(X,Y)

   See also: infoentr_seq


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Calculates mutual information of the sequences x and y:      I(X;Y) =
H(X) - H(X

# name: <cell-element>
# type: string
# elements: 1
# length: 17
mutualinformation
# name: <cell-element>
# type: string
# elements: 1
# length: 471
 -- Function File:  mutualinformation (XY)
     Computes the mutual information of the given channel transition
     matrix.  By definition we have `I(X, Y)' given as `I(X;Y) =
     SUM(P(X,Y) * log2(P(X,Y) / (P(X)*P(Y)))) = relative_entropy(P(X,Y)
     || P(X)*P(Y))'.  Mutual information is the amount of information
     one variable has about the other; it is the reduction of
     uncertainty, and it is a symmetric function.
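     Using the equivalent identity I(X;Y) = H(X) + H(Y) - H(X,Y), the
     computation can be sketched in Python (illustrative names, not
     part of this package):

```python
from math import log2

def mutual_information(xy):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), from a joint matrix with
    # Y along rows and X along columns (entries sum to 1).
    def h(ps):
        return -sum(p * log2(p) for p in ps if p > 0)
    h_y = h(sum(row) for row in xy)
    h_x = h(sum(col) for col in zip(*xy))
    h_xy = h(p for row in xy for p in row)
    return h_x + h_y - h_xy
```

     Independent variables give 0; perfectly correlated binary
     variables give 1 bit.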

     See also: entropy, conditionalentropy



# name: <cell-element>
# type: string
# elements: 1
# length: 71
Computes the mutual information of the given channel transition matrix.

# name: <cell-element>
# type: string
# elements: 1
# length: 10
narysource
# name: <cell-element>
# type: string
# elements: 1
# length: 775
 -- Function File:  narysource (PROBABILITY_DIST,N_ORDER)
     Creates an N_ORDER-th order source from the given
     PROBABILITY_DIST (as a column vector) of a 1st-order source,
     building a probability distribution of size
     len(PROBABILITY_DIST)^N_ORDER. If the original source has X
     symbols, the resulting N-th order source has X^N symbols. This is
     equivalent to the usual definition of a q-ary N-th order source.

           pdist=[1 2 3 4]./10;
           ndist = narysource(pdist,2)
                 =>  [ 0.010000   0.020000   0.030000   0.040000
                       0.020000   0.040000   0.060000   0.080000
                       0.030000   0.060000   0.090000   0.120000
                       0.040000   0.080000   0.120000   0.160000 ]
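     The extension is the repeated Kronecker product of the
     distribution with itself; a Python sketch (illustrative name, not
     part of this package):

```python
def nary_source(pdist, order):
    # N-th order extension of a memoryless source: the
    # probability of every length-`order` tuple of symbols,
    # i.e. pdist Kronecker-multiplied with itself `order` times.
    out = [1.0]
    for _ in range(order):
        out = [a * b for a in out for b in pdist]
    return out
```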

   See also: entropy


# name: <cell-element>
# type: string
# elements: 1
# length: 80
This function creates a N-ary order source using the given
PROBABILITY_DIST  (as

# name: <cell-element>
# type: string
# elements: 1
# length: 10
redundancy
# name: <cell-element>
# type: string
# elements: 1
# length: 384
 -- Function File:  redundancy (CODE_WORD_LIST, SYMBOL_PROBABILITES)
     Computes the excess bits wasted over the entropy when using a
     particular coding scheme. For example

                   prob_list = [0.5 0.25 0.15 0.1];
                   min_bits = entropy(prob_list);
                   cw_list = huffman(prob_list);
                   redundancy(cw_list,prob_list)


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Computes the wasted excessive bits over the entropy when using a
particular codi

# name: <cell-element>
# type: string
# elements: 1
# length: 15
relativeentropy
# name: <cell-element>
# type: string
# elements: 1
# length: 361
 -- Function File:  relativeentropy (P, Q)
     Computes the relative entropy between the two given pdf's.  `d(P
     || Q)', the Kullback-Leibler distance between two probability
     distributions, is the relative entropy.  It is not a true metric,
     as it is not symmetric.  Wherever an infinity occurs, it is
     reduced to zero.

   See also: entropy, conditionalentropy


# name: <cell-element>
# type: string
# elements: 1
# length: 56
Computes the relative entropy between the 2 given pdf's.

# name: <cell-element>
# type: string
# elements: 1
# length: 13
renyi_entropy
# name: <cell-element>
# type: string
# elements: 1
# length: 438
 -- Function File:  renyi_entropy (ALPHA, P)
     Compute the Renyi entropy of order ALPHA for the given
     probability distribution P.

     Halpha(P(x)) = log( \sum_i P(x_i)^alpha ) / (1 - alpha)

     Among the special cases, when ALPHA=1 it reduces to the regular
     definition of the Shannon entropy, and when ALPHA=0 it reduces to
     the Hartley entropy.

                   renyi_entropy(0,[0.2 0.3 0.5])
                   =>   ans = 1.0986
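     A Python sketch of the formula, with the ALPHA=1 limit handled
     separately (illustrative name; the natural logarithm matches the
     example output above):

```python
from math import log

def renyi_entropy(alpha, p):
    # H_alpha(P) = log(sum_i p_i**alpha) / (1 - alpha), with
    # the alpha -> 1 limit being the Shannon entropy.
    # Natural log, matching the package's example output.
    p = [x for x in p if x > 0]
    if alpha == 1:
        return -sum(x * log(x) for x in p)
    return log(sum(x ** alpha for x in p)) / (1 - alpha)
```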


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Compute the Renyi entropy of order ALPHA, for the given probability
distribution

# name: <cell-element>
# type: string
# elements: 1
# length: 15
shannon_entropy
# name: <cell-element>
# type: string
# elements: 1
# length: 156
 -- Function File:  shannon_entropy (P)
     Redirects Shannon entropy to the entropy function. This is
     consistent with the definition of the Renyi entropy.



# name: <cell-element>
# type: string
# elements: 1
# length: 46
Redirects Shannon Entropy to entropy function.

# name: <cell-element>
# type: string
# elements: 1
# length: 12
tunstallcode
# name: <cell-element>
# type: string
# elements: 1
# length: 1190
 -- Function File: CODE_DICTIONARY = tunstallcode (PROBABILITY_LIST)
     Implementation of an `|A|'-bit Tunstall coder, given the source
     probabilities of the `|A|' symbols from the source, with `2^|A|'
     code-words involved. The ordering of symbols in the variable
     PROBABILITY_LIST is preserved in the output symbol/code
     dictionary.  A Tunstall code is a variable-to-fixed source coding
     scheme; the arrangement of the codeword list corresponds to the
     regular Tunstall ordering of the variable source word list, and
     the codes for each of them are enumerations from `1:2^N'.  Only
     the ordering (grouping) of source symbols is returned, as the
     index of a match is the corresponding code word. The
     probabilities of the various symbols are also stored here.  For
     example

            [cw_list, prob_list] = tunstallcode([0.6 .3 0.1])

     Essentially you will use the cw_list to parse the input and then
     compute the code as the binary value of the index of the match,
     since it is a variable-to-fixed code.
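     One standard construction of the parse dictionary (repeatedly
     expanding the most probable word) can be sketched in Python; this
     is an assumption-laden illustration of the technique, not
     necessarily the exact ordering this package produces:

```python
import heapq

def tunstall_words(probs, nbits):
    # Grow the parse dictionary by repeatedly expanding the
    # most probable word with every source symbol, while the
    # result still fits within 2**nbits codewords.  Words are
    # tuples of symbol indices; a max-heap is simulated by
    # storing negated probabilities.
    m = len(probs)
    heap = [(-p, (i,)) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) + (m - 1) <= 2 ** nbits:
        neg_p, word = heapq.heappop(heap)
        for i, p in enumerate(probs):
            heapq.heappush(heap, (neg_p * p, word + (i,)))
    return sorted((word, -np) for np, word in heap)
```

     The words are the leaves of a parse tree, so their probabilities
     always sum to 1; the fixed-length code of a parsed word is simply
     its index in the list.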

     Reference: "Synthesis of noiseless compression codes", Ph.D. Thesis
                of B.P. Tunstall, Georgia Tech, Sept 1967


# name: <cell-element>
# type: string
# elements: 1
# length: 80
Implementation of a `|A|'-bit tunstall coder given the source
probability of the

# name: <cell-element>
# type: string
# elements: 1
# length: 8
unarydec
# name: <cell-element>
# type: string
# elements: 1
# length: 437
 -- Function File:  unarydec (VALUE)
     Decodes the unary encoded VALUE, which must be a number or a
     row-vector.  Useful if you are trying to perform Golomb-Rice
     coding. For example

           message = [5   4   4   1   1   1]
           coded = unaryenc(message)
                 => [62   30   30    2    2    2]
           unarydec(coded)
                 => [5   4   4   1   1   1]

   See also: unaryenc


# name: <cell-element>
# type: string
# elements: 1
# length: 46
This function decodes the unary encoded value.

# name: <cell-element>
# type: string
# elements: 1
# length: 8
unaryenc
# name: <cell-element>
# type: string
# elements: 1
# length: 692
 -- Function File:  unaryenc (VALUE)
     Encodes the decimal VALUE, which must be a non-negative number or
     a row-vector of such numbers.  Useful if you are trying to
     perform Golomb-Rice coding.

     The unary encoding of a positive number N is done as follows: use
     N ones followed by one zero. For instance, the unary coded value
     of 5 is then (111110) in base 2, which is 31x2 = 62. From this
     definition, decoding follows.

           message = [5   4   4   1   1   1]
           coded = unaryenc(message)
                 =>  [62   30   30    2    2    2]
           unarydec(coded)
                 =>  [5   4   4   1   1   1]
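     The rule "N ones then a zero" reduces to simple integer
     arithmetic; a Python sketch of the pair (illustrative names, not
     part of this package):

```python
def unary_encode(n):
    # n ones followed by a single zero, read as a binary
    # number: (2**n - 1) * 2, e.g. 5 -> 0b111110 = 62.
    return (2 ** n - 1) * 2

def unary_decode(v):
    # Drop the trailing zero bit; the remaining value is all
    # ones, so its bit length is the original number.
    return (v >> 1).bit_length()
```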

   See also: unarydec


# name: <cell-element>
# type: string
# elements: 1
# length: 40
This function encodes the decimal value.

