Asymptotic Equipartition Property

It seems very similar to the Central Limit Theorem and Law of Large Numbers, but for Entropy. This is also called Shannon’s source coding theorem; see here. Statement of the AEP: given a sequence of random variables $X_{1}, X_{2}, \dots$ i.i.d. $\sim p(x)$, it holds that $$ -\frac{1}{n} \log p(X_{1}, X_{2}, \dots, X_{n}) \to H(X) $$ in probability (the definition given in Central Limit Theorem and Law of Large Numbers#Convergence in probability). An alternative way to state it is as follows, following the approach in (MacKay 2003)....

2 min · Xuanqiang 'Angelo' Huang
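As a quick illustration of the statement in the excerpt above, here is a minimal numerical sketch (my own addition, not from the post): it draws i.i.d. samples from an arbitrary example distribution and checks that $-\frac{1}{n}\log_2 p(X_1,\dots,X_n)$ approaches $H(X)$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary distribution over a small alphabet (assumption for illustration only).
p = np.array([0.5, 0.25, 0.125, 0.125])
H = -np.sum(p * np.log2(p))  # entropy H(X) in bits

for n in [10, 100, 1_000, 10_000, 100_000]:
    # Draw X_1, ..., X_n i.i.d. ~ p(x).
    samples = rng.choice(len(p), size=n, p=p)
    # By independence, -(1/n) log2 p(X_1, ..., X_n) = -(1/n) * sum_i log2 p(X_i).
    empirical = -np.mean(np.log2(p[samples]))
    print(f"n={n:>6}  -1/n log2 p(X^n) = {empirical:.4f}   H(X) = {H:.4f}")
```

The printed values should concentrate around $H(X)$ for large $n$, which is exactly the convergence in probability the AEP asserts.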

Introduction to Information Theory

The course will be mostly about quantization, covering lossless and lossy compression (how many bits are needed to describe something? This is not a CS course, so it will not be a very algorithmically focused course), and then channels, capacity, and DMC (discrete memoryless channel) topics. Most of the material in the Lapidoth course is theoretical, and there will be some heavy maths. The professor starts with some mathy definitions (not very important, just that $\mathbb{E}[ \cdot]$ needs a domain to be defined, so notations like $\mathbb{E}[x]$ do not make sense, while $\mathbb{E}[g(x)]$ does make sense because $g(x) : \mathcal{X} \to \mathbb{R}$)....

1 min · Xuanqiang 'Angelo' Huang
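To spell out the notation remark in the excerpt above (my own addition, using the standard definition rather than anything quoted from the post), the expectation of a function of a discrete random variable $X$ with alphabet $\mathcal{X}$ is $$ \mathbb{E}[g(X)] = \sum_{x \in \mathcal{X}} p(x)\, g(x), \qquad g : \mathcal{X} \to \mathbb{R}, $$ which is why $g$ must map the alphabet into the reals for the expression to be well defined.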

Entropy

This was introduced by Shannon in 1948 in (Shannon 1948). The notion is based on probability, because rare events are more informative than something that happens often. Introduction to Entropy. The Shannon Information Content: this depends on the notion of the Shannon information content, defined as $$ h(x = a_{i}) = \log_{2}\frac{1}{P(x = a_{i})} $$ We will see that the entropy is a weighted average of the information content, i.e. the expected information content of a distribution....

13 min · Xuanqiang 'Angelo' Huang
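To make the excerpt above concrete, here is a minimal sketch (my own illustration, not from the post) that computes the Shannon information content of each outcome and the entropy as their probability-weighted average; the example distribution is an arbitrary assumption.

```python
import numpy as np

# Arbitrary example distribution over outcomes a_1, ..., a_4 (assumption for illustration).
p = np.array([0.5, 0.25, 0.125, 0.125])

# Shannon information content of each outcome: h(x = a_i) = log2(1 / P(x = a_i)).
h = np.log2(1.0 / p)

# Entropy = probability-weighted average of the information content.
H = np.sum(p * h)

for i, (pi, hi) in enumerate(zip(p, h), start=1):
    print(f"P(a_{i}) = {pi:.3f}   h(a_{i}) = {hi:.3f} bits")
print(f"H(X) = {H:.3f} bits")
```

Rare outcomes (small $P(x = a_i)$) get a large information content, and the entropy averages these contents with their probabilities as weights.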