Commutators

In quantum mechanics we assume the following:

  1. Each observable is associated with a Hermitian operator on a Hilbert space H. Its eigenvalues are real and its eigenstates are mutually orthogonal, thus forming a basis of H.
  2. Upon measurement, the outcome is one of the eigenvalues, and the wave function collapses onto a corresponding eigenstate.

Here I don’t want to discuss the deep interpretational questions, of which I have no real grasp. That’s why I take the Copenhagen interpretation: shut up and compute!

This article is written as a note on my understanding of the physical and mathematical meaning of commutators, mainly answering the question: “why do commuting operators share a common set of eigenstates?”
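
A quick numerical sanity check of that claim (my own sketch, not part of the original argument): two Hermitian matrices built as polynomials of the same matrix commute by construction, and the eigenbasis of one diagonalizes the other, at least when the spectrum is non-degenerate.

```python
import numpy as np

# Two commuting Hermitian matrices: both are polynomials in the same
# real symmetric matrix H, so [A, B] = 0 by construction.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
H = (M + M.T) / 2            # real symmetric, hence Hermitian
A = H @ H + 2 * H            # A = H^2 + 2H
B = 3 * H - np.eye(4)        # B = 3H - I

assert np.allclose(A @ B - B @ A, 0)    # they commute

# The eigenvectors of A (non-degenerate almost surely) also diagonalize B.
w, V = np.linalg.eigh(A)
B_in_A_basis = V.T @ B @ V
assert np.allclose(B_in_A_basis, np.diag(np.diag(B_in_A_basis)))
print("A and B share the eigenbasis V")
```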

Practical Theology

Those who know me in person may already know that I’ve become religious. The foundation of my belief comes from Christianity, the Calvinist branch of Protestantism, to be specific. It was a spectacular psychological journey, in which I did much philosophical and sci-fi pondering, and my basic understanding of life, the universe and everything changed a lot. Of course the faith didn’t come out of nowhere. I’ve repeatedly experienced certain kinds of revelation (facts that are hard to explain from an atheist’s perspective), as the lyric goes: “how precious did that grace appear, the hour I first believed …”

Shannon–Hartley theorem

The noisy-channel coding theorem only applies to discrete digital channels; applying it to a real physical channel takes some extra work. The result is, of course, the famous Shannon limit.

For a physical channel, suppose that:

  1. its bandwidth is $W$;
  2. its noise is white Gaussian noise with power spectral density $N_0$;
  3. the average signal power is $P$.

Then the capacity of the channel is $C = W\log_2(1+\frac{P}{N_0W})$ bits per second.
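
A one-liner suffices to evaluate this formula; the sketch below (my addition, with made-up numbers) computes the capacity of a hypothetical 1 MHz channel at 10 dB SNR.

```python
import numpy as np

def shannon_capacity(W, P, N0):
    """Shannon-Hartley capacity in bits per second for an AWGN channel."""
    snr = P / (N0 * W)              # signal-to-noise ratio
    return W * np.log2(1 + snr)

# Hypothetical numbers chosen so that P / (N0 * W) = 10 (i.e. 10 dB SNR).
W, P, N0 = 1e6, 10.0, 1e-6
print(f"C = {shannon_capacity(W, P, N0):.3e} bit/s")   # ~3.46 Mbit/s
```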

The Noisy-Channel Coding Theorem

This is a very important theorem in information theory. The reason I came back to study information theory is its direct connection with probabilistic inference: the essence of communication is the process of inferring the channel input $X$ from the received channel output $Y$.

Definitions & Notations

Before the main text starts, let me introduce some definitions and notation, and review a few basic results of information theory along the way.

  • We write $X, Y, Z$ for random variables, $\mathcal{A}_{X}$ for the set of values a variable can take, and $X^N$ for the random variable formed by taking $N$ independent copies of $X$ together as a whole
  • Entropy: $H(X) = -\sum\limits_{i} p_i \log{p_i}$
  • Conditional entropy: $H(X|Y) = \sum\limits_{y \in \mathcal{A}_Y} P(y) H(X|Y=y) = - \sum\limits_{(x, y) \in \mathcal{A}_X \times \mathcal{A}_Y} P(x, y) \log{P(x|y)}$, which satisfies:

$H(X, Y) = H(X) + H(Y|X) = H(Y) + H(X|Y)$

  • Mutual information: $I(X; Y) \equiv H(X) - H(X|Y) = H(Y) - H(Y|X)$, which measures how much information two random variables carry about each other
  • Channel capacity: $C \equiv \underset{\mathcal{P}_X}{\mathrm{max}}\ I(X;Y)$, i.e. the maximum amount of information about the input $X$ that the output $Y$ can carry
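
To make these definitions concrete, here is a small sketch (my addition, not part of the original note): it evaluates $I(X;Y) = H(Y) - H(Y|X)$ for a binary symmetric channel with flip probability $f$, whose capacity $1 - H_2(f)$ is attained at the uniform input.

```python
import numpy as np

def H2(p):
    """Binary entropy in bits: H2(p) = -p log2 p - (1-p) log2 (1-p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mutual_information_bsc(px1, f):
    """I(X;Y) for a binary symmetric channel with flip probability f,
    given the input distribution P(X=1) = px1."""
    py1 = px1 * (1 - f) + (1 - px1) * f    # P(Y=1)
    return H2(py1) - H2(f)                 # I = H(Y) - H(Y|X)

f = 0.1
print(mutual_information_bsc(0.5, f))   # maximized at the uniform input
print(1 - H2(f))                        # the capacity C, same value
```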

Fisher information and CRLB

Fisher Information

Copying from Wikipedia again. I’ve run into the term “Fisher information” in many places and only ever had a hand-wavy intuition about it; recently I realized this was getting in the way of understanding other things, so I started reading the encyclopedia.

Back to the point. Fisher information should be familiar to anyone who studies statistics: it describes how much information an observable variable $X$ carries about the parameter $\theta$ that determines its distribution. Intuitively, the more information $X$ carries about $\theta$, the more certain the latter becomes, i.e., the smaller the variance of its estimate. Knowing this, for a given distribution model we can tell how much confidence to place in the maximum-likelihood estimate $\hat{\theta}_{ML}$ of the parameter $\theta$.
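
As a concrete illustration of my own (not from the post): for a Bernoulli($\theta$) model the Fisher information of one observation is $I(\theta) = \frac{1}{\theta(1-\theta)}$, so the Cramér–Rao lower bound for an unbiased estimator from $n$ samples is $\theta(1-\theta)/n$, and a quick Monte Carlo check shows the ML estimate (the sample mean) attains it.

```python
import numpy as np

# Bernoulli(theta): Fisher information I(theta) = 1 / (theta * (1 - theta)),
# so the CRLB for an unbiased estimator from n samples is theta*(1-theta)/n.
theta, n, trials = 0.3, 500, 5000
rng = np.random.default_rng(1)

samples = rng.binomial(1, theta, size=(trials, n))
theta_ml = samples.mean(axis=1)        # the ML estimate is the sample mean

print(f"empirical var of MLE: {theta_ml.var():.2e}")
print(f"CRLB:                 {theta * (1 - theta) / n:.2e}")   # matches
```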

Style mod for GoodReader with vimperator

The Pain Point

Like I’ve mentioned in other posts, most of my reading is done with digital materials, and the application I use most often for that purpose is GoodReader (on iOS). There certainly are better alternatives; nevertheless, I’m too lazy (and too poor) to switch.

The old-fashioned UI of the app seems a little complicated and confusing, yet the front-end page for file transfer via WLAN is too bare to be satisfying:

  • no design at all
  • multi-file uploading is not allowed
  • progress info is too inconspicuous

The hybrid Monte Carlo

Preface

Hybrid Monte Carlo is an advanced sampling algorithm, introduced in some detail in PRML §11.5. Even so, I had to read that section carefully twice before I began to grasp it, so I decided to organize my understanding into this post.

Goals of Sampling

I will introduce Markov Chain Monte Carlo starting from the Metropolis algorithm, then Gibbs sampling, and finally hybrid Monte Carlo, keeping this as a note to look back on later.

Whichever algorithm we use, the goal is the same: to draw samples that follow a distribution $p(\mathbf{z})$, and at the same time we want the samples to be mutually independent, i.e. to satisfy the i.i.d. condition.
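
To make the goal concrete, here is a minimal Metropolis sketch of my own (not PRML’s code): a Gaussian random-walk proposal targeting an unnormalized $p(\mathbf{z})$. Note that successive samples are correlated, which is exactly why the i.i.d. requirement above is non-trivial for MCMC.

```python
import numpy as np

def metropolis(log_p, z0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis targeting exp(log_p); a minimal sketch."""
    rng = np.random.default_rng(seed)
    z, samples = np.asarray(z0, float), []
    for _ in range(n_samples):
        z_new = z + step * rng.normal(size=z.shape)   # symmetric proposal
        # Accept with probability min(1, p(z_new) / p(z)).
        if np.log(rng.uniform()) < log_p(z_new) - log_p(z):
            z = z_new
        samples.append(z)        # on rejection the old sample is repeated
    return np.array(samples)

# Target: a standard 2-D Gaussian (knowing p(z) up to a constant suffices).
draws = metropolis(lambda z: -0.5 * z @ z, np.zeros(2), 5000)
print(draws.mean(axis=0), draws.std(axis=0))   # roughly [0, 0] and [1, 1]
```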

Hammersley–Clifford Theorem

Preface

Section 8.3.2 of PRML briefly describes the factorization properties of Markov Random Fields, the very core of which is the Hammersley–Clifford Theorem. However, the book does not prove the theorem and only mentions the conclusion at the end, which left me puzzled while reading the middle part. Fortunately, I googled an elegant proof, which I translate here.

Quasi-Newton Methods

Preface

Recently I’ve been chewing through PRML at a snail’s pace. Chapter 5 mentions quasi-Newton methods, which I had loosely grasped before and even taken some notes on, yet I could recall none of it. So I dug out the old notes, went through them again, corrected some not-quite-right understandings along the way, and organized everything into the text below.

Newton’s Method

Consider the problem of finding the extrema of a function $f(X)$ whose argument $X$ is an $N$-dimensional vector. We know that Newton’s method from high-school mathematics can be used to find the zeros of a function: $X_{k+1} = X_k - [J_f(X_k)]^{-1} f(X_k)$. Finding the extrema of a differentiable function is just finding the zeros of its derivative, so it can likewise be solved with Newton’s method.
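
A minimal sketch of this idea (my illustration, not PRML’s code): apply the Newton update to the gradient, i.e. $X_{k+1} = X_k - [\nabla^2 f(X_k)]^{-1}\nabla f(X_k)$, where the Jacobian of the gradient is the Hessian.

```python
import numpy as np

def newton_minimize(grad, hess, x0, n_iter=30):
    """Newton's method applied to the gradient: x <- x - H(x)^{-1} grad(x)."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        x = x - np.linalg.solve(hess(x), grad(x))   # avoid an explicit inverse
    return x

# Toy function f(x) = (x0 - 1)^4 + (x1 + 2)^2, with its minimum at (1, -2).
grad = lambda x: np.array([4 * (x[0] - 1) ** 3, 2 * (x[1] + 2)])
hess = lambda x: np.diag([12 * (x[0] - 1) ** 2, 2.0])
print(newton_minimize(grad, hess, [3.0, 0.0]))      # -> approximately [1, -2]
```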

Add TOC indexes to PDF

PDF books without Indexes are like Planets with Individuals

I prefer e-books to paper books; in fact, I may have read only one or two paper books in the past three years, versus a much larger number of e-books. The pros of e-books are really obvious:

  1. portability
  2. content can be copied
  3. easy to manage
  4. easy to navigate

I don’t want to argue about the superiorities here. I just wanna convey that the 4th feature is a crucial one, and it is mainly achieved via indexes, bookmarks, reference links, etc. There’s no doubt that among those approaches, indexes play the most significant role. So I say that e-books without indexes are like planets with individuals, i.e. huge gaps among separated pieces of text.
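
As an illustration of what adding such indexes can look like programmatically (my addition; the original post may well use a different tool), here is a sketch with the pypdf library, whose outline entries are the bookmarks most readers render as a TOC. The file names and page numbers are hypothetical.

```python
from pypdf import PdfReader, PdfWriter

reader = PdfReader("book.pdf")            # hypothetical input file
writer = PdfWriter()
for page in reader.pages:
    writer.add_page(page)

# Outline entries take a title and a zero-based page index; nesting is
# expressed by passing the parent entry.
ch1 = writer.add_outline_item("Chapter 1", 0)
writer.add_outline_item("Section 1.1", 4, parent=ch1)
writer.add_outline_item("Chapter 2", 30)

with open("book_with_toc.pdf", "wb") as f:
    writer.write(f)
```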