

Porter-Thomas fluctuations

Let's talk about Porter-Thomas (PT) fluctuations! To do that, we first need to talk about:

The Porter-Thomas distribution

Long story short: The PT distribution is the $\chi^2$ distribution with one degree of freedom ($k = 1$). In nuclear physics a central concept is the gamma strength function (GSF), a statistical property of atomic nuclei that describes their average gamma-decay probabilities. The dipole ($L = 1$) strength function is given by

$$ f_{X1}(E_{\gamma}, E_i, j_i, \pi_i) = \dfrac{16 \pi}{9 \hbar^3 c^3}\langle B(X1;\downarrow) \rangle (E_{\gamma}, E_i, j_i, \pi_i) \rho (E_i, j_i, \pi_i).\qquad (0) $$

See p. 230 of Bartholomew et al. for the general definition. We can rearrange eq. (0) to get

$$ \langle B(Xj_{\gamma}) \rangle (E_{\gamma}, E_i, j_i, \pi_i) = \dfrac{9 \hbar^3 c^3}{16 \pi} \dfrac{f_{Xj_{\gamma}}(E_{\gamma}, E_i, j_i, \pi_i)}{\rho (E_i, j_i, \pi_i)}.\qquad (1) $$

From eq. (1) we see that the mean $B$ value is proportional to the GSF $f$, with proportionality constant $9 \hbar^3 c^3/(16 \pi \rho)$. The individual $B$ values deviate from the mean by the ratio

$$ y = \dfrac{B}{\langle B \rangle} $$

and the distribution of the $y$ values is hypothesised to follow the $\chi^2_1$ distribution, also known as the Porter-Thomas distribution. In the following figure we see an example of $B$ values plotted as a histogram and scaled to the height of the PT distribution to show the resemblance.
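A minimal sketch of the same idea (not the actual analysis behind the figure): sample hypothetical $B$ values from a $\chi^2_1$ distribution, form $y = B/\langle B \rangle$, and compare the histogram of $y$ to the PT PDF. The absolute scale of $\langle B \rangle$ below is an arbitrary assumption and only sets the units; it drops out of $y$.

<code python>
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=42)

# Hypothetical reduced transition probabilities: PT-distributed around an
# arbitrary mean B (assumed value, only for illustration).
mean_B = 0.05
B = mean_B * rng.chisquare(df=1, size=5000)

# Deviation from the mean, y = B / <B>
y = B / B.mean()

# Porter-Thomas PDF, g(x) = exp(-x/2) / sqrt(2*pi*x), i.e. chi^2 with k = 1
x = np.linspace(1e-3, 8, 500)
pt_pdf = stats.chi2.pdf(x, df=1)

plt.hist(y, bins=100, density=True, label=r"sampled $y = B/\langle B\rangle$")
plt.plot(x, pt_pdf, label=r"$\chi^2_1$ (Porter-Thomas) PDF")
plt.xlabel(r"$y$")
plt.ylabel("density")
plt.legend()
plt.show()
</code>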

Porter-Thomas fluctuations

… is really just a fancy way of describing how much we expect the $y$ values to vary. The PDF of the PT distribution is given by

$$ g(x) = \dfrac{1}{\sqrt{2 \pi x}}e^{-x/2}, \quad x > 0, $$

with a mean of 1 and a variance of 2 (just check the Wikipedia page if you don't believe me). Let us now invoke the almighty Central Limit Theorem (CLT)! Draw $n$ values $X_1, \dots, X_n$ from the PT distribution, and suppose we want to know the sample average

$$ \bar{X}_n = \dfrac{X_1 + \dots + X_n}{n}. $$

The law of large numbers tells us that the sample average will converge to the expected value $\mu = 1$ as $n$ goes to infinity. The CLT states that as $n$ gets larger, the distribution of $\bar{X}_n$ gets arbitrarily close to the normal distribution with a mean of 1 and a variance of $2/n$.
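A quick numerical check of this statement (the sample size and number of repetitions below are arbitrary choices for the sketch): draw many samples of size $n$ from the PT distribution, compute $\bar{X}_n$ for each, and compare the result with the predicted $\mathcal{N}(1, 2/n)$.

<code python>
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

n = 50           # sample size (arbitrary choice for this sketch)
repeats = 20000  # number of independent sample averages

# Each row is one sample of n Porter-Thomas (chi^2_1) draws;
# averaging over axis 1 gives 'repeats' realisations of X-bar_n.
samples = rng.chisquare(df=1, size=(repeats, n))
x_bar = samples.mean(axis=1)

print(f"mean of X-bar_n:     {x_bar.mean():.4f}  (LLN prediction: 1)")
print(f"variance of X-bar_n: {x_bar.var():.4f}  (CLT prediction: 2/n = {2/n:.4f})")

# Compare the empirical distribution of X-bar_n with N(1, 2/n)
ks = stats.kstest(x_bar, stats.norm(loc=1, scale=np.sqrt(2 / n)).cdf)
print(f"KS statistic vs N(1, 2/n): {ks.statistic:.4f}")
</code>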
