
======Modelling of the white noise======
==By Matthew Nugent, Eli Dollinger and Brady Goodwin==
==in collaboration with Prof. Nicholas Kuzma==

=====Introduction=====
//Feel free to expand and contribute//

====General background====
Pure [[wp>white noise]] can be observed when no signal is transmitted into the receiver, for example when a radio or TV is not tuned to any particular station, or when no microphone is plugged into the audio amplifier. In fact, any recording or measurement process has some degree of white noise superimposed on the recorded or measured signal. For example, any resistor at a given temperature generates a white-noise voltage, called [[wp>Johnson-Nyquist noise]]. In some applications, such as [[wp>MRI]] or [[wp>ultrasound]] imaging, the persistence of white noise in the images is the dominant factor negatively affecting the scan duration, the image quality, or both. The goal of this project is to gain insight into the statistical properties of white noise and to distinguish between random and non-random origins of certain types of noise.

...

===Personal observations by authors===
| {{ :projects:noise:noisefig1.png?nolink |}} | {{ :projects:noise:noisefig2.png?nolink | ... }} |
^ Figure 1. White noise, observed by N.K. in an [[wp>NMR]] spectrometer (tuned to the <sup>129</sup>Xe nuclear-precession frequency). The vertical scale is the detector voltage, and the horizontal scale is time in milliseconds. ^ Figure 2. In contrast to Fig. 1, this recording is dominated by 60-Hz interference from the power line (N.K.). ^

====History====
[[wp>John B. Johnson]], while working at [[wp>Bell Labs]] in 1926, was the first to quantify the white noise in resistors. He described his results to [[wp>Harry Nyquist]], a Bell Labs theorist, who was able to come up with the explanation.

| {{ :projects:noise:johnsonnyquist.jpg?nolink | }} | \\ \\ \\ \\ **Figure 3**. John B. Johnson and Harry Nyquist of Bell Labs, who came up with the first quantitative theory of white noise in resistors (//Courtesy Wikipedia//). Note that Johnson's photo exhibits quite a bit of noise superimposed onto his image. |

====Cultural and cinematographic references====
White noise (a.k.a. "static") is mentioned in ...

=====Theory=====
====Defining features====
By definition, white noise is a sequence of statistically independent random measurements with the same distribution centered on 0. A more special, albeit perhaps more commonly encountered, type of white noise is Gaussian white noise, with the additional requirement that each sample in the recording is "normally" distributed (i.e. its statistical distribution has a "bell-shaped" curve):
  * $f(x)$ $=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^2}{2\sigma^2}}$, $\;\;\;\;{\text{Eq. }} (1)$
where
  * $f(x)$ is the probability density: the ratio of the probability of finding $x$ in the interval $(x,x\!+\!\Delta x)$ to the interval's width $\Delta x$, and
  * $\sigma$ is the so-called //standard deviation// (or "width") of the distribution.

| {{ :projects:noise:noisefig4.png?nolink |...}} |
^ Figure 4. Normal (Gaussian) probability distribution given by Eq. (1): about 68% of observations fall between $-\sigma$ and $\sigma$, and about 95% fall between $-2\sigma$ and $2\sigma$. ^
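
The percentages quoted in the Fig. 4 caption can be checked outside of Excel as well. Below is a minimal Python sketch (Python and ''numpy'' are not part of this project's Excel workflow; this is only an optional cross-check): it tabulates Eq. (1) on a grid of //x// values and sums it numerically over $\pm\sigma$ and $\pm 2\sigma$.

<code python>
# Optional cross-check of Eq. (1) and the Fig. 4 caption (assumes numpy is installed).
import numpy as np

sigma = 1.0
dx = 0.01
x = np.linspace(-5.0, 5.0, 1001)                                     # grid of x values, step 0.01
f = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)    # Eq. (1)

# Probability of finding x within +/- 1 sigma and +/- 2 sigma,
# approximated by a simple Riemann sum of f(x)*dx over the grid
p1 = np.sum(f[np.abs(x) <= 1 * sigma + 1e-9] * dx)
p2 = np.sum(f[np.abs(x) <= 2 * sigma + 1e-9] * dx)
print(f"P(|x| < 1 sigma) ~ {p1:.3f}")   # about 0.68, the "68%" quoted in Fig. 4
print(f"P(|x| < 2 sigma) ~ {p2:.3f}")   # about 0.95, the "95%" quoted in Fig. 4
</code>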

====The hypothesis of the noise origins====
The idea is that, while each individual source of noise might not be random (for example, electromagnetic emissions from nearby microprocessors, electric motors next door, distant lightning, and radio stations all have very specific frequencies and time-domain signatures), when multiple unrelated sources are combined at the receiver input, their sum signal is random to a much greater degree. Moreover, as the [[wp>central limit theorem]] of statistics suggests, such a sum of many unrelated non-random signals is not just random, but is also normally distributed according to Eq. (1). We shall experimentally test this hypothesis in this project.

=====Methods=====
====Computational goals====
The main goal of this project is to get a feeling for how unrelated non-random signals, such as sine waves, combine to produce a much more random (and ultimately, normally-distributed) white noise. Specifically, we want to demonstrate the following (a rough numerical sketch of these points appears right after this list):
  - The distribution of readings in a single sine wave is very different from the Gaussian (the bell-shaped curve of Fig. 4).
    * It is much closer to a [[wp>bimodal distribution]].
  - As several unrelated sine waves of different frequencies are combined, the distribution of readings becomes lumpier.
  - Eventually, when many sine waves are combined, the readings of their sum signal are distributed very closely to a Gaussian.
  - //feel free to suggest more//
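
Here is a rough Python sketch of these three points (again, Python and ''numpy'' are used only as an optional illustration, not as part of the Excel workflow described in the next section; the number of waves, the frequency range, and the random seed are arbitrary choices):

<code python>
# Illustration of the hypothesis: readings of a single sine wave vs. readings of
# a sum of many unrelated sine waves (arbitrary frequencies and phases).
import numpy as np

rng = np.random.default_rng(0)                   # arbitrary seed, for repeatability
t = np.arange(0.0, 1.0, 0.001)                   # "recording" times, like the Excel grid
n_waves = 50                                     # arbitrary number of unrelated sources
freqs = rng.uniform(5.0, 500.0, n_waves)         # Hz, unrelated frequencies
phases = rng.uniform(0.0, 2 * np.pi, n_waves)    # unrelated phases

single = np.sin(2 * np.pi * freqs[0] * t + phases[0])            # readings of one sine wave
waves = np.sin(2 * np.pi * np.outer(freqs, t) + phases[:, None])
summed = waves.sum(axis=0) / np.sqrt(n_waves)                     # sum, rescaled so its width
                                                                  # stays comparable to one wave

# Crude text histograms: the single wave should look roughly bimodal (peaks near +/-1),
# while the sum should look much closer to the bell shape of Eq. (1).
bins = np.linspace(-3.0, 3.0, 21)                                 # 20 bins of width 0.3
for label, data in (("single sine", single), ("sum of 50 sines", summed)):
    counts, _ = np.histogram(data, bins=bins)
    print(f"{label:>15}: {counts.tolist()}")
</code>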

====Software and data analysis====
Microsoft Excel will be used. The following functionality is needed:
  - Creating a "grid" of numbers, e.g. a row or column of {0, 0.001, 0.002, ... 1}, to model the times at which the recordings take place
    * typing up such an array is tedious, but it's quite easy using the "drag by the corner" trick:
      * type ''0'' into a cell
      * type ''0.001'' into the next cell below
      * select both cells with the mouse
      * drag the bottom-right corner of the selection all the way down, generating the desired sequence
  - Creating a formula in the next column, taking the preceding column as an input, and dragging it all the way to the end of the input column
    * you can create a formula by typing ''='' into a cell, followed by the formula content (e.g. ''SQRT(2)''), then hitting "Enter":
      * ''=SQRT(2)'' will calculate the square root of 2
      * ''=SQRT(A2)'' will calculate the square root of the number in the cell ''A2'' (i.e. in column ''A'', row ''2'')
      * See Exercise 3 below for more detailed instructions
  - Generating random numbers, evenly distributed on the interval [0,1]
    * using the ''=RAND()'' function
    * <color red>Warning</color>: all random numbers, and the numbers that depend on them, will change every time any cell in Excel is modified
    * These can be scaled to any interval [//a,b//] by simply using
      * ''=a+(b-a)*RAND()''
  - Generating normally-distributed random numbers
    * use the formula ''=sigma*NORM.S.INV(RAND())'', where sigma is the desired standard deviation (width) of the curve
    * sigma can be just a number (e.g. 1), or a cell someplace else containing the desired value
  - //<color blue>**Exercise 1**</color>//: generate a column of 200 (and/or, separately, 20000) normally-distributed random numbers with sigma=1
  - Plotting (using a scatter chart) the output column versus the input column
    * with a main title
    * and with axis labels (called "axis titles" in Excel)
  - //<color blue>**Exercise 2**</color>//:
    - create a column containing a grid of //x// values from $-$5 to 5 with step 0.01
    - create another column next to it with the //f(x)// values generated using Eq. (1) with $\sigma\!=\!1$
    - plot the //f// values versus the //x// values
      * use the "Scatter", "Smooth-lined scatter" options
    - add the main title and axis labels (titles)
    - save that plot for the introduction section of your report
  - Automatically counting the number of cells in an array whose values fall within a certain range
    * use a pair of ''COUNTIF()'' functions:
      * To count the number of cells in column ''B'' of your Excel sheet that are greater than or equal to 3.2 but less than 3.3, use
        * ''=COUNTIF(B:B,%%"%%>=3.2%%"%%)-COUNTIF(B:B,%%"%%>=3.3%%"%%)''
      * Here, the first ''COUNTIF'' counts the number of cells that are greater than or equal to 3.2
      * The second ''COUNTIF'' counts the number of cells that are greater than or equal to 3.3
      * The difference between the two counts is the number of cells that belong to the semi-open interval $[3.2,3.3)$.
      * The number 3.2 (if it occurs in column ''B'') will be counted, but the number 3.3 will not be included in the difference count
      * To count the number of cells in the range ''C2:C1001'' that exceed the number in ''D4'' but are not greater than the number in ''D5'', use ''&'':
        * ''=COUNTIF(C2:C1001,%%"%%>%%"%%<color red>&</color>D4)-COUNTIF(C2:C1001,%%"%%>%%"%%<color red>&</color>D5)''
  - //<color blue>**Exercise 3**</color>//: in the same workbook containing the previous exercise,
    - Create a column containing the grid of "bin boundaries" from $-$5.1 to 5.1 with step 0.3
      * <color red>do not overwrite the results of the previous exercises</color>
    - In the next column, create a grid of "bin centers":
      * calculate the first center by typing the formula ''=0.5*(E2+E3)'', assuming the first bin boundary is in ''E2'', the second in ''E3'', etc.
      * hit Enter, select this first bin center with the mouse, and drag the lower-right corner of this cell all the way down
      * this will automatically generate the formulas for all the other bin centers, with the inputs shifting down automatically as you drag
      * to avoid this automatic adjustment of the formula inputs (later in the project, when it is not desirable), you can
        * use ''E$3'' to avoid vertical shifting
        * use ''$E3'' to avoid horizontal shifting when dragging horizontally
        * use ''$E$3'' to avoid any shifting
      * this automatic shifting also happens when copying/pasting cells with formulas, and when cells shift due to deleting or inserting
    - In the next column, calculate the number of random numbers generated in the first exercise that fall within each bin
    - Plot the number of occurrences of the random numbers versus the bin centers.
      * use the "Column", "Clustered column" options
      * use the found numbers of occurrences as "Y values", and the bin centers as "Category (X) axis labels"
      * alternatively, use the "Scatter", "Marked scatter" options:
        * select the bin centers as "X values" and the numbers of occurrences as "Y values"
      * Such a plot is called a [[wp>histogram]]
    - Compare the histograms for small (//N//~200) and large (//N//~20000) numbers of samples to the Gaussian shape plotted in the previous exercise.
      * for a more quantitative comparison, convert the probability density //f(x)// of Eq. (1) into predicted bin counts that can be compared with the observed counts in this exercise (see also the Python sketch right after this list):
        * $f=\frac{P}{\Delta x}$ $=\frac{\text{probability}}{\text{bin width}}$ $=\frac{n_i/N}{\Delta x}$ $=\frac{n_i}{N\Delta x}$
        * $n_{i\,}(\text{predicted})=f\,N\Delta x$
        * here $\Delta x$ is the difference between successive bin boundaries,
        * $n_i$ is the observed (or predicted) number of occurrences in the //i//<sup>th</sup> bin, and
        * $N$ is the total number of observations
  - Save the exercises above for the "Intro", "Theory", and "Methods" sections of your report
    * You can convert any screen content into an image that can be pasted into your report:
      * on a Mac, press the ''Command'', ''Ctrl'', ''Shift'', and ''4'' keys at the same time.
        * Release the keys, then select any screen area by dragging the "cross-hair" pointer with a mouse or a track-pad across the image
        * Release the mouse or trackpad
        * Switch to the editing software (e.g. Word or Pages), and paste at the desired spot
      * on a PC, press ''Alt'' and ''PrtScn'' ("Print Screen") at the same time, then release
        * Switch to the editing software (e.g. Word, PowerPoint, or Paint)
        * Paste at the desired spot
        * Crop the excessive margins as needed
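
For those who want to double-check their Excel histograms, the sketch below redoes the Exercise 1-3 pipeline in Python (''numpy'' assumed; the seed and the value of //N// are arbitrary): it draws //N// normally-distributed numbers, counts them in the Exercise-3 bins, and compares the counts with the prediction $n_{i\,}(\text{predicted})=f\,N\Delta x$.

<code python>
# Optional cross-check of Exercises 1-3: observed bin counts vs. n_i = f*N*dx.
import numpy as np

rng = np.random.default_rng(1)                  # arbitrary seed
N, sigma, dx = 20000, 1.0, 0.3                  # sample size, width of the Gaussian, bin width
samples = rng.normal(0.0, sigma, N)             # Excel analogue: =sigma*NORM.S.INV(RAND())

edges = np.linspace(-5.1, 5.1, 35)              # bin boundaries from -5.1 to 5.1, step 0.3
centers = 0.5 * (edges[:-1] + edges[1:])        # bin centers, like =0.5*(E2+E3)

observed, _ = np.histogram(samples, bins=edges)                            # plays the role of the COUNTIF pairs
f = np.exp(-centers**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)    # Eq. (1) at the bin centers
predicted = f * N * dx                                                     # n_i(predicted) = f*N*dx

for c, o, p in zip(centers, observed, predicted):
    print(f"bin center {c:+5.2f}: observed {int(o):5d}, predicted {p:7.1f}")
</code>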

====Coding tasks====
This is the detailed list of tasks to be accomplished:
  - ...

| ... |
^ Figure 5. ... ^

=====Data=====

=====References=====
