Define a function that is a linear combination of "basis functions," $X_k(x)$, evaluated at all values of the independent variable $x_i$ -- see Eq. (14.3.2) in Numerical Recipes. These are combined into the single design matrix that, once weighted by the uncertainties, is the matrix illustrated in Fig. 14.3.1 of Numerical Recipes.
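In the notation of Numerical Recipes, the model and the weighted design matrix referred to above are
$$y(x) = \sum_{k=1}^{M} a_k X_k(x), \hspace{0.3in} A_{ij} = \frac{X_j(x_i)}{\sigma_i},$$
i.e., Eq. (14.3.2) and Eq. (14.3.4).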
Covariance matrix; see Eq. (14.3.8) and the following discussion:
$$\alpha_{kj} = \sum_{i=1}^N\frac{X_j(x_i)X_k(x_i)}{\sigma_i^2} \hspace{0.2in}\mbox{and} \hspace{0.2in}C_{jk} = [\alpha]_{jk}^{-1}$$
This result gives the covariance matrix if the uncertainties are measurement uncertainties with well-known values. To get the covariance matrix from uncertainties estimated from the observed spread of values, multiply $C_{jk}$ by $\chi_R^2$.
import numpy as np
from scipy import linalg
from scipy import stats
import matplotlib as mpl # As of July 2017 Bucknell computers use v. 2.x
import matplotlib.pyplot as plt
# Following is an IPython magic command that puts figures in the notebook.
# For figures in separate windows, comment out following line and uncomment
# the next line
# Must come before defaults are changed.
%matplotlib notebook
#%matplotlib
# As of Aug. 2017 reverting to 1.x defaults.
# In 2.x text.usetex requires dvipng, texlive-latex-extra, and texlive-fonts-recommended,
# which don't seem to be universal
# See https://stackoverflow.com/questions/38906356/error-running-matplotlib-in-latex-type1cm?
mpl.style.use('classic')
# M.L. modifications of matplotlib defaults using syntax of v.2.0
# More info at http://matplotlib.org/2.0.0/users/deflt_style_changes.html
# Changes can also be put in matplotlibrc file, or effected using mpl.rcParams[]
plt.rc('figure', figsize = (6, 4.5)) # Reduces overall size of figures
plt.rc('axes', labelsize=16, titlesize=14)
plt.rc('figure', autolayout = True) # Adjusts subplot parameters for new size
# Basis functions for linear model: func = a0*X0 + a1*X1 + a2*X2 + ...
def basis(x):
    '''Returns basis functions for linear model.

    The function to be fit is assumed to be of the form
        f(x) = a0*X0 + a1*X1 + a2*X2 + a3*X3 + ...
    where a0, a1, a2, ... are constants, and X0, X1, X2, ... are defined below.
    '''
    X2 = x**2
    X1 = x
    X0 = 0.*X1 + 1.  # Need array of len(x), thus the 0.*X1
    return np.array([X0, X1, X2])
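As a quick illustration (not part of the original notebook), evaluating the basis functions on a small test array shows the layout that func and LinearModelFit rely on: one row per basis function, one column per data point.

xtest = np.array([0., 1., 2.])   # hypothetical test points
print(basis(xtest))              # rows: X0 = 1, X1 = x, X2 = x**2
print(basis(xtest).shape)        # (3, 3): 3 basis functions, 3 points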
def func(x, a):
    '''Given basis functions and coefficients, returns value of linear function'''
    return np.dot(basis(x).T, a)
def LinearModelFit(x, y, u):
    '''Performs linear least squares fit to a set of 2-d data with uncertainties.

    x = array of x values [x0, x1, x2, ...]
    y = array of values of dependent variable [y0, y1, y2, ...]
    u = array of uncertainties for dependent variable [u0, u1, u2, ...]
    '''
    X = basis(x).T       # Basis functions evaluated at all x (the X_j(x_i) of N.R.)
    W = np.diag(1/u)     # Matrix with inverse uncertainties on diagonal
    Xw = np.dot(W, X)    # A_ij of Eq. (14.3.4)
    Yw = np.dot(W, y)    # b_i of Eq. (14.3.5)
    fit = np.linalg.lstsq(Xw, Yw, rcond=None)  # lstsq returns: best values, chi2, ...
    covariance = np.linalg.inv(np.dot(Xw.T, Xw))
    uncertainty = np.sqrt(np.diag(covariance))
    return (fit[0], uncertainty, fit[1], covariance)
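As a consistency check (an addition to the original notebook, using made-up, noise-free test data), solving the normal equations $[\alpha]\,a = [\beta]$ of Eq. (14.3.8) directly should reproduce the lstsq solution:

# Cross-check: normal-equation solution vs. lstsq, on hypothetical test data
xt = np.linspace(0., 5., 6)
yt = 1.0 + 2.0*xt - 0.5*xt**2            # exact quadratic; coefficients [1, 2, -0.5]
ut = np.ones(len(xt))                    # unit uncertainties
Xt = np.dot(np.diag(1/ut), basis(xt).T)  # weighted design matrix A_ij
Yt = yt/ut                               # weighted data vector b_i
alpha = np.dot(Xt.T, Xt)                 # [alpha] of Eq. (14.3.8)
beta = np.dot(Xt.T, Yt)
print(np.linalg.solve(alpha, beta))            # normal-equation solution
print(np.linalg.lstsq(Xt, Yt, rcond=None)[0])  # lstsq solution; should agree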
Generate some noisy data $(x, y)$ with uncertainties $u$, or import the sample data file.
# Cell for generating data; overwritten by following cell if data is coming from file.
x = np.linspace(0, 10, 11)
sigma = 5.0
a = np.array([1.2, 3.4, -0.9])
y = func(x,a) + stats.norm.rvs(0, sigma, size=len(x))
u = sigma*np.ones(len(y))
#np.savetxt("sample.dat", np.array([x,y,u]).T)
data = np.loadtxt("sample.dat")  # Each line in file corresponds to a single data point: x, y, u
x = data.T[0]
y = data.T[1]
u = data.T[2]
The two previous cells would be more "pythonic" as

x, y, u = np.loadtxt("sample.dat", unpack=True)

The unpack=True option transposes the loaded array, so each column can be unpacked into its own variable.
x, y, u = np.loadtxt("sample.dat", unpack=True)
plt.figure(1)
plt.axhline(0, color='magenta')
plt.title("data")
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.xlim(np.min(x)-0.05*(np.max(x)-np.min(x)), np.max(x)+0.05*(np.max(x)-np.min(x)))  # Pad x-range on plot
plt.errorbar(x, y, yerr=u, fmt='o');
a, unc, chi2, cov = LinearModelFit(x,y,u)
a, unc, chi2, cov
for i in range(len(a)):
print("parameter", i,"=", a[i],"+/-", np.sqrt(cov[i,i]))
print("chi2 =", chi2)
print("reduced chi2 = chi2/(11-3) =", chi2/8)
xfine = np.linspace(np.min(x), np.max(x), 201) # "quasi-continuous" set of x's for plotting function
plt.figure(2)
plt.title("data with best fit quadratic",fontsize=14)
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.axhline(0, color='magenta')
plt.xlim(np.min(x)-0.05*(np.max(x)-np.min(x)), np.max(x)+0.05*(np.max(x)-np.min(x)))  # Pad x-range on plot
plt.errorbar(x, y, yerr=u, fmt='o')
plt.plot(xfine, func(xfine, a));
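As an independent check (an addition to the original notebook), np.polyfit performs the same weighted quadratic fit; it expects weights of $1/\sigma$ (not $1/\sigma^2$) and returns coefficients highest power first:

p = np.polyfit(x, y, 2, w=1/u)  # weighted quadratic fit
print(p[::-1])                  # reversed to match the [a0, a1, a2] ordering above
print(a)                        # coefficients from LinearModelFit, for comparison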
Explicit calculation of the reduced chi-square parameter,
\begin{equation} \chi_R^2= \frac{1}{N-c}\times\sum_{i=1}^N \frac{(y_i-f(x_i))^2}{\sigma_i^2}, \end{equation}
to compare with the value returned by lstsq.
np.sum((y-func(x,a))**2/u**2)/(len(x)-3)
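For comparison, the same quantity from the residual sum that lstsq returned in the fitting cell (recall chi2 is a length-1 array):

chi2[0]/(len(x) - len(a))  # should match the explicit calculation above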
version_information is from J.R. Johansson (jrjohansson at gmail.com). See Introduction to scientific computing with Python,
http://nbviewer.jupyter.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-0-Scientific-Computing-with-Python.ipynb,
for more information and instructions for package installation.

If version_information has been installed system wide (as it has been on Bucknell linux computers with shared file systems), continue with the next cell as written. If not, comment out the top line in the next cell and uncomment the second line.
%load_ext version_information
#%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%version_information numpy, scipy, matplotlib