Short appetizers, with relevant references:
T. Guhr, A. Müller-Groeling, H. A. Weidenmüller, Phys. Rep. 299, 189 (1998)
Alan Edelman and Yuyang Wang, "Random Matrix Theory and its Innovative Applications"
and as always Wikipedia and Google are your friends!
from scipy.optimize import curve_fit # use this for fitting
x # this is a 1D array containing sampling points of the data
y # this is a 1D array containing the data at the sampling points
# We define a function to be used for fitting
# in this case we fit a sine
def fun(x,A,w,phi): # The signature is important!
# First argument corresponds to sampling!
return A*sin(w*x+phi) # This is just a simple sine
# with the usual parameters
#fitting is done like this
popt,pcov=curve_fit(fun,x,y)
popt # parameters of the fit go here
sqrt(diag(pcov)) # errors of the parameters are
# obtained from the covariance matrix
fun(x,*popt) # this will evaluate the fitted function at the sampling points

unfolded sample having uniform distribution
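The fitting recipe above can be run end-to-end. Here is a minimal self-contained sketch; the noisy sine data and the starting guess `p0` are made up for illustration (a `p0` near the truth keeps `curve_fit` in the right basin):

```python
import numpy as np
from scipy.optimize import curve_fit

def fun(x, A, w, phi):  # first argument is the sampling points
    return A * np.sin(w * x + phi)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)                                 # sampling points
y = fun(x, 2.0, 1.5, 0.3) + 0.1 * rng.normal(size=x.size)   # noisy data

# start from a guess near the true parameters
popt, pcov = curve_fit(fun, x, y, p0=[2.0, 1.5, 0.3])
perr = np.sqrt(np.diag(pcov))   # one-sigma errors from the covariance matrix
print(popt, perr)
```

Without a reasonable `p0`, a sine fit can easily converge to a wrong local minimum in the frequency `w`.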
Wigner conjectured that the distribution of the unfolded level spacings shows universal behavior...
universality class
normalization
Integrable systems: "uncorrelated" levels can be arbitrarily close
Generic 'chaotic' systems: correlated levels "repel" each other
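The contrast above can be seen numerically. A sketch: take a GOE-like random matrix, crudely unfold the bulk of its spectrum by normalizing the spacings to unit mean, and compare the fraction of very small spacings with uncorrelated (Poissonian, exponential-spacing) levels. The matrix size and the bulk window are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
A = rng.normal(size=(N, N))
H = (A + A.T) / 2                       # GOE-like real symmetric matrix
ev = np.sort(np.linalg.eigvalsh(H))

# crude unfolding: keep the bulk, where the density is roughly constant,
# and normalize the spacings to unit mean
bulk = ev[N // 4: 3 * N // 4]
s = np.diff(bulk)
s = s / s.mean()

# Poisson ("uncorrelated") spacings for comparison
s_poisson = rng.exponential(size=s.size)

# level repulsion: tiny spacings are rare for GOE, common for Poisson
print(np.mean(s < 0.1), np.mean(s_poisson < 0.1))
```

The first fraction should come out much smaller than the second: correlated levels "repel", uncorrelated ones do not.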
from scipy import interpolate # we will need to interpolate the data
ev=eigvalsh(H) # get some eigenvalues
# generate the cumulative distribution of the eigenvalues
# here we use matplotlib's hist
# could use numpy's but it has no built in cumulative histogram ...
hg=hist(ev,100,cumulative=True,density=True) # may need to play with bins
# interpolate the cumulative
# careful! histogram generators give one more bin edge
ipol=interpolate.interp1d(hg[1][1:],hg[0],
fill_value=(0,1),bounds_error=False)
# these last options are needed to treat the edge properly
# unfolded eigenvalues
unfolded_ev=ipol(ev)

Goal: Solve the Schrödinger equation for billiard systems
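Before moving on to billiards, the unfolding recipe above can be assembled into a runnable sketch. It uses `numpy.histogram` instead of matplotlib's `hist` so it runs without a display; the GOE-like test matrix is my choice for illustration:

```python
import numpy as np
from scipy import interpolate

rng = np.random.default_rng(1)
N = 500
A = rng.normal(size=(N, N))
H = (A + A.T) / 2                      # a random test Hamiltonian
ev = np.linalg.eigvalsh(H)

# cumulative distribution of the eigenvalues
counts, edges = np.histogram(ev, bins=100)
cum = np.cumsum(counts) / N            # normalized cumulative histogram

# careful: histogram returns one more bin edge than counts
ipol = interpolate.interp1d(edges[1:], cum,
                            fill_value=(0, 1), bounds_error=False)
# the fill_value/bounds_error options treat the spectrum edges properly

unfolded_ev = ipol(ev)                 # roughly uniform on [0, 1]
```

Mapping the spectrum through its own cumulative distribution is exactly what makes the unfolded sample uniform.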
A hard-wall potential is realized by omitting well-chosen points from the grid!
diag(ones(3))
>>array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
diag(ones(2),1)
>>array([[0., 1., 0.],
[0., 0., 1.],
[0., 0., 0.]])

kron(array([[1,0,0],
[0,1,0],
[0,0,1]]),
array([[1,2],
[3,4]])
)
>>array([[1., 2., 0., 0., 0., 0.],
[3., 4., 0., 0., 0., 0.],
[0., 0., 1., 2., 0., 0.],
[0., 0., 3., 4., 0., 0.],
[0., 0., 0., 0., 1., 2.],
[0., 0., 0., 0., 3., 4.]])

Matrices with entries on
diagonals can be built with
diag()
Use kron() to build hypermatrices
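Putting diag() and kron() together, a sketch of a 2D tight-binding Hamiltonian on an Lx×Ly square lattice (the sizes and the overall sign convention are my choices, not from the slides):

```python
import numpy as np

Lx, Ly = 4, 3
idx = np.eye(Lx)
idy = np.eye(Ly)
odx = np.diag(np.ones(Lx - 1), 1)      # off-diagonal hopping in x
ody = np.diag(np.ones(Ly - 1), 1)      # off-diagonal hopping in y

# hypermatrix: hop in x within each row, and in y between rows
hop = np.kron(idy, odx) + np.kron(ody, idx)
H = -(hop + hop.T)                     # symmetric hopping Hamiltonian
```

Each kron term places one hopping direction on the appropriate (block-)diagonal, which is why identities and single off-diagonals are the only building blocks needed.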
x,y=meshgrid(...) # when defining the lattice
# keep the coordinates close at hand!!
x=x.flatten() # flatten meshgrid generated matrices
y=y.flatten() # so we can use coordinates for indexing
...
H # let this be a Hamiltonian
# of a regular lattice
H_potato=H[:,f(x,y)<0][f(x,y)<0,:] # bool expressions
# can be used for slicing
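As a concrete instance of the boolean-slicing trick, a sketch that carves a circular billiard out of a square lattice; the disk shape function `f` is my choice for illustration:

```python
import numpy as np

L = 20
x, y = np.meshgrid(np.arange(L), np.arange(L))
x = x.flatten()                        # keep flattened coordinates at hand
y = y.flatten()

# square-lattice hopping Hamiltonian built with kron
od = np.diag(np.ones(L - 1), 1)
hop = np.kron(np.eye(L), od) + np.kron(od, np.eye(L))
H = -(hop + hop.T)

# keep only sites inside a disk: f(x, y) < 0
f = (x - L / 2) ** 2 + (y - L / 2) ** 2 - (L / 2.5) ** 2
inside = f < 0
H_disk = H[:, inside][inside, :]       # boolean slicing in both indices
```

Slicing both rows and columns with the same mask keeps the principal submatrix, so the truncated Hamiltonian stays symmetric.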
va=eigvalsh(H) # only eigenvalues
va,ve=eigh(H) # eigenvalues AND eigenvectors
# if using numpy arrays @ is the dot product
# if using numpy matrices * is the dot product
allclose(H@ve[:,i],va[i]*ve[:,i]) # True: the i-th eigenvector and eigenvalue satisfy this
# if coordinates of the tracked degrees of freedom
# are stored in the variables x,y
tripcolor(x,y,abs(ve[:,i])**2) # this will visualize the i-th eigenvector

# these are modules to deal with sparse matrices
import scipy.sparse as ss
import scipy.sparse.linalg as sl
# matrix building functions have sparse alternatives
idL=ss.eye(L) # identity
odL=ss.diags(ones(L-1),1,(L,L)) # off diagonal
ss.kron(A,B) # kron is also here
# cut out the region of interest
keep=f(x,y)<0 # boolean mask, as in the dense case
Hsliced=(H.tocsr())[:,keep][keep,:] # CSR format supports slicing
# casting from other formats
# might be needed
# Only some (strictly not all) of the eigenvalues can be obtained
# Lanczos and Arnoldi algorithms are used in the background
va,ve=sl.eigsh(Hsliced,30,sigma=0.5) # this gets 30 eigenvalues
# and eigenvectors from around 0.5
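The sparse workflow above, end-to-end on a small 2D lattice; the lattice size, the number of eigenpairs, and the shift `sigma` are illustrative choices:

```python
import numpy as np
import scipy.sparse as ss
import scipy.sparse.linalg as sl

L = 30
idL = ss.eye(L)                             # sparse identity
odL = ss.diags(np.ones(L - 1), 1, (L, L))   # sparse off-diagonal
hop = ss.kron(idL, odL) + ss.kron(odL, idL)
H = -(hop + hop.T)                          # sparse 2D lattice Hamiltonian

# shift-invert mode: get 10 eigenpairs from around sigma
va, ve = sl.eigsh(H.tocsc(), 10, sigma=0.5)
```

With `sigma` set, `eigsh` factorizes H - sigma*I internally (shift-invert), which is why a CSC/CSR format is preferred over the COO format that `kron` returns.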