The Walrus Documentation¶
Release: 0.13.0-dev
A library for the calculation of hafnians, Hermite polynomials, and Gaussian boson sampling.
Features¶
Fast calculation of hafnians, loop hafnians, and torontonians of general and certain structured matrices.
An easy-to-use interface for using the loop hafnian in Gaussian quantum state calculations.
Sampling algorithms for hafnians and torontonians of graphs.
Efficient classical methods for approximating the hafnian of non-negative matrices.
Easy-to-use implementations of the multidimensional Hermite polynomials, which can also be used to calculate hafnians of all reductions of a given matrix.
Getting started¶
To get The Walrus installed and running on your system, begin at the download and installation guide. Then, familiarise yourself with some background information on the hafnian and the computational algorithm.
For getting started with using The Walrus in your own code, have a look at the Python tutorial.
Finally, detailed documentation on the code and API is provided.
Support¶
Source Code: https://github.com/XanaduAI/thewalrus
Issue Tracker: https://github.com/XanaduAI/thewalrus/issues
If you are having issues, please let us know, either by email or by posting the issue on our Github issue tracker.
Authors¶
Nicolás Quesada, Brajesh Gupt, and Josh Izaac.
If you are doing research using The Walrus, please cite our paper:
Brajesh Gupt, Josh Izaac and Nicolás Quesada. The Walrus: a library for the calculation of hafnians, Hermite polynomials and Gaussian boson sampling. Journal of Open Source Software, 4(44), 1705 (2019)
License¶
The Walrus library is free and open source, released under the Apache License, Version 2.0.
Installation and Downloads¶
Installation¶
Pre-built binary wheels are available for the following platforms:
|  | macOS 10.6+ | manylinux x86_64 | Windows 64bit |
| --- | --- | --- | --- |
| Python 3.6 | X | X | X |
| Python 3.7 | X | X | X |
| Python 3.8 | X | X | X |
To install, simply run
pip install thewalrus
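As a quick sanity check of the installation (a minimal example, not part of the official test suite), the hafnian of the \(4 \times 4\) all-ones matrix should come out as \((4-1)!! = 3\):

```python
import numpy as np
from thewalrus import hafnian

# The hafnian of the n x n all-ones matrix is (n-1)!!; for n = 4 this is 3.
A = np.ones((4, 4))
print(hafnian(A))  # expected: 3.0
```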
Compiling from source¶
To compile the C++ extension from source, the following dependencies are required in addition to The Walrus's Python package dependencies:
- A C++11 compiler, such as g++ >= 4.8.1, clang >= 3.3, or MSVC >= 14.0/2015
- Eigen3, a C++ header-only library for linear algebra
On Debian-based systems, these can be installed via apt:
$ sudo apt install g++ libeigen3-dev
or using Homebrew on macOS:
$ brew install gcc eigen
Alternatively, you can download the Eigen headers manually:
$ mkdir ~/.local/eigen3 && cd ~/.local/eigen3
$ wget http://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz -O eigen3.tar.gz
$ tar xzf eigen3.tar.gz eigen-eigen-323c052e1731/Eigen --strip-components 1
$ export EIGEN_INCLUDE_DIR=$HOME/.local/eigen3
Note that we export the environment variable EIGEN_INCLUDE_DIR so that The Walrus can find the Eigen3 header files (if it is not provided, The Walrus will by default look in /usr/include/eigen3 and /usr/local/include/eigen3).
Once all dependencies are installed, you can compile the latest stable version of The Walrus library as follows:
$ python -m pip install thewalrus --no-binary :all:
Alternatively, you can compile the latest development version by cloning the git repository, and installing using pip in development mode.
$ git clone https://github.com/XanaduAI/thewalrus.git
$ cd thewalrus && python -m pip install -e .
OpenMP¶
libwalrus
uses OpenMP to parallelize both the permanent and the hafnian calculation. At the moment, this is only supported on Linux with the GNU g++ compiler, due to insufficient OpenMP support in MSVC on Windows and Clang on macOS.
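Because the parallelization uses OpenMP, the number of threads can be controlled through the standard OMP_NUM_THREADS environment variable. A minimal sketch (on platforms where the OpenMP build is supported; the thread count of 4 is arbitrary):

```python
import os

# OpenMP reads OMP_NUM_THREADS when the runtime starts, so set it before importing thewalrus.
os.environ["OMP_NUM_THREADS"] = "4"

import numpy as np
from thewalrus import hafnian

A = np.ones((20, 20))
print(hafnian(A))  # on a supported platform this runs with up to 4 OpenMP threads
```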
Using LAPACK, OpenBLAS, or MKL¶
If you would like to take advantage of the highly optimized matrix routines of LAPACK, OpenBLAS, or MKL, you can optionally compile libwalrus such that Eigen uses these frameworks as backends. As a result, all calls in the libwalrus library to Eigen functions are silently substituted with calls to LAPACK/OpenBLAS/MKL.
For example, for LAPACK integration, make sure you have the lapacke LAPACK bindings installed (sudo apt install liblapacke-dev on Ubuntu-based Linux distributions), and then compile with the environment variable USE_LAPACK=1:
$ USE_LAPACK=1 python -m pip install thewalrus --no-binary :all:
Alternatively, you may pass USE_OPENBLAS=1
to use the OpenBLAS library.
Software tests¶
To ensure that The Walrus library is working correctly after installation, the test suite can be run by navigating to the source code folder and running
$ make test
To run the low-level C++ test suite, Googletest will need to be installed. In Ubuntu-based distributions, this can be done as follows:
sudo apt-get install cmake libgtest-dev
cd /usr/src/googletest/googletest
sudo cmake .
sudo make
sudo cp libgtest* /usr/lib/
sudo mkdir /usr/local/lib/googletest
sudo ln -s /usr/lib/libgtest.a /usr/local/lib/googletest/libgtest.a
sudo ln -s /usr/lib/libgtest_main.a /usr/local/lib/googletest/libgtest_main.a
Alternatively, the latest Googletest release can be installed from source:
sudo apt install cmake
wget -qO - https://github.com/google/googletest/archive/release-1.8.1.tar.gz | tar -xz
cmake -D CMAKE_INSTALL_PREFIX:PATH=$HOME/googletest -D CMAKE_BUILD_TYPE=Release googletest-release-1.8.1
make install
If installing Googletest from source, make sure that the included headers and libraries are available on your include/library paths.
Documentation¶
The Walrus documentation is available online on Read the Docs.
To build it locally, the required packages (pandoc, doxygen, sphinx, sphinxcontrib-bibtex, nbsphinx, breathe, and exhale) can be installed via a combination of apt and pip3 on a Debian-based system:
$ sudo apt install pandoc doxygen
$ pip3 install sphinx sphinxcontrib-bibtex nbsphinx breathe exhale
To build the HTML documentation, go to the top-level directory and run the command
$ make doc
The documentation can then be found in the docs/_build/html/
directory.
Research and contribution¶
Research¶
If you are doing research using The Walrus, please cite our paper:
Brajesh Gupt, Josh Izaac and Nicolás Quesada. The Walrus: a library for the calculation of hafnians, Hermite polynomials and Gaussian boson sampling. Journal of Open Source Software, 4(44), 1705 (2019)
We are always open for collaboration, and can be contacted at research@xanadu.ai.
Contribution¶
If you would like to get involved with The Walrus, or you would like to contribute, simply fork our Github repository, and make a descriptive pull request.
Source Code: https://github.com/XanaduAI/thewalrus
Issue Tracker: https://github.com/XanaduAI/thewalrus/issues
Support¶
If you are having issues, please let us know, either by email or by posting the issue on our Github issue tracker.
We have a mailing list located at: support@xanadu.ai
Quick guide¶
This section provides a quick guide to find which function does what in The Walrus.
| What you want | Corresponding function |
| --- | --- |
| Numerical hafnian | thewalrus.hafnian |
| Symbolic hafnian | thewalrus.reference.hafnian |
| Hafnian of a matrix with repeated rows and columns | thewalrus.hafnian_repeated |
| Hafnians of all possible reductions of a given matrix | thewalrus.hafnian_batched |
| Hafnian samples of a Gaussian state | thewalrus.samples.hafnian_sample_state |
| Torontonian samples of a Gaussian state | thewalrus.samples.torontonian_sample_state |
| Hafnian samples of a graph | thewalrus.samples.hafnian_sample_graph |
| Torontonian samples of a graph | thewalrus.samples.torontonian_sample_graph |
| All probability amplitudes of a pure Gaussian state | thewalrus.quantum.state_vector |
| All matrix elements of a general Gaussian state | thewalrus.quantum.density_matrix |
| A particular probability amplitude of a pure Gaussian state | thewalrus.quantum.pure_state_amplitude |
| A particular matrix element of a general Gaussian state | thewalrus.quantum.density_matrix_element |
| The Fock representation of a Gaussian unitary | thewalrus.quantum.fock_tensor |
Note that all the hafnian functions listed above generalize to loop hafnians.
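As a minimal illustration (the matrix below is just a small example, not taken from the docs), thewalrus.hafnian accepts a loop=True keyword argument to compute the loop hafnian:

```python
import numpy as np
from thewalrus import hafnian

# Adjacency matrix of a 4-vertex graph with a single perfect matching, (1,2)(3,4):
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])

print(hafnian(A))  # 1.0: one perfect matching

# Adding ones on the diagonal and using loop=True counts all matchings,
# including those that leave vertices unmatched (covered by loops).
print(hafnian(A + np.eye(4), loop=True))  # expected: 4
```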
Tutorials and gallery¶
Tutorials¶
Basics of Hafnians and Loop Hafnians¶
Author: Nicolás Quesada
In the background section of The Walrus documentation, some basic ideas related to (loop) hafnians were introduced. This tutorial is a computational exploration of the same ideas.
[1]:
from thewalrus.reference import hafnian as haf_ref
from thewalrus import hafnian
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats=['svg']
A simple loopless graph and the hafnian¶
Let’s consider the following graph with adjacency matrix
[2]:
A = np.array([[0,0,0,1,0,0],
[0,0,0,1,1,0],
[0,0,0,1,1,1],
[1,1,1,0,0,0],
[0,1,1,0,0,0],
[0,0,1,0,0,0]])
It is easy to verify by inspection that the graph in Fig. 1 has only one perfect matching given by the edges (1,4)(2,5)(3,6). We can verify this by calculating the hafnian of the adjacency matrix \(A\)
[3]:
haf_ref(A) # Using the reference implementation
[3]:
1
[4]:
hafnian(A) # Using the default recursive method
[4]:
1
Let’s see what happens if we rescale the adjacency matrix by a scalar \(a\). We’ll use the SymPy library for symbolic manipulation:
[5]:
from sympy import symbols
[6]:
a = symbols("a")
haf_ref(a*A)
[6]:
a**3
The example above shows that one can use the reference implementation not only with numpy arrays but also with symbolic sympy expressions.
A graph with loops and the loop hafnian¶
Now let’s consider a graph with loops (Fig. 2). Its adjacency matrix is now
[7]:
At = np.array([[1,0,0,1,0,0],
[0,0,0,1,1,0],
[0,0,0,1,1,1],
[1,1,1,0,0,0],
[0,1,1,0,1,0],
[0,0,1,0,0,0]])
Note that now the adjacency matrix has nonzero elements on the diagonal. It is also straightforward to see that the graph in Fig. 2 has two perfect matchings, namely, (1,4)(2,5)(3,6) and (1,1)(5,5)(2,4)(3,6).
[8]:
haf_ref(At, loop=True) # Using the reference implementation
[8]:
2
[9]:
hafnian(At, loop=True) # Using the default recursive method
[9]:
2.0000000000000107
We can also use the loop hafnian to count the number of matchings (perfect or otherwise) by taking the adjacency matrix of the loopless graph, putting ones on its diagonal, and calculating the loop hafnian of the resulting matrix. For the graph in Fig. 1 we find
[10]:
haf_ref(A+np.diag([1,1,1,1,1,1]), loop=True)
[10]:
15
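As an independent check (not part of The Walrus), we can count the matchings of the graph in Fig. 1 by brute force over subsets of its edge list; the count agrees with the loop hafnian just computed:

```python
from itertools import combinations

# Edges of the graph in Fig. 1, written 0-indexed to match the rows/columns of A.
edges = [(0, 3), (1, 3), (1, 4), (2, 3), (2, 4), (2, 5)]

def is_matching(subset):
    """A set of edges is a matching if no vertex is used more than once."""
    vertices = [v for edge in subset for v in edge]
    return len(vertices) == len(set(vertices))

count = sum(
    is_matching(subset)
    for k in range(len(edges) + 1)          # matchings of every size, including the empty one
    for subset in combinations(edges, k)
)
print(count)  # 15, in agreement with the loop hafnian of A plus the identity
```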
Benchmarking the Hafnian¶
This tutorial shows how to use The Walrus, a library with a C++ core and a Python interface, to calculate the hafnian.
The Hafnian¶
The hafnian of an \(n\)-by-\(n\) symmetric matrix \(A = A^T\) is defined as
\[\text{haf}(A) = \sum_{\sigma \in \text{PMP}(n)} \prod_{i=1}^{n/2} A_{\sigma(2i-1),\sigma(2i)},\]
where PMP\((n)\) stands for the set of perfect matching permutations of \(n\) (even) objects.
Using the library¶
Import the library in the usual way:
[1]:
from thewalrus import hafnian
To use it we need to pass square numpy arrays as arguments, thus we also must import NumPy:
[2]:
import numpy as np
The library provides functions to compute hafnians of real and complex matrices. The function arguments must be passed as square NumPy arrays.
[3]:
size = 10
nth = 4
matrix = np.ones([size,size])
hafnian(matrix)
[3]:
945.0
[4]:
size = 10
nth = 4
matrix = 1j*np.ones([size,size])
hafnian(matrix)
[4]:
945j
Not surprisingly, the hafnian of a matrix containing only ones is given by \((n-1)!! = \frac{n!}{(n/2)! 2^{n/2}}\)
[5]:
from math import factorial
factorial(size)/(factorial(size//2)*2**(size//2))
[5]:
945.0
Note that when doing floating point computations with large numbers, precision can be lost.
Benchmarking the performance of the code¶
For sizes \(n=2,4,\ldots,40\) we will generate random symmetric matrices and measure the (average) amount of time it takes to calculate their hafnian. The number of samples for each size decreases geometrically, from 1000 samples for \(n=2\) down to 2 samples for \(n=40\). The matrices are built from random Haar-distributed unitaries.
[6]:
a0 = 1000.
anm1 = 2.
n = 20
r = (anm1/a0)**(1./(n-1))
nreps = [(int)(a0*(r**((i)))) for i in range(n)]
[7]:
nreps
[7]:
[1000,
721,
519,
374,
270,
194,
140,
101,
73,
52,
37,
27,
19,
14,
10,
7,
5,
3,
2,
2]
The following function generates random Haar-distributed unitaries of dimension \(n\).
[8]:
from scipy.linalg import qr

def haar_measure(n):
    '''A random matrix distributed with Haar measure.
    See https://arxiv.org/abs/math-ph/0609050,
    "How to generate random matrices from the classical compact groups"
    by Francesco Mezzadri.'''
    z = (np.random.randn(n, n) + 1j*np.random.randn(n, n))/np.sqrt(2.0)
    q, r = qr(z)
    d = np.diagonal(r)
    ph = d/np.abs(d)
    q = np.multiply(q, ph, q)
    return q
Now let’s benchmark the scaling of the calculation with the matrix size
[9]:
import time
times = np.empty(n)
for ind, reps in enumerate(nreps):
    start = time.time()
    for i in range(reps):
        size = 2*(ind+1)
        nth = 1
        matrix = haar_measure(size)
        A = matrix @ matrix.T
        A = 0.5*(A+A.T)
        res = hafnian(A)
    end = time.time()
    times[ind] = (end - start)/reps
    print(2*(ind+1), times[ind])
2 0.00031961894035339354
4 0.00020129009357933858
6 0.0010492227440394878
8 0.001197782429781827
10 0.0013135018172087494
12 0.0017131505553255376
14 0.0013083321707589286
16 0.0031093134738431117
18 0.0061694171330700185
20 0.013150962499471812
22 0.032473312841879355
24 0.06553327595746075
26 0.15734933551989103
28 0.3482480730329241
30 0.7330920457839966
32 1.6255977153778076
34 3.6591072559356688
36 8.095563332239786
38 18.329116582870483
40 41.31634557247162
We can now plot the (average) time it takes to calculate the hafnian vs. the size of the matrix:
[10]:
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_formats=['svg']
[11]:
plt.semilogy(2*np.arange(1,n+1),times,"+")
plt.xlabel(r"Matrix size $n$")
plt.ylabel(r"Time in seconds for 4 threads")
[11]:
Text(0, 0.5, 'Time in seconds for 4 threads')
The specs of the computer on which this benchmark was performed are:
[12]:
!cat /proc/cpuinfo|head -19
processor : 0
vendor_id : AuthenticAMD
cpu family : 21
model : 101
model name : AMD A12-9800 RADEON R7, 12 COMPUTE CORES 4C+8G
stepping : 1
microcode : 0x6006118
cpu MHz : 3992.225
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 16
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
Benchmarking the Permanent¶
This tutorial shows how to use the permanent function perm of The Walrus, which calculates the permanent using Ryser’s algorithm.
The Permanent¶
The permanent of an \(n\)-by-\(n\) matrix A = \(a_{i,j}\) is defined as
\(\text{perm}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^n a_{i,\sigma(i)}.\)
The sum here extends over all elements \(\sigma\) of the symmetric group \(S_n\); i.e. over all permutations of the numbers \(1, 2, \ldots, n\). (see Wikipedia).
The function thewalrus.perm
implements Ryser’s algorithm to calculate the permanent of an arbitrary matrix using Gray code ordering.
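To make the definition above concrete, here is a brute-force permanent computed directly as a sum over permutations; it is exponentially slower than Ryser's algorithm and is meant only as a cross-check against thewalrus.perm on small matrices:

```python
import numpy as np
from itertools import permutations
from thewalrus import perm

def perm_naive(A):
    """Permanent straight from the definition: sum over all permutations sigma
    of the product of A[i, sigma(i)]. O(n * n!) time, for illustration only."""
    n = A.shape[0]
    return sum(
        np.prod([A[i, sigma[i]] for i in range(n)])
        for sigma in permutations(range(n))
    )

A = np.random.rand(5, 5)
print(perm_naive(A), perm(A))  # the two values agree up to floating-point error
```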
Once installed or compiled, one imports the library in the usual way:
[1]:
from thewalrus import perm
To use it we need to pass square NumPy arrays, thus we also import NumPy:
[2]:
import numpy as np
import time
The library provides functions to compute permanents of real and complex matrices. The functions take the matrix as their argument; the number of threads used for the computation is determined by OpenMP.
[3]:
size = 20
matrix = np.ones([size,size])
perm(matrix)
[3]:
2.43290200817664e+18
[4]:
size = 20
matrix = np.ones([size,size], dtype=np.complex128)
perm(matrix)
[4]:
2.43290200817664e+18
Not surprisingly, the permanent of a matrix containing only ones equals the factorial of the dimension of the matrix, in our case \(20!\).
[5]:
from math import factorial
factorial(20)
[5]:
2432902008176640000
Benchmarking the performance of the code¶
For sizes \(n=1,\ldots,28\) we will generate random unitary matrices and measure the (average) amount of time it takes to calculate their permanent. The number of samples for each size is geometrically distributed, with 1000 samples for size \(n=1\) and 10 samples for \(n=28\). The unitaries will be random Haar distributed.
[6]:
a0 = 1000.
anm1 = 10.
n = 28
r = (anm1/a0)**(1./(n-1))
nreps = [(int)(a0*(r**((i)))) for i in range(n)]
[7]:
nreps
[7]:
[1000,
843,
710,
599,
505,
426,
359,
303,
255,
215,
181,
153,
129,
108,
91,
77,
65,
55,
46,
39,
33,
27,
23,
19,
16,
14,
11,
10]
The following function generates random Haar-distributed unitaries of dimension \(n\).
[8]:
from scipy.linalg import qr

def haar_measure(n):
    '''A random matrix distributed with Haar measure.
    See https://arxiv.org/abs/math-ph/0609050,
    "How to generate random matrices from the classical compact groups"
    by Francesco Mezzadri.'''
    z = (np.random.randn(n, n) + 1j*np.random.randn(n, n))/np.sqrt(2.0)
    q, r = qr(z)
    d = np.diagonal(r)
    ph = d/np.abs(d)
    q = np.multiply(q, ph, q)
    return q
Now let’s benchmark the scaling of the calculation with the matrix size:
[9]:
times = np.empty(n)
for ind, reps in enumerate(nreps):
    #print(ind+1, reps)
    start = time.time()
    for i in range(reps):
        size = ind+1
        nth = 1
        matrix = haar_measure(size)
        res = perm(matrix)
    end = time.time()
    times[ind] = (end - start)/reps
    print(ind+1, times[ind])
1 0.00028934645652770995
2 0.0001495122061081204
3 0.00015489853603739133
4 0.0004637452318194713
5 0.00017665730844629873
6 0.0006603159255265071
7 0.0006937515768832151
8 0.0008643358060629061
9 0.0004252480525596469
10 0.0008683936540470567
11 0.0006751460923674357
12 0.001460242115594203
13 0.002330223719278971
14 0.00457644021069562
15 0.010608830294766268
16 0.01871370959591556
17 0.04012102713951698
18 0.08530152927745473
19 0.17479530106420102
20 0.3602719612610646
21 0.7553631681384463
22 1.603381006805985
23 3.432816049327021
24 7.144709775322362
25 14.937034338712692
26 31.058412892477854
27 64.35744528336959
28 136.51278076171874
We can now plot the (average) time it takes to calculate the permanent vs. the size of the matrix:
[10]:
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_formats=['svg']
[11]:
plt.semilogy(np.arange(1,n+1),times,"+")
plt.xlabel(r"Matrix size $n$")
plt.ylabel(r"Time in seconds for 4 threads")
[11]:
Text(0, 0.5, 'Time in seconds for 4 threads')
We can also fit the data to the theoretical scaling \(c\, n\, 2^n\) and use it to extrapolate the run time for larger sizes:
[12]:
def fit(n, c):
    return c*n*2**n
[13]:
from scipy.optimize import curve_fit
popt, pcov = curve_fit(fit, np.arange(1,n+1)[15:-1],times[15:-1])
The scaling prefactor is
[14]:
popt[0]
[14]:
1.776816058334955e-08
And we can use it to extrapolate the time it takes to calculate permanents of bigger dimensions
[15]:
flags = [3600,3600*24*7, 3600*24*365, 3600*24*365*1000]
labels = ["1 hour", "1 week", "1 year", "1000 years"]
plt.semilogy(np.arange(1,n+1), times, "+", np.arange(1,61), fit(np.arange(1,61),popt[0]))
plt.xlabel(r"Matrix size $n$")
plt.ylabel(r"Time in seconds for single thread")
plt.hlines(flags,0,60,label="1 hr",linestyles=u'dotted')
for i in range(len(flags)):
    plt.text(0, 2*flags[i], labels[i])
The specs of the computer on which this benchmark was performed are:
[16]:
!cat /proc/cpuinfo|head -19
processor : 0
vendor_id : AuthenticAMD
cpu family : 21
model : 101
model name : AMD A12-9800 RADEON R7, 12 COMPUTE CORES 4C+8G
stepping : 1
microcode : 0x6006118
cpu MHz : 2709.605
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 16
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
The tutorials above introduce core mathematical concepts provided by The Walrus, including the hafnian, the loop hafnian, and the permanent.
Non-Gaussian states gallery¶
Here you can find a curated list of Gaussian circuits and photon-number-resolved measurements to prepare non-Gaussian states of interest in quantum optics, information, metrology and computing.
The original idea of using general Gaussian states and photon-number-resolved measurements to generate complex non-Gaussian states was introduced by K.K. Sabapathy, H. Qi, J. Izaac, and C. Weedbrook in Phys. Rev. A 100, 012326, (2019) and it was further theoretically analyzed by D. Su, C.R. Myers, and K.K. Sabapathy in Phys. Rev. A 100, 052301, (2019) and arXiv:1902.02331.
If you develop a new circuit and measurement scheme to prepare a non-Gaussian state, add it to the gallery!
Heralded Preparation of Fock States¶
Author: Nicolás Quesada
In this tutorial we study how to prepare the \(n=2\) Fock state by heralding a two-mode squeezed vacuum state using photon-number-resolving detectors. The idea of preparing Fock states by heralding goes back to at least the following paper: “Experimental realization of a localized one-photon state” Phys. Rev. Lett. 56, 58 (1986) by C. K. Hong and L. Mandel.
[1]:
import numpy as np
from qutip import wigner, Qobj, wigner_cmap
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
import strawberryfields as sf
from strawberryfields.ops import *
from thewalrus.quantum import state_vector, density_matrix
Ideal preparation¶
Here we set up some basic parameters, like the value of the photon-number-resolving detectors we will use to herald and the amount of squeezing to use. We pick the squeezing parameter to be \(r = \text{arcsinh}\left(\sqrt{n}\right)\) so that the mean photon number in each of the modes is precisely \(\bar{n} = \sinh^2 r = n\). This will maximize the probability of generating the given Fock state with \(n\) photons. Note that we could pick a different value of \(n\).
[2]:
n = 2
r = np.arcsinh(np.sqrt(n))
Now we set up a 2-mode quantum circuit in Strawberry Fields and obtain the covariance matrix and vector of means of the Gaussian state.
[3]:
nmodes = 2
prog = sf.Program(nmodes)
eng = sf.Engine("gaussian")
with prog.context as q:
    S2gate(r)|q
state = eng.run(prog).state
mu = state.means()
cov = state.cov()
[4]:
# Here we use the sf circuit drawer and standard linux utilities
# to generate an svg representing the circuit
file, _ = prog.draw_circuit()
filepdf = file[0:-3]+"pdf"
filepdf = filepdf.replace("circuit_tex/","")
filecrop = filepdf.replace(".pdf","-crop.pdf")
name = "fock_circuit.svg"
!pdflatex $file > /dev/null 2>&1
!pdfcrop $filepdf > /dev/null 2>&1
!pdf2svg $filecrop $name
Here is a graphical representation of the circuit. It is always assumed that the input is vacuum in all the modes.
We can now inspect the covariance matrix and vector of means. Note that the vector of means is zero since we did not use any displacement gates in the circuit above.
[5]:
print(np.round(mu,10))
print(np.round(cov,10))
[0. 0. 0. 0.]
[[ 5. 4.89897949 0. -0. ]
[ 4.89897949 5. -0. 0. ]
[ 0. -0. 5. -4.89897949]
[-0. 0. -4.89897949 5. ]]
We now use The Walrus to obtain the Fock representation of the heralded Gaussian state when mode 1 is heralded in the value \(n=2\), storing it in the variable psi. We also calculate the probability of success in heralding in the variable p_psi.
[6]:
cutoff = 10
psi = state_vector(mu, cov, post_select={1:n}, normalize=False, cutoff=cutoff)
p_psi = np.linalg.norm(psi)
psi = psi/p_psi
print("The probability of successful heralding is ", np.round(p_psi**2,5))
The probability of successful heralding is 0.14815
We now plot the photon-number distribution of the heralded state. Note that the state is exactly a Fock state with two photons.
[7]:
plt.bar(np.arange(cutoff),np.abs(psi)**2)
plt.xlabel("$i$")
plt.ylabel(r"$p_i$")
plt.show()

We can now plot the Wigner function of the heralded state,
[8]:
grid = 100
xvec = np.linspace(-5,5,grid)
Wp = wigner(Qobj(psi), xvec, xvec)
wmap = wigner_cmap(Wp)
sc1 = np.max(Wp)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 1, figsize=(5, 4))
plt1 = axes.contourf(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.set_title("Wigner function of the heralded state");
cb1 = fig.colorbar(plt1, ax=axes)
fig.tight_layout()
plt.show()

and a cut of the Wigner function along \(x=0\).
[9]:
plt.plot(xvec, Wp[:,grid//2])
plt.title(r"$W(0,p)$")
plt.xlabel(r"p")
plt.show()

Adding Loss¶
We can now study what happens when loss in the heralding arm is increased. We will add loss in the heralding arm by varying the efficiency of the detector which is parametrized by a transmission \(\eta\) ranging from \(\eta = 50\%\) to \(\eta = 100\%\) (ideal operation).
[10]:
eta_vals = np.arange(1.,0.45,-0.05)
fidelities = np.zeros_like(eta_vals)
success_p = np.zeros_like(eta_vals)
nmodes = 2
for i,eta in enumerate(eta_vals):
    prog = sf.Program(nmodes)
    eng = sf.Engine("gaussian")
    with prog.context as q:
        S2gate(r)|q
        LossChannel(eta)|q[1]
    state = eng.run(prog).state
    mu = state.means()
    cov = state.cov()
    rho = density_matrix(mu, cov, post_select={1:n}, normalize=False, cutoff=cutoff)
    success_p[i] = np.real_if_close(np.trace(rho))
    fidelities[i] = np.real_if_close(psi.conj() @ rho @ psi/success_p[i])
We now plot the probability of success of the heralding scheme as a function of the transmission,
[11]:
plt.plot(eta_vals, success_p)
plt.xlabel(r"Transmission $\eta$")
plt.ylabel(r"Success probability")
plt.show()

and similarly study the fidelity of the heralded state with respect to the ideal (lossless) state.
[12]:
plt.plot(eta_vals, fidelities)
plt.xlabel(r"Transmission $\eta$")
plt.ylabel(r"Fidelity with ideal state")
plt.show()

We can also look at the photon-number distribution of the nonideal state. Notice that it has components along many different Fock states with \(n>2\).
[13]:
plt.bar(np.arange(cutoff),np.real_if_close(np.diag(rho/np.trace(rho))))
plt.xlabel(r"$i$")
plt.ylabel(r"$p_i$")
plt.show()

Now we plot the Wigner function of the heralded state for \(\eta = 50\%\)
[14]:
sc1 = np.max(Wp)
W = wigner(Qobj(rho/np.trace(rho)), xvec, xvec)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 2, figsize=(7.5, 3),sharey=True)
axes[0].contourf(xvec, xvec, Wp, 60,cmap=cm.RdBu, norm=nrm)
axes[0].contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes[0].set_title(r"$W(x,p)$ for $\eta = 1.0$");
cb1 = fig.colorbar(plt1, ax=axes[0])
plt2 = axes[1].contourf(xvec, xvec, W ,60, cmap=cm.RdBu, norm=nrm)
plt2 = axes[1].contour(xvec, xvec, W ,60, cmap=cm.RdBu, norm=nrm)
axes[1].set_title(r"$W(x,p)$ for $\eta = 0.5$");
cb2 = fig.colorbar(plt1, ax=axes[1])
fig.tight_layout()
plt.show()

and also a cut of the Wigner function along \(x=0\)
[15]:
plt.plot(xvec, W[:,grid//2], label=r"$\eta=0.5$")
plt.plot(xvec, Wp[:,grid//2], label=r"$\eta=1.0$")
plt.title(r"$W(0,p)$")
plt.xlabel(r"$p$")
plt.legend()
plt.show()

[16]:
%reload_ext version_information
%version_information qutip, strawberryfields, thewalrus
[16]:
| Software | Version |
| --- | --- |
| Python | 3.7.5 64bit [GCC 7.3.0] |
| IPython | 7.10.1 |
| OS | Linux 4.15.0 72 generic x86_64 with debian stretch sid |
| qutip | 4.4.1 |
| strawberryfields | 0.13.0-dev |
| thewalrus | 0.11.0-dev |

Mon Dec 30 10:10:38 2019 EST
Kitten states via photon subtraction¶
Author: Nicolás Quesada
In this tutorial we study how to prepare a cat state with a small amplitude (i.e. a kitten state) by doing photon subtraction. This idea goes back to the following paper: “Generating Schrödinger-cat-like states by means of conditional measurements on a beam splitter” Phys. Rev. A 55, 3184, (1997) by M. Dakna, T. Anhut, T. Opatrný, L. Knöll, and D.-G. Welsch.
[1]:
import numpy as np
from qutip import wigner, Qobj, wigner_cmap
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
import strawberryfields as sf
from strawberryfields.ops import *
from thewalrus.quantum import state_vector, density_matrix
Ideal preparation¶
Here we set up some basic parameters, like the value of the photon-number-resolving detectors we will use to herald and the amount of squeezing to use. We pick the squeezing parameter to be \(r = -1.0\). We also fix the beam-splitter angle to have a high transmissivity (i.e. \(\theta \ll 1\)).
[2]:
n = 1
r = -1.0
theta = np.arccos(np.sqrt(0.97))
theta
[2]:
0.17408301063648038
Now we set up a 2-mode quantum circuit in Strawberry Fields and obtain the covariance matrix and vector of means of the Gaussian state.
[3]:
nmodes = 2
prog = sf.Program(nmodes)
eng = sf.Engine("gaussian")
with prog.context as q:
    Sgate(r)|q[1]
    BSgate(theta,0.0)|q
state = eng.run(prog).state
mu = state.means()
cov = state.cov()
[4]:
# Here we use the sf circuit drawer and standard linux utilities
# to generate an svg representing the circuit
file, _ = prog.draw_circuit()
filepdf = file[0:-3]+"pdf"
filepdf = filepdf.replace("circuit_tex/","")
filecrop = filepdf.replace(".pdf","-crop.pdf")
name = "kitten_circuit.svg"
!pdflatex $file > /dev/null 2>&1
!pdfcrop $filepdf > /dev/null 2>&1
!pdf2svg $filecrop $name
Here is a graphical representation of the circuit. It is always assumed that the input is vacuum in all the modes.
We can now inspect the covariance matrix and vector of means. Note that the vector of means is zero since we did not use any displacement gates in the circuit above.
[5]:
print(np.round(mu,10))
print(np.round(cov,10))
[0. 0. 0. 0.]
[[ 1.19167168 -1.08989133 -0. 0. ]
[-1.08989133 7.19738442 0. -0. ]
[-0. 0. 0.97406006 0.14750075]
[ 0. -0. 0.14750075 0.16127522]]
We now use The Walrus to obtain the Fock representation of the photon-subtracted state when mode 0 is heralded in the value \(n=1\) in the variable psi
. We also calculate the probability of success in heralding in the variable p_psi
.
[6]:
cutoff = 20
psi = state_vector(mu, cov, post_select={0:n}, normalize=False, cutoff=cutoff)
p_psi = np.linalg.norm(psi)
psi = psi/p_psi
print("The probability of successful heralding is ", np.round(p_psi**2,5))
The probability of successful heralding is 0.03551
We now plot the photon-number distribution of the heralded state. Note that the state only has odd photon components,
[7]:
plt.bar(np.arange(cutoff),np.abs(psi)**2)
plt.xlabel("$i$")
plt.ylabel(r"$p_i$")
plt.show()

We can now plot the Wigner function of the heralded state,
[8]:
grid = 100
xvec = np.linspace(-5,5,grid)
Wp = wigner(Qobj(psi), xvec, xvec)
wmap = wigner_cmap(Wp)
sc1 = np.max(Wp)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 1, figsize=(5, 4))
plt1 = axes.contourf(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.set_title("Wigner function of the heralded state");
cb1 = fig.colorbar(plt1, ax=axes)
fig.tight_layout()
plt.show()

and a cut of the Wigner function along \(p=0\).
[9]:
plt.plot(xvec, Wp[grid//2,:])
plt.title(r"$W(x,0)$")
plt.xlabel(r"$x$")
plt.show()

Adding Loss¶
We can now study what happens when loss in the heralding arm is increased. We will add loss in the heralding arm by varying the efficiency of the detector which is parametrized by a transmission \(\eta\) ranging from \(\eta = 50\%\) to \(\eta = 100\%\) (ideal operation).
[10]:
eta_vals = np.arange(1.,0.45,-0.05)
fidelities = np.zeros_like(eta_vals)
success_p = np.zeros_like(eta_vals)
nmodes = 2
for i,eta in enumerate(eta_vals):
    prog = sf.Program(nmodes)
    eng = sf.Engine("gaussian")
    with prog.context as q:
        Sgate(r)|q[1]
        BSgate(theta,0.0)|q
        LossChannel(eta)|q[0]
    state = eng.run(prog).state
    mu = state.means()
    cov = state.cov()
    rho = density_matrix(mu, cov, post_select={0:n}, normalize=False, cutoff=cutoff)
    success_p[i] = np.real_if_close(np.trace(rho))
    fidelities[i] = np.real_if_close(psi.conj() @ rho @ psi/success_p[i])
We now plot the probability of success of the heralding scheme as a function of the transmission.
[11]:
plt.plot(eta_vals, success_p)
plt.xlabel(r"Transmission $\eta$")
plt.ylabel(r"Success probability")
plt.show()

and similarly study the fidelity of the heralded state with respect to the ideal (lossless) state.
[12]:
plt.plot(eta_vals, fidelities)
plt.xlabel(r"Transmission $\eta$")
plt.ylabel(r"Fidelity with ideal state")
plt.show()

We can also look at the photon number distribution of the nonideal state.
[13]:
plt.bar(np.arange(cutoff),np.real_if_close(np.diag(rho/np.trace(rho))))
plt.xlabel(r"$i$")
plt.ylabel(r"$p_i$")
plt.show()

Now we plot the Wigner function of the heralded state for \(\eta = 50\%\).
[14]:
sc1 = np.max(Wp)
W = wigner(Qobj(rho/np.trace(rho)), xvec, xvec)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 2, figsize=(7.5, 3),sharey=True)
axes[0].contourf(xvec, xvec, Wp, 60,cmap=cm.RdBu, norm=nrm)
axes[0].contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes[0].set_title(r"$W(x,p)$ for $\eta = 1.0$");
cb1 = fig.colorbar(plt1, ax=axes[0])
plt2 = axes[1].contourf(xvec, xvec, W ,60, cmap=cm.RdBu, norm=nrm)
plt2 = axes[1].contour(xvec, xvec, W ,60, cmap=cm.RdBu, norm=nrm)
axes[1].set_title(r"$W(x,p)$ for $\eta = 0.5$");
cb2 = fig.colorbar(plt1, ax=axes[1])
fig.tight_layout()
plt.show()

and also a cut of the Wigner function along \(p=0\).
[15]:
plt.plot(xvec, W[grid//2,:], label=r"$\eta=0.5$")
plt.plot(xvec, Wp[grid//2,:], label=r"$\eta=1.0$")
plt.title(r"$W(x,0)$")
plt.xlabel(r"$x$")
plt.legend()
plt.show()

Note that unlike for the case of a heralded Fock state, here loss in the heralding arms has very little effect. For a simple explanation of why this is the case cf. Sec. IV.B. of “Simulating realistic non-Gaussian state preparation” Phys. Rev. A 100, 022341 (2019) by N. Quesada, L. G. Helt, J. Izaac, J. M. Arrazola, R. Shahrokhshahi, C. R. Myers, and K. K. Sabapathy.
[16]:
%reload_ext version_information
%version_information qutip, strawberryfields, thewalrus
[16]:
| Software | Version |
| --- | --- |
| Python | 3.7.5 64bit [GCC 7.3.0] |
| IPython | 7.10.1 |
| OS | Linux 4.15.0 72 generic x86_64 with debian stretch sid |
| qutip | 4.4.1 |
| strawberryfields | 0.13.0-dev |
| thewalrus | 0.11.0-dev |

Mon Dec 30 10:12:11 2019 EST
Cubic phase states using a circuit with three modes¶
Author: Nicolás Quesada
In this tutorial we study how to prepare a cubic phase state \(| \psi \rangle \sim | 0 \rangle +i a \sqrt{\frac{3}{2}} | 1 \rangle + i a | 3 \rangle\) by heralding two modes of a three mode circuit using photon-number-resolving detectors. This idea was proposed in “Production of photonic universal quantum gates enhanced by machine learning” Phys. Rev. A 100, 012326 (2019) by Krishna Kumar Sabapathy, Haoyu Qi, Josh Izaac, and Christian Weedbrook.
[1]:
import numpy as np
from qutip import wigner, Qobj, wigner_cmap
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
import strawberryfields as sf
from strawberryfields.ops import *
from thewalrus.quantum import state_vector, density_matrix
Ideal preparation¶
Here we set up some basic parameters, like the value of the photon-number-resolving detectors we will use to herald and the amount of squeezing and displacement to use. These parameters are taken from the reference cited above.
[2]:
# the Fock state measurement of mode 0 to be post-selected
m1 = 1
# the Fock state measurement of mode 1 to be post-selected
m2 = 2
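# squeezing magnitudes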
sq_r = [0.71, 0.67, -0.42]
# squeezing phase
sq_phi = [-2.07, 0.06, -3.79]
# displacement magnitudes
d_r = [-0.02, 0.34, 0.02]
# beamsplitter theta
bs_theta1, bs_theta2, bs_theta3 = [-1.57, 0.68, 2.5]
# beamsplitter phi
bs_phi1, bs_phi2, bs_phi3 = [0.53, -4.51, 0.72]
Now we set up a 3-mode quantum circuit in Strawberry Fields and obtain the covariance matrix and vector of means of the Gaussian state.
[3]:
nmodes = 3
prog = sf.Program(nmodes)
eng = sf.Engine("gaussian")
with prog.context as q:
    for k in range(3):
        Sgate(sq_r[k], sq_phi[k]) | q[k]
        Dgate(d_r[k]) | q[k]
    BSgate(bs_theta1, bs_phi1) | (q[0], q[1])
    BSgate(bs_theta2, bs_phi2) | (q[1], q[2])
    BSgate(bs_theta3, bs_phi3) | (q[0], q[1])
state = eng.run(prog).state
mu = state.means()
cov = state.cov()
[4]:
# Here we use the sf circuit drawer and standard linux utilities
# to generate an svg representing the circuit
file, _ = prog.draw_circuit()
filepdf = file[0:-3]+"pdf"
filepdf = filepdf.replace("circuit_tex/","")
filecrop = filepdf.replace(".pdf","-crop.pdf")
name = "cubic_circuit.svg"
!pdflatex $file > /dev/null 2>&1
!pdfcrop $filepdf > /dev/null 2>&1
!pdf2svg $filecrop $name
Here is a graphical representation of the circuit. It is always assumed that the input is vacuum in all the modes.
We can now inspect the covariance matrix and vector of means. Note that the vector of means is non-zero since we did use displacement gates in the circuit above.
[5]:
print(np.round(mu,10))
print(np.round(cov,10))
[-0.50047867 0.37373598 0.01421683 0.26999427 0.04450994 0.01903583]
[[ 1.57884241 0.81035494 1.03468307 1.14908791 0.09179507 -0.11893174]
[ 0.81035494 1.06942863 0.89359234 0.20145142 0.16202296 0.4578259 ]
[ 1.03468307 0.89359234 1.87560498 0.16915661 1.0836528 -0.09405278]
[ 1.14908791 0.20145142 0.16915661 2.37765137 -0.93543385 -0.6544286 ]
[ 0.09179507 0.16202296 1.0836528 -0.93543385 2.78903152 -0.76519088]
[-0.11893174 0.4578259 -0.09405278 -0.6544286 -0.76519088 1.51724222]]
We now use The Walrus to obtain the Fock representation of the heralded Gaussian state when modes 0 and 1 are heralded in the values \(m_1=1\) and \(m_2=2\), storing it in the variable psi. We also calculate the probability of success in heralding in the variable p_psi.
[6]:
cutoff = 10
psi = state_vector(mu, cov, post_select={0: m1, 1: m2}, normalize=False, cutoff=cutoff)
p_psi = np.linalg.norm(psi)
psi = psi/p_psi
print("The probability of successful heralding is ", np.round(p_psi**2,5))
The probability of successful heralding is 0.02016
We now plot the photon-number distribution of the heralded state. Note that the state has zero support on the Fock state with \(i=2\) photons.
[7]:
plt.bar(np.arange(cutoff),np.abs(psi)**2)
plt.xlabel("$i$")
plt.ylabel(r"$p_i$")
plt.show()

We can now plot the Wigner function of the heralded state,
[8]:
grid = 100
xvec = np.linspace(-5,5,grid)
Wp = wigner(Qobj(psi), xvec, xvec)
wmap = wigner_cmap(Wp)
sc1 = np.max(Wp)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 1, figsize=(5, 4))
plt1 = axes.contourf(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.set_title("Wigner function of the heralded state");
cb1 = fig.colorbar(plt1, ax=axes)
fig.tight_layout()
plt.show()

and a cut of the Wigner function along \(x=0\).
[9]:
plt.plot(xvec, Wp[:,grid//2])
plt.title(r"$W(0,p)$")
plt.xlabel(r"p")
plt.show()

Adding Loss¶
We can now study what happens when loss in the heralding arm is increased. We will add loss in the heralding arm by varying the efficiency of the detector which is parametrized by a transmission \(\eta\) ranging from \(\eta = 50\%\) to \(\eta = 100\%\) (ideal operation).
[10]:
eta_vals = np.arange(1.,0.45,-0.05)
fidelities = np.zeros_like(eta_vals)
success_p = np.zeros_like(eta_vals)
nmodes = 3
for i,eta in enumerate(eta_vals):
    prog = sf.Program(nmodes)
    eng = sf.Engine("gaussian")
    with prog.context as q:
        for k in range(3):
            Sgate(sq_r[k], sq_phi[k]) | q[k]
            Dgate(d_r[k]) | q[k]
        BSgate(bs_theta1, bs_phi1) | (q[0], q[1])
        BSgate(bs_theta2, bs_phi2) | (q[1], q[2])
        BSgate(bs_theta3, bs_phi3) | (q[0], q[1])
        LossChannel(eta)|q[0]
        LossChannel(eta)|q[1]
    state = eng.run(prog).state
    mu = state.means()
    cov = state.cov()
    rho = density_matrix(mu, cov, post_select={0: m1, 1: m2}, normalize=False, cutoff=cutoff)
    success_p[i] = np.real_if_close(np.trace(rho))
    fidelities[i] = np.real_if_close(psi.conj() @ rho @ psi/success_p[i])
We now plot the probability of success of the heralding scheme as a function of the transmission,
[11]:
plt.plot(eta_vals, success_p)
plt.xlabel(r"Transmission $\eta$")
plt.ylabel(r"Success probability")
plt.show()

and similarly study the fidelity of the heralded state with respect to the ideal (lossless) state.
[12]:
plt.plot(eta_vals, fidelities)
plt.xlabel(r"Transmission $\eta$")
plt.ylabel(r"Fidelity with ideal state")
plt.show()

We can also look at the photon number distribution of the nonideal state. Notice that now there is a two photon component.
[13]:
plt.bar(np.arange(cutoff),np.real_if_close(np.diag(rho/np.trace(rho))))
plt.xlabel(r"$i$")
plt.ylabel(r"$p_i$")
plt.show()

Now we plot the Wigner function of the heralded state for \(\eta = 50\%\).
[14]:
sc1 = np.max(Wp)
W = wigner(Qobj(rho/np.trace(rho)), xvec, xvec)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 2, figsize=(7.5, 3),sharey=True)
axes[0].contourf(xvec, xvec, Wp, 60,cmap=cm.RdBu, norm=nrm)
axes[0].contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes[0].set_title(r"$W(x,p)$ for $\eta = 1.0$");
cb1 = fig.colorbar(plt1, ax=axes[0])
plt2 = axes[1].contourf(xvec, xvec, W ,60, cmap=cm.RdBu, norm=nrm)
plt2 = axes[1].contour(xvec, xvec, W ,60, cmap=cm.RdBu, norm=nrm)
axes[1].set_title(r"$W(x,p)$ for $\eta = 0.5$");
cb2 = fig.colorbar(plt1, ax=axes[1])
fig.tight_layout()
plt.show()

and also a cut of the Wigner function along \(x=0\)
[15]:
plt.plot(xvec, W[:,grid//2], label=r"$\eta=0.5$")
plt.plot(xvec, Wp[:,grid//2], label=r"$\eta=1.0$")
plt.title(r"$W(0,p)$")
plt.xlabel(r"$p$")
plt.legend()
plt.show()

[16]:
%reload_ext version_information
%version_information qutip, strawberryfields, thewalrus
[16]:
| Software | Version |
| --- | --- |
| Python | 3.7.5 64bit [GCC 7.3.0] |
| IPython | 7.10.1 |
| OS | Linux 4.15.0 72 generic x86_64 with debian stretch sid |
| qutip | 4.4.1 |
| strawberryfields | 0.13.0-dev |
| thewalrus | 0.11.0-dev |

Mon Dec 30 10:14:55 2019 EST
Photon added states without adding a photon¶
Author: Nicolás Quesada
In this tutorial we study how to prepare photon added states following the ideas from “Generating photon-added states without adding a photon” Phys. Rev. A 100, 043802 (2019) by S.U. Shringarpure and J.D. Franson.
[1]:
import numpy as np
from qutip import wigner, Qobj, wigner_cmap, qfunc
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
import strawberryfields as sf
from strawberryfields.ops import *
from thewalrus.quantum import state_vector, density_matrix
Ideal preparation¶
Here we set up some basic parameters, like the value of the photon-number-resolving detectors we will use to herald and the amount of squeezing and displacement to use. These parameters are taken from the reference cited above.
[2]:
# the Fock state measurement of mode 0 to be post-selected
m1 = 1
# the Fock state measurement of mode 1 to be post-selected
m2 = 1
r1 = np.arcsinh(1.0)
g = 1.195
r2 = np.arccosh(g)
alpha = 2.0
Their circuit requires a single photon as input. We cannot prepare single photons directly using only Gaussian resources, but they can be generated by heralding one half of a two-mode squeezed vacuum. We do this by applying a two-mode squeezing gate on modes 1 and 2 and then heralding mode 2 on a single-photon event later on. Note that the value of the gain g is picked to match the parameters of Fig. 5(d) of Shringarpure and Franson’s paper; however, one can change this value to match any other parameter setting studied in their paper.
Now we set up a 3-mode quantum circuit in Strawberry Fields and obtain the covariance matrix and vector of means of the Gaussian state.
[3]:
nmodes = 3
prog = sf.Program(nmodes)
eng = sf.Engine("gaussian")
with prog.context as q:
    S2gate(r1)|(q[1],q[2])
    Dgate(alpha)|q[0]
    S2gate(r2)|(q[0],q[1])
state = eng.run(prog).state
mu = state.means()
cov = state.cov()
[4]:
# Here we use the sf circuit drawer and standard linux utilities
# to generate an svg representing the circuit
file, _ = prog.draw_circuit()
filepdf = file[0:-3]+"pdf"
filepdf = filepdf.replace("circuit_tex/","")
filecrop = filepdf.replace(".pdf","-crop.pdf")
name = "added_circuit.svg"
!pdflatex $file > /dev/null 2>&1
!pdfcrop $filepdf > /dev/null 2>&1
!pdf2svg $filecrop $name
Here is a graphical representation of the circuit. It is always assumed that the input is vacuum in all the modes.
We can now inspect the covariance matrix and vector of means. Note that the vector of means is non-zero since we did use displacement gates in the circuit above.
[5]:
print(np.round(mu,10))
print(np.round(cov,10))
[ 4.78 2.61694478 0. 0. -0. 0. ]
[[ 2.7121 3.12724902 1.8504594 0. -0. 0. ]
[ 3.12724902 4.7121 3.37997041 -0. 0. -0. ]
[ 1.8504594 3.37997041 3. 0. -0. 0. ]
[ 0. -0. 0. 2.7121 -3.12724902 1.8504594 ]
[-0. 0. -0. -3.12724902 4.7121 -3.37997041]
[ 0. -0. 0. 1.8504594 -3.37997041 3. ]]
We now use The Walrus to obtain the Fock representation of the heralded Gaussian state when modes 1 and 2 are each heralded in the value \(n=1\), storing it in the variable psi. We also calculate the probability of success in heralding in the variable p_psi.
[6]:
cutoff = 20
psi = state_vector(mu, cov, post_select={1: m1, 2: m2}, normalize=False, cutoff=cutoff)
p_psi = np.linalg.norm(psi)
psi = psi/p_psi
print("The probability of successful heralding is ", np.round(p_psi**2,5))
The probability of successful heralding is 0.02043
We now plot the photon-number distribution of the heralded state.
[7]:
plt.bar(np.arange(cutoff),np.abs(psi)**2)
plt.xlabel("$i$")
plt.ylabel(r"$p_i$")
plt.show()

We can plot the Q function and compare it with figure 5.(d) of their manuscript:
[8]:
grid = 100
xvec = np.linspace(-5,5,grid)
Q = qfunc(Qobj(psi), xvec, xvec)
wmap = wigner_cmap(Q)
sc1 = np.max(Q)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 1, figsize=(5, 4))
plt1 = axes.contourf(xvec, xvec, Q, 60, cmap=cm.RdBu, norm=nrm)
axes.contour(xvec, xvec, Q, 60, cmap=cm.RdBu, norm=nrm)
axes.set_title("Q function of the heralded state");
cb1 = fig.colorbar(plt1, ax=axes)
fig.tight_layout()
plt.show()

We can now plot the Wigner function of the heralded state,
[9]:
grid = 100
xvec = np.linspace(-5,5,grid)
Wp = wigner(Qobj(psi), xvec, xvec)
wmap = wigner_cmap(Wp)
sc1 = np.max(Wp)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 1, figsize=(5, 4))
plt1 = axes.contourf(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.set_title("Wigner function of the heralded state");
cb1 = fig.colorbar(plt1, ax=axes)
fig.tight_layout()
plt.show()

and a cut of the Wigner function along \(p=0\).
[10]:
plt.plot(xvec, Wp[grid//2,:])
plt.title(r"$W(x,0)$")
plt.xlabel(r"x")
plt.show()

Adding Loss¶
We can now study what happens when loss in the heralding arm is increased. We will add loss in the heralding arm by varying the efficiency of the detector which is parametrized by a transmission \(\eta\) ranging from \(\eta = 50\%\) to \(\eta = 100\%\) (ideal operation).
[11]:
eta_vals = np.arange(1.,0.45,-0.05)
fidelities = np.zeros_like(eta_vals, dtype=complex)
success_p = np.zeros_like(eta_vals, dtype=complex)
nmodes = 3
for i,eta in enumerate(eta_vals):
    prog = sf.Program(nmodes)
    eng = sf.Engine("gaussian")
    with prog.context as q:
        S2gate(r1)|(q[1],q[2])
        Dgate(alpha)|q[0]
        S2gate(r2)|(q[0],q[1])
        LossChannel(eta)|q[1]
        LossChannel(eta)|q[2]
    state = eng.run(prog).state
    mu = state.means()
    cov = state.cov()
    rho = density_matrix(mu, cov, post_select={1: m1, 2: m2}, normalize=False, cutoff=cutoff)
    success_p[i] = np.trace(rho)
    fidelities[i] = psi.conj() @ rho @ psi/success_p[i]
We now plot the probability of success of the heralding scheme as a function of the transmission,
[12]:
plt.plot(eta_vals, success_p.real)
plt.xlabel(r"Transmission $\eta$")
plt.ylabel(r"Success probability")
plt.show()

and similarly study the fidelity of the heralded state with respect to the ideal (lossless) state.
[13]:
plt.plot(eta_vals, fidelities.real)
plt.xlabel(r"Transmission $\eta$")
plt.ylabel(r"Fidelity with ideal state")
plt.show()

We can also look at the photon number distribution of the nonideal state.
[14]:
plt.bar(np.arange(cutoff),np.real(np.diag(rho/np.trace(rho))))
plt.xlabel(r"$i$")
plt.ylabel(r"$p_i$")
plt.show()

Now we plot the Wigner function of the heralded state for \(\eta = 50\%\)
[15]:
sc1 = np.max(Wp)
W = wigner(Qobj(rho/np.trace(rho)), xvec, xvec)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 2, figsize=(7.5, 3),sharey=True)
axes[0].contourf(xvec, xvec, Wp, 60,cmap=cm.RdBu, norm=nrm)
axes[0].contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes[0].set_title(r"$W(x,p)$ for $\eta = 1.0$");
cb1 = fig.colorbar(plt1, ax=axes[0])
plt2 = axes[1].contourf(xvec, xvec, W ,60, cmap=cm.RdBu, norm=nrm)
plt2 = axes[1].contour(xvec, xvec, W ,60, cmap=cm.RdBu, norm=nrm)
axes[1].set_title(r"$W(x,p)$ for $\eta = 0.5$");
cb2 = fig.colorbar(plt1, ax=axes[1])
fig.tight_layout()
plt.show()

and also a cut of the Wigner function along \(p=0\).
[16]:
plt.plot(xvec, W[grid//2,:], label=r"$\eta=0.5$")
plt.plot(xvec, Wp[grid//2,:], label=r"$\eta=1.0$")
plt.title(r"$W(x,0)$")
plt.xlabel(r"$x$")
plt.legend()
plt.show()

[17]:
%reload_ext version_information
%version_information qutip, strawberryfields, thewalrus
[17]:
| Software | Version |
| --- | --- |
| Python | 3.7.5 64bit [GCC 7.3.0] |
| IPython | 7.10.1 |
| OS | Linux 4.15.0 72 generic x86_64 with debian stretch sid |
| qutip | 4.4.1 |
| strawberryfields | 0.13.0-dev |
| thewalrus | 0.11.0-dev |

Mon Dec 30 10:14:07 2019 EST
Making big cat states using even-parity projectors¶
Author: Guillaume Thekkadath
In this tutorial, we numerically simulate the protocol proposed in Engineering Schrödinger cat states with a photonic even-parity detector, Quantum 4, 239 (2020), for engineering superpositions of coherent states. We study how to make big cat states \(|\text{cat}\rangle \sim |\alpha \rangle + |-\alpha \rangle\) with \(|\alpha|^2 = 10\).
[1]:
import numpy as np
from qutip import wigner, Qobj, wigner_cmap
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
import strawberryfields as sf
from strawberryfields.ops import *
from thewalrus.quantum import state_vector, density_matrix
Ideal preparation¶
Here we set up some basic parameters, like the value of the photon-number-resolving detectors we will use to herald and the parameters of the different Gaussian unitaries.
[2]:
Lambda = 0.9 # Lambda is a squeezing parameter in [0,1)
r = np.arctanh(Lambda)
catSize = 10 # One obtains a better fidelity when this number is an integer (see paper for explanation)
alpha = np.sqrt(catSize)*Lambda # Coherent state amplitude is chosen to compensate finite squeezing, Lambda < 1
detOutcome = int(round(catSize))
m0 = detOutcome
m1 = detOutcome
print(m0,m1)
10 10
Now we set up a 3-mode quantum circuit in Strawberry Fields and obtain the covariance matrix and vector of means of the Gaussian state.
[3]:
nmodes = 3
prog = sf.Program(nmodes)
eng = sf.Engine("gaussian")
with prog.context as q:
    S2gate(r) | (q[1],q[2])
    Dgate(1j*alpha) | q[0]
    BSgate() | (q[0],q[1])
state = eng.run(prog).state
mu = state.means()
cov = state.cov()
[4]:
# Here we use the sf circuit drawer and standard linux utilities
# to generate an svg representing the circuit
file, _ = prog.draw_circuit()
filepdf = file[0:-3]+"pdf"
filepdf = filepdf.replace("circuit_tex/","")
filecrop = filepdf.replace(".pdf","-crop.pdf")
name = "cat_circuit.svg"
!pdflatex $file > /dev/null 2>&1
!pdfcrop $filepdf > /dev/null 2>&1
!pdf2svg $filecrop $name
Here is a graphical representation of the circuit. It is always assumed that the input is vacuum in all the modes.
We can now inspect the covariance matrix and vector of means. Note that the vector of means is non-zero since we used a displacement gate.
[5]:
print(np.round(mu,10))
print(np.round(cov,10))
[0. 0. 0. 4.02492236 4.02492236 0. ]
[[ 5.26315789 -4.26315789 -6.69890635 0. -0. 0. ]
[-4.26315789 5.26315789 6.69890635 -0. 0. -0. ]
[-6.69890635 6.69890635 9.52631579 0. -0. 0. ]
[ 0. -0. 0. 5.26315789 -4.26315789 6.69890635]
[-0. 0. -0. -4.26315789 5.26315789 -6.69890635]
[ 0. -0. 0. 6.69890635 -6.69890635 9.52631579]]
We now use The Walrus to obtain the Fock representation of the state psi
that is heralded when modes 0 and 1 are measured to have the value \(n=10\). We also calculate the probability of success in heralding in the variable p_psi
.
[6]:
cutoff = 18
psi = state_vector(mu, cov, post_select={0:m0,1:m1}, normalize=False, cutoff=cutoff)
p_psi = np.linalg.norm(psi)
psi = psi/p_psi
print("The probability of successful heralding is ", np.round(p_psi**2,5))
The probability of successful heralding is 0.0006
We now plot the photon-number distribution of the heralded state. Note that the state only has even photon components.
[7]:
plt.bar(np.arange(cutoff),np.abs(psi)**2)
plt.xlabel("$i$")
plt.ylabel(r"$p_i$")
plt.show()

We can now plot the Wigner function of the heralded state,
[8]:
grid = 100
xvec = np.linspace(-7,7,grid)
Wp = wigner(Qobj(psi), xvec, xvec)
wmap = wigner_cmap(Wp)
sc1 = np.max(Wp)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 1, figsize=(5, 4))
plt1 = axes.contourf(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.set_title("Wigner function of the heralded state");
cb1 = fig.colorbar(plt1, ax=axes)
fig.tight_layout()
plt.show()

and a cut of the Wigner function along \(x=0\).
[9]:
plt.plot(xvec, Wp[:,grid//2])
plt.title(r"$W(0,p)$")
plt.xlabel(r"p")
plt.show()

[10]:
%reload_ext version_information
%version_information qutip, strawberryfields, thewalrus
[10]:
| Software | Version |
| --- | --- |
| Python | 3.7.5 64bit [GCC 7.3.0] |
| IPython | 7.10.1 |
| OS | Linux 4.15.0 72 generic x86_64 with debian stretch sid |
| qutip | 4.4.1 |
| strawberryfields | 0.13.0-dev |
| thewalrus | 0.11.0-dev |

Mon Dec 30 10:15:49 2019 EST
A four-headed cat state¶
Author: Guillaume Thekkadath
In this tutorial, we numerically simulate the protocol proposed in Engineering Schrödinger cat states with a photonic even-parity detector, Quantum 4, 239 (2020), for engineering superpositions of coherent states. In particular we look at how to make a coherent superposition of four coherent states, a four-headed cat state.
[1]:
import numpy as np
from qutip import wigner, Qobj, wigner_cmap
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib import cm
import strawberryfields as sf
from strawberryfields.ops import *
from thewalrus.quantum import state_vector, density_matrix
Ideal preparation¶
Here we set up some basic parameters, like the value of the photon-number-resolving detectors we will use to herald and the parameters of the different Gaussian unitaries.
[2]:
Lambda = 0.999 # As explained in the paper, the four-component cat state scheme works less well for finite squeezing.
r = np.arctanh(Lambda)
fourCatSize = 14 # One obtains a better fidelity when this number is an integer (see paper for explanation)
# As per scheme, we first prepare a two-component cat state with half the size of the desired four-component cat
twoCatSize = fourCatSize / 2
alpha = np.sqrt(twoCatSize)
nModes = 5
nMax = 30
detOutcome1 = int(round(twoCatSize))
detOutcome2 = int(round(fourCatSize))
print(detOutcome1,detOutcome2)
7 14
Now we set up a 5-mode quantum circuit in Strawberry Fields and obtain the covariance matrix and vector of means of the Gaussian state.
[3]:
nmodes = 5
prog = sf.Program(nmodes)
eng = sf.Engine("gaussian")
with prog.context as q:
    Dgate(1j*alpha)|q[0]
    S2gate(r)|(q[1],q[2])
    BSgate()|(q[0],q[1])
    Dgate(1j*alpha)|q[2]
    S2gate(r)|(q[3],q[4])
    BSgate()|(q[2],q[3])
state = eng.run(prog).state
mu = state.means()
cov = state.cov()
[4]:
# Here we use the sf circuit drawer and standard linux utilities
# to generate an svg representing the circuit
file, _ = prog.draw_circuit()
filepdf = file[0:-3]+"pdf"
filepdf = filepdf.replace("circuit_tex/","")
filecrop = filepdf.replace(".pdf","-crop.pdf")
name = "four_cat_circuit.svg"
!pdflatex $file > /dev/null 2>&1
!pdfcrop $filepdf > /dev/null 2>&1
!pdf2svg $filecrop $name
Here is a graphical representation of the circuit. It is always assumed that the input is vacuum in all the modes.
We can now inspect the covariance matrix and vector of means. Note that the vector of means is non-zero since we used a displacement gate.
[5]:
np.set_printoptions(linewidth=120)
print(np.round(mu,3))
print(np.round(cov,3))
[0. 0. 0. 0. 0. 3.742 3.742 3.742 3.742 0. ]
[[ 500.25 -499.25 -499.75 -499.75 0. 0. -0. 0. 0. 0. ]
[-499.25 500.25 499.75 499.75 0. -0. 0. -0. -0. 0. ]
[-499.75 499.75 999.5 0. -706.753 0. -0. 0. -0. 0. ]
[-499.75 499.75 0. 999.5 706.753 0. -0. -0. 0. -0. ]
[ 0. 0. -706.753 706.753 999.5 0. 0. 0. -0. 0. ]
[ 0. -0. 0. 0. 0. 500.25 -499.25 499.75 499.75 0. ]
[ -0. 0. -0. -0. 0. -499.25 500.25 -499.75 -499.75 0. ]
[ 0. -0. 0. -0. 0. 499.75 -499.75 999.5 -0. 706.753]
[ 0. -0. -0. 0. -0. 499.75 -499.75 -0. 999.5 -706.753]
[ 0. 0. 0. -0. 0. 0. 0. 706.753 -706.753 999.5 ]]
We now use The Walrus to obtain the Fock representation of the heralded state when modes 0 and 1 are heralded in the value \(n=7\) and modes 2 and 3 are heralded in the value \(n=14\). This information is stored in the variable psi. We also calculate the probability of success in heralding in the variable p_psi.
[6]:
psi = state_vector(mu, cov, post_select={0:detOutcome1, 1:detOutcome1, 2:detOutcome2, 3:detOutcome2},
normalize=False, cutoff=nMax)
p_psi = np.linalg.norm(psi)
psi = psi / p_psi
print("The probability of successful heralding is ", np.round(p_psi**2,10))
The probability of successful heralding is 4e-09
We now plot the photon-number distribution of the heralded state. Note that the state only has even photon components.
[7]:
plt.bar(np.arange(nMax),np.abs(psi)**2)
plt.xlabel("$i$")
plt.ylabel(r"$p_i$")
plt.show()

We can now plot the Wigner function of the heralded state,
[8]:
grid = 100
xvec = np.linspace(-9,9,grid)
Wp = wigner(Qobj(psi), xvec, xvec)
wmap = wigner_cmap(Wp)
sc1 = np.max(Wp)
nrm = mpl.colors.Normalize(-sc1, sc1)
fig, axes = plt.subplots(1, 1, figsize=(5, 4))
plt1 = axes.contourf(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.contour(xvec, xvec, Wp, 60, cmap=cm.RdBu, norm=nrm)
axes.set_title("Wigner function of the heralded state");
cb1 = fig.colorbar(plt1, ax=axes)
fig.tight_layout()
plt.show()

and a cut of the Wigner function along \(p=0\).
[9]:
plt.plot(xvec, Wp[:,grid//2])
plt.title(r"$W(0,p)$")
plt.xlabel(r"p")
plt.show()

[10]:
%reload_ext version_information
%version_information qutip, strawberryfields, thewalrus
[10]:
Software | Version |
---|---|
Python | 3.7.5 64bit [GCC 7.3.0] |
IPython | 7.10.1 |
OS | Linux 4.15.0 72 generic x86_64 with debian stretch sid |
qutip | 4.4.1 |
strawberryfields | 0.13.0-dev |
thewalrus | 0.11.0-dev |
Mon Dec 30 10:17:05 2019 EST |
The hafnian¶
Section author: Nicolás Quesada <nicolas@xanadu.ai>
The hafnian¶
The hafnian of an \(n \times n\) symmetric matrix \(\bm{A} =\bm{A}^T\) is defined as [30]

\[\text{haf}(\bm{A}) = \sum_{\sigma \in \text{PMP}(n)} \prod_{i=1}^{n/2} A_{\sigma(2i-1),\sigma(2i)},\]
where \(\text{PMP}(n)\) stands for the set of perfect matching permutations of \(n\) (even) objects, i.e., permutations \(\sigma:[n]\rightarrow [n]\) such that \(\forall i:\sigma(2i-1)<\sigma(2i)\) and \(\forall i:\sigma(2i-1)<\sigma(2i+1)\).
It was so named by Eduardo R. Caianiello [31] “to mark the fruitful period of stay in Copenhagen (Hafnia in Latin).” [32]
The set \(\text{PMP}(n)\) used to define the hafnian contains

\[|\text{PMP}(n)| = (n-1)!! = 1 \times 3 \times 5 \times \cdots \times (n-1)\]

elements, and thus, as defined, it takes \((n-1)!!\) additions of products of \(n/2\) numbers to calculate the hafnian of an \(n \times n\) matrix. Note that the diagonal elements of the matrix \(\bm{A}\) do not appear in the calculation of the hafnian and are (conventionally) taken to be zero.
For \(n=4\) the set of perfect matchings is

\[\text{PMP}(4) = \big\{ (0,1)(2,3),\; (0,2)(1,3),\; (0,3)(1,2) \big\},\]

and the hafnian of a \(4 \times 4\) matrix \(\bm{B}\) is

\[\text{haf}(\bm{B}) = B_{0,1}B_{2,3} + B_{0,2}B_{1,3} + B_{0,3}B_{1,2}.\]
The hafnian of an odd sized matrix is defined to be zero; if \(\bm{A}=\bm{A}^T\) is \(M\) dimensional and \(M\) is odd then \(\haf(\bm{A}) = 0\). Note that, for convenience, we define the hafnian of an empty matrix, i.e., a matrix of dimension zero by zero, to be 1.
The hafnian is a homogeneous function of degree \(n/2\) in the matrix entries of an \(n \times n\) matrix \(\bm{A}\). This implies that

\[\text{haf}(\mu \bm{A}) = \mu^{n/2} \, \text{haf}(\bm{A}),\]

where \(\mu\) is a scalar. More generally, if \(\bm{W} = \text{diag}(w_0,\ldots,w_{n-1})\), then it holds that (see proposition 4.2.3 of [30])

\[\text{haf}(\bm{W} \bm{A} \bm{W}) = \left( \prod_{i=0}^{n-1} w_i \right) \text{haf}(\bm{A}).\]
The definition used to introduce the hafnian is rather algebraic and brings little intuition. To gain more insight, in the next section we introduce some graph-theory language and use it to present a more intuitive picture of what the hafnian “counts”.
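As a quick numerical illustration (a minimal sketch using the thewalrus.hafnian function documented later in this guide), one can check the value \((n-1)!!\) for the all-ones matrix and the homogeneity relation above:

>>> import numpy as np
>>> from thewalrus import hafnian
>>> A = np.ones((4, 4))                              # 4x4 all-ones matrix, haf = (4-1)!! = 3
>>> np.allclose(hafnian(A), 3)
True
>>> np.allclose(hafnian(2 * A), 2 ** 2 * hafnian(A))  # haf(mu*A) = mu^(n/2) haf(A)
True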
Basics of graphs¶
A graph is an ordered pair \(G=(V,E)\) where \(V\) is the set of vertices, and \(E\) is the set of edges linking the vertices of the graph, i.e., if \(e \in E\) then \(e=(i,j)\) where \(i,j \in V\). In this section we consider graphs without loops (we relax this in the next section), thus we do not allow for edges \(e = (i,i)\) connecting a given vertex to itself. A six-vertex graph is shown here;
the vertices are labelled \(V = \{1,2,3,4,5,6 \}\) and the edges are \(E=\{(1,4),(2,4),(2,5),(3,4),(3,5),(3,6) \}\).
A matching \(M\) is a subset of the edges in which no two edges share a vertex. An example of a matching is \(M=(1,4)(3,6)\), represented by the blue lines in the following figure
In the figure above we know we have a matching because none of the highlighted edges shares a vertex.
A perfect matching is a matching which matches all the vertices of the graph, such as for example \(M=(1,4)(2,5)(3,6)\), which is represented again by the blue lines in the following figure
The blue lines represent a perfect matching because they are a matching, i.e., the edges do not overlap on any vertex, and every vertex is covered by one and only one edge.
A complete graph is a graph where every vertex is connected to every other vertex. For loopless graphs having \(n\) vertices, the number of perfect matchings is precisely [30]

\[|\text{PMP}(n)| = (n-1)!!,\]

where we use \(\text{PMP}(n)\) to indicate the set of perfect matching permutations introduced in the previous section, and the notation \(|V|\) to indicate the number of elements in the set \(V\). Note that this number is nonzero only for even \(n\), since for odd \(n\) there will always be one unmatched vertex.
In the following figure we illustrate the 3 perfect matchings of a complete graph with 4 vertices
Perfect matchings and hafnians¶
An important question concerning a given graph \(G=(V,E)\) is the number of perfect matchings it has. One possible way to answer this question is to iterate over the perfect matchings of a complete graph and at each step check if the given perfect matching of the complete graph is also a perfect matching of the given graph. A simple way to automate this process is by constructing the adjacency matrix of the graph. The adjacency matrix \(\bm{A}\) of a graph \(G=(V,E)\) is a 0-1 matrix that has \(\bm{A}_{i,j} = \bm{A}_{j,i}=1\) if, and only if, \((i,j) \in E\), and 0 otherwise. For the example graph in the previous section, the adjacency matrix is

\[\bm{A} = \begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 \\
1 & 1 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}.\]
The number of perfect matchings of a (loopless) graph is simply given by the hafnian of its adjacency matrix,

\[\text{number of perfect matchings of } G = \text{haf}(\bm{A}).\]

For the graph in the previous section we can easily confirm that the perfect matching we found is the only perfect matching, since

\[\text{haf}(\bm{A}) = 1.\]
The definition of the hafnian immediately generalizes to weighted graphs, where we assign a real or complex number to the entries of the symmetric matrix \(\bm{A}\).
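As a small check (a sketch using thewalrus.hafnian on the adjacency matrix written above), we can confirm numerically that the example graph has exactly one perfect matching:

>>> import numpy as np
>>> from thewalrus import hafnian
>>> A = np.array([[0, 0, 0, 1, 0, 0],
...               [0, 0, 0, 1, 1, 0],
...               [0, 0, 0, 1, 1, 1],
...               [1, 1, 1, 0, 0, 0],
...               [0, 1, 1, 0, 0, 0],
...               [0, 0, 1, 0, 0, 0]])   # adjacency matrix of the six-vertex example graph
>>> int(np.round(hafnian(A)))            # number of perfect matchings
1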
Special values of the hafnian¶
Here we list some special values of the hafnian for certain special matrices.
If the matrix \(\bm{A}\) has the following block form,

\[\bm{A}_{\text{block}} = \begin{bmatrix} 0 & \bm{C} \\ \bm{C}^T & 0 \end{bmatrix},\]

then it holds that \(\text{haf}\left( \bm{A}_{\text{block}} \right) = \text{per}(\bm{C})\), where \(\text{per}\) is the permanent matrix function defined as [30]

\[\text{per}(\bm{C}) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} C_{i,\sigma(i)}.\]
The sum here extends over all elements \(\sigma\) of the symmetric group \(S_n\).
If \(\bm{A}_{\text{rank-one}} = \bm{e} \bm{e}^T\) is a rank-one matrix of size \(n\) then

\[\text{haf}\left(\bm{e} \bm{e}^T\right) = (n-1)!! \prod_{i=1}^{n} e_i.\]

In particular, the hafnian of the all-ones matrix is precisely \((n-1)!!\).
If \(\bm{A}_{\text{direct sum}} = \bm{A}_1 \oplus \bm{A}_2\) is a block-diagonal matrix then

\[\text{haf}(\bm{A}_1 \oplus \bm{A}_2) = \text{haf}(\bm{A}_1) \, \text{haf}(\bm{A}_2).\]
This identity simply expresses the fact that the number of perfect matchings of a graph that is made of two disjoint subgraphs is simply the product of the number of perfect matchings of the two disjoint subgraphs.
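These special values are easy to verify numerically; the following sketch (using thewalrus.hafnian and thewalrus.perm) checks the block and rank-one identities for randomly chosen entries:

>>> import numpy as np
>>> from thewalrus import hafnian, perm
>>> C = np.random.rand(3, 3)
>>> A_block = np.block([[np.zeros((3, 3)), C], [C.T, np.zeros((3, 3))]])
>>> np.allclose(hafnian(A_block), perm(C))                 # haf of the block matrix equals per(C)
True
>>> e = np.random.rand(4)
>>> np.allclose(hafnian(np.outer(e, e)), 3 * np.prod(e))   # rank-one: (4-1)!! times the product of the entries
True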
The loop hafnian¶
Section author: Nicolás Quesada <nicolas@xanadu.ai>
Graphs with loops¶
In the previous section we introduced the hafnian as a way of counting the number of perfect matchings of a loopless graph. The loop hafnian does the same for graphs with loops. Before defining the loop hafnian let us introduce graphs with loops. A graph will still be an ordered pair \((V,E)\) of vertices and edges but now we will allow edges of the form \((i,i)\) where \(i \in V\). We can now define adjacency matrices in the same way as we did before i.e. if \((i,j) \in E\) then \(M_{i,j}=1\) and otherwise \(M_{i,j}=0\).
Consider the following graph
for which we have \(V = \{1,2,3,4,5,6 \}\), the edges are \(E=\{(1,1),(1,4),(2,4),(2,5),(3,4),(3,5),(3,6),(5,5) \}\), and the adjacency matrix is

\[\bm{A} = \begin{bmatrix}
1 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 \\
1 & 1 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}.\]

Note that there are now nonzero elements in the diagonal, indicating that vertices 1 and 5 have loops.
Once we allow for loops we have more options for making perfect matchings. For example for the graph shown above there are now 2 perfect matchings, illustrated in blue in the following figure
As was done before for the hafnian, we introduce the set of single-pair matchings \(\text{SPM}(n)\), defined as the set of perfect matchings of the complete graph with loops on \(n\) vertices. For \(n=4\) we have
For a graph with 4 vertices they are
Note that there is a one-to-one correspondence (a bijection) between the elements of \(\text{SPM}(n)\) and the matchings of the complete graph with \(n\) vertices. To see why this is the case, note that any element of \(\text{SPM}(n)\) can be converted into a matching by removing all the vertices that are loops. For example, to the element \((0,0)(2,2)(1,3)\) we associate the matching \((1,3)\). This mapping is one-to-one since, given a matching, we can always add as loops all the vertices that are not part of the matching. Using this bijection we conclude that the number of elements in \(\text{SPM}(n)\) is (see The On-Line Encyclopedia of Integer Sequences)

\[|\text{SPM}(n)| = T(n) = \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^k \, k! \, (n-2k)!},\]
where \(T(n)\) is the \(n^{\text{th}}\) telephone number.
Note that in general, for a given graph size \(n\), there are many more single-pair matchings than there are perfect matchings. Their ratio scales like [6]
The loop hafnian¶
We will also be interested in a generalization of the hafnian function where we now allow for adjacency matrices that have loops. We call this new function the loop hafnian (lhaf). As explained before, the weight associated with these loops is stored in the diagonal elements of the adjacency matrix \(\bm{A}\) (which were previously ignored in the definition of the hafnian). To account for the possibility of loops we generalize the set of perfect matching permutations PMP to the single-pair matchings SPM. Thus we define [6]

\[\text{lhaf}(\bm{A}) = \sum_{M \in \text{SPM}(n)} \prod_{(i,j) \in M} A_{i,j}.\]

The lhaf of a \(4 \times 4\) matrix \(\bm{B}\) is

\[\text{lhaf}(\bm{B}) = B_{0,1}B_{2,3} + B_{0,2}B_{1,3} + B_{0,3}B_{1,2} + B_{0,0}B_{1,1}B_{2,3} + B_{0,0}B_{2,2}B_{1,3} + B_{0,0}B_{3,3}B_{1,2} + B_{1,1}B_{2,2}B_{0,3} + B_{1,1}B_{3,3}B_{0,2} + B_{2,2}B_{3,3}B_{0,1} + B_{0,0}B_{1,1}B_{2,2}B_{3,3}.\]
Finally, let us comment on the scaling properties of \(\haf\) and \(\lhaf\). Unlike the hafnian, the loop hafnian is not homogeneous in its matrix entries, i.e., in general

\[\text{lhaf}(\mu \bm{A}) \neq \mu^{n/2} \, \text{lhaf}(\bm{A}),\]

where \(n\) is the size of the matrix \(\bm{A}\) and \(\mu \geq 0\). However, if we split the matrix \(\bm{A}\) into its diagonal part \(\bm{A}_{\text{diag}}\) and its off-diagonal part \(\bm{A}_{\text{off-diag}}\),

\[\bm{A} = \bm{A}_{\text{diag}} + \bm{A}_{\text{off-diag}},\]

then it holds that [6]

\[\text{lhaf}\left(\mu \, \bm{A}_{\text{off-diag}} + \sqrt{\mu} \, \bm{A}_{\text{diag}}\right) = \mu^{n/2} \, \text{lhaf}(\bm{A}).\]
One can use the loop hafnian to count the number of matchings of a loopless graph by simply calculating the loop hafnian of its adjacency matrix after adding ones on its diagonal.
Finally, if \(\bm{A}_{\text{direct sum}} = \bm{A}_1 \oplus \bm{A}_2\) is a block-diagonal matrix then

\[\text{lhaf}(\bm{A}_1 \oplus \bm{A}_2) = \text{lhaf}(\bm{A}_1) \, \text{lhaf}(\bm{A}_2).\]

As for the hafnian, this identity tells us that the number of perfect matchings of a graph that is made of two disjoint subgraphs is simply the product of the numbers of perfect matchings of the two disjoint subgraphs.
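For instance, the following sketch counts the matchings of the loopless six-vertex example graph by adding ones on the diagonal of its adjacency matrix and computing the loop hafnian (the count includes the empty matching):

>>> import numpy as np
>>> from thewalrus import hafnian
>>> A = np.array([[0, 0, 0, 1, 0, 0],
...               [0, 0, 0, 1, 1, 0],
...               [0, 0, 0, 1, 1, 1],
...               [1, 1, 1, 0, 0, 0],
...               [0, 1, 1, 0, 0, 0],
...               [0, 0, 1, 0, 0, 0]])
>>> int(np.round(hafnian(A + np.eye(6, dtype=int), loop=True)))  # 1 empty + 6 one-edge + 7 two-edge + 1 perfect matching
15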
The algorithms¶
Section author: Nicolás Quesada <nicolas@xanadu.ai>
The exact calculation of the number of perfect matchings (and the hafnian) for general graphs (symmetric matrices) has been investigated by several authors in recent years. An algorithm running in \(O(n^2 2^n)\) time was given by Björklund and Husfeldt [1] in 2008. In the same paper an algorithm running in \(O(1.733^n)\) time was presented using fast matrix multiplication. In the same year Kan [2] presented an algorithm running in time \(O(n 2^n)\) by representing the hafnian as a moment of the multinormal distribution. Koivisto [3] gave an \(O^*(\phi^n)\) time and space algorithm, where \(\phi = (1+\sqrt{5})/2 \approx 1.618\) is the Golden ratio. Nederlof [4] provided a polynomial space algorithm running in \(O(1.942^n)\) time.
Finally, Björklund [5] [6] and later Cygan and Pilipczuk [7] provided \(O(\text{poly}(n) 2^{n/2})\) time and polynomial space algorithms for the calculation of the general ring hafnian. These algorithms are believed to be close to optimal unless there are surprisingly efficient algorithms for the Permanent. This is because these two algorithms can also be used to count (up to polynomial corrections) the number of perfect matchings for bipartite graphs with the same exponential growth as Ryser’s algorithm for the permanent [8]. Equivalently, if one could construct an algorithm that calculates hafnians in time \(O(\alpha^{n/2})\) with \(\alpha<2\) one could calculate permanents faster than Ryser’s algorithm (which is the fastest known algorithm to calculate the permanent [9]). This is because of the identity
\[\text{per}(\bm{C}) = \text{haf}\left( \begin{bmatrix} 0 & \bm{C} \\ \bm{C}^T & 0 \end{bmatrix} \right),\]

which states that a bipartite graph with two parts having \(n/2\) elements each can always be thought of as a simple graph with \(n\) vertices. It should be noted that improving over Ryser’s algorithm is a well-known open problem: e.g. Knuth [10] asks for an arithmetic circuit for the permanent with less than \(2^n\) operations. Also note that since the exact calculation of the permanent of \((0,1)\) matrices is in the #P-complete class [11], the above identity shows that deciding if the hafnian of a complex matrix is larger than a given value is also in the #P-complete class.
Tip
Permanents can be calculated directly using Ryser’s algorithm via thewalrus.perm()
.
Finally, note that the approximate methods developed for counting perfect matchings are aimed at (weighted-)graphs with real or positive entries [12][13]. Of particular note is the approximate algorithm introduced by Barvinok for matrices with non-negative entries [14] further analyzed in Ref. [15].
In what follows we provide pseudocode or equations that give basic intuition for the algorithms implemented in this library. The reader is referred to the original literature for proofs of correctness and complexity.
Reference algorithm¶
We provide a reference implementation of the hafnian and loop hafnian that iterates over the sets \(\text{PMP}(n)\) and \(\text{SPM}(n)\). These implementations are extremely slow for even moderately sized matrices and are only provided for educational purposes.
Tip
Implemented as thewalrus.reference.hafnian()
. The optional argument loops=True
can be used to calculate loop hafnians.
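For small matrices the reference implementation can be used to cross-check the optimized routines; a minimal sketch (assuming the reference function accepts a NumPy array) is:

>>> import numpy as np
>>> from thewalrus import hafnian
>>> from thewalrus.reference import hafnian as hafnian_ref
>>> A = np.random.rand(6, 6)
>>> A = A + A.T                                      # symmetrize
>>> np.allclose(hafnian(A), hafnian_ref(A))          # fast C++ algorithm vs. brute-force sum over PMP(6)
True
>>> np.allclose(hafnian(A, loop=True), hafnian_ref(A, loops=True))   # same check for the loop hafnian
True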
Recursive algorithm¶
In 2012 Björklund [5] introduced the following algorithm to calculate the hafnian of a matrix of size \(n\) (even) in any field \(\mathbb{F}\) in time \(O(n^4 \log(n) 2^{n/2})\)
In the pseudocode above the following notation is used:
\([n]=\{0,1,2,\ldots,n-1\}\) is the set of the first \(n\) integers,
\(|X|\) is used to denote the number of elements in the set \(X\), and
\(P(X)\) is used to denote the power set, which is the set of all the subsets of the set \(X\). Note that if \(X\) has \(|X|\) elements, then its power set has \(2^{|X|}\) elements.
Note that the subindices and superindices on the matrices \(\bm{B}\) are not used for components of the matrices but rather to denote stages in the computation. Furthermore, these matrices contain polynomials in the symbolic variable \(r\), and the final answer is obtained by adding the coefficients of \(r^{n/2}\) in the polynomial \(g\) at each step. In the implementation provided in this library, the algorithm sketched above in pseudocode is turned into a recursion relation, hence the name we give it here.
Unfortunately, there is no known generalization of this algorithm to loop hafnians.
Tip
Implemented as thewalrus.hafnian()
. This is the default algorithm for calculating hafnians.
Eigenvalue-trace algorithm¶
Based on the work of Cygan and Pilipczuk [7], in 2018 Björklund et al. [6] introduced another algorithm to calculate the hafnian of a real or complex matrix of size \(n\). This algorithm runs in time \(O(n^3 2^{n/2})\) and can be expressed more succinctly as an equation
where the matrix \(\bm{X}\) is defined as
\(\mathbb{I}\) is the identity matrix and the function \(f(\bm{C})\) takes a matrix \(\bm{C}\) and returns the coefficient of \(\eta^{n/2}\) in the following polynomial:
This coefficient can be found by taking derivatives [16]
The function \(p_{n/2}(\eta\bm{C})\) requires only the traces of the matrix powers \(\bm{C}^k\), hence the name of this algorithm.
Note that these traces can be written in terms of the sums of powers of the eigenvalues of the matrix \(\bm{C}\).
This formula generalizes to the loop hafnian as follows
where now the function \(f(\bm{C})\) takes a matrix \(\bm{C}\) and returns the coefficient of \(\eta^{n/2}\) in the following polynomial:
where \(\bm{v} = \text{diag}(\bm{C})\) and we used the reduction operation (cf. notation) in terms of the set \(S\).
Tip
Implemented as thewalrus.hafnian()
with the argument recursive=False
.
The loop hafnian calculation can be done by setting the option loop=True
.
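The two exact methods can be compared directly; the sketch below evaluates the same complex symmetric matrix with the default recursive algorithm and with the eigenvalue-trace algorithm:

>>> import numpy as np
>>> from thewalrus import hafnian
>>> A = np.random.rand(8, 8) + 1j * np.random.rand(8, 8)
>>> A = A + A.T                                            # any complex symmetric matrix
>>> np.allclose(hafnian(A), hafnian(A, recursive=False))   # recursive vs. eigenvalue-trace algorithm
True
>>> lhaf = hafnian(A, loop=True)                           # loop hafnian (handled by the trace algorithm)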
Repeated-moment algorithm¶
By mapping the calculation of moments of the multinormal distribution to the calculation of the hafnian, Kan [2] derived the following equation for the loop hafnian
where we used the reduction and vector in diagonal (\(\text{vid}\)) operations introduced in the notation section.
Note that if we pick \(m_i=1 \ \forall i\) and \(\bm{v} = \text{diag}(\bm{A})\) we recover the loop hafnian of \(\bm{A}\). In this case, the calculation of the loop hafnian requires \(O(n 2^n)\) operations, which is quadratically worse than Björklund’s algorithms. This formula is however useful when calculating hafnians and loop hafnians of matrices with repeated rows and columns for which column and row \(i\) are repeated \(m_i\) times; taking only \(O(n A G^n)\) operations to calculate the loop hafnian, where
Compare this with Björklund’s algorithm, which requires \(O\left((A n)^3 \left(\sqrt{2}^{A}\right)^n\right)\) operations.
Tip
Implemented as thewalrus.hafnian_repeated()
. The vector \(\bm{m}\) is passed in the variable rpt
. The loop hafnian calculation can be done by passing the variable mu
with the values of the vector \(\bm{u}\) and the option loop=True
.
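A short sketch of the equivalence between repeating rows and columns explicitly and using the repeated-moment algorithm:

>>> import numpy as np
>>> from thewalrus import hafnian, hafnian_repeated, reduction
>>> A = np.random.rand(4, 4)
>>> A = A + A.T
>>> rpt = [1, 3, 0, 2]                # repeat/remove rows and columns; the total must be even
>>> np.allclose(hafnian_repeated(A, rpt), hafnian(reduction(A, rpt)))
True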
Batched algorithm¶
Using the multidimensional Hermite polynomials, and their connection to the matrix elements of Gaussian states and hafnians discussed in the next section, one can calculate the hafnians of all reductions of a matrix \(\bm{B} \in \mathbb{C}^{n \times n}\) up to a given cutoff. The reduction of matrix \(\bm{B}\) is precisely the matrix \(\bm{B}_{\bm{m}}\) obtained by repeating (or removing) the \(i^{\text{th}}\) row and column \(m_i\) times. Thus given a cutoff \(m_{\max}\), one can use the batched algorithm to calculate
for all \(0\leq m_i < m_{\max}\), thus this function returns a tensor with \((m_{\max})^n\) components.
One can also use this function to calculate the same loop hafnians that Kan’s algorithm returns
if provided also with a vector \(\bm{u}\). Note that this parameter is optional.
Internally, these hafnians are calculated by using the recursion relation of the multidimensional Hermite polynomials discussed in the next section.
Tip
Implemented as thewalrus.hafnian_batched()
. The loop hafnian calculation can be done by passing the variable mu
with the values of the vector \(\bm{u}\).
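The sketch below computes the full tensor of hafnians of reductions of a \(3 \times 3\) matrix up to a cutoff and compares a single entry against the repeated-moment algorithm:

>>> import numpy as np
>>> from thewalrus import hafnian_batched, hafnian_repeated
>>> B = np.random.rand(3, 3) + 1j * np.random.rand(3, 3)
>>> B = B + B.T
>>> haf_tensor = hafnian_batched(B, 4)            # tensor of shape (cutoff,) * n = (4, 4, 4)
>>> haf_tensor.shape
(4, 4, 4)
>>> np.allclose(haf_tensor[1, 2, 1], hafnian_repeated(B, [1, 2, 1]))
True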
Approximate algorithm¶
In 1999 Barvinok [14] provided a surprisingly simple algorithm to approximate the hafnian of a symmetric matrix with non-negative entries. Let the matrix have entries \(A_{i,j}\) and define a random antisymmetric matrix with entries distributed according to \(W_{i,j} = -W_{j,i} \sim \mathcal{N}(0,A_{i,j})\), where \(\mathcal{N}(\mu,\sigma^2)\) is the normal distribution with mean \(\mu\) and variance \(\sigma^2\). The following now holds:

\[\mathbb{E}\left[ \text{det}(\bm{W}) \right] = \text{haf}(\bm{A}),\]

where \(\mathbb{E}\) denotes the usual statistical expectation value, and \(\text{det}\) is the determinant. This formula has not been generalized to loop hafnians.
Tip
Implemented as thewalrus.hafnian()
with approx=True
. Note that one needs to pass the number of samples used to estimate the expectation value in the formula above; this is specified with the argument num_samples
.
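Since the estimator is statistical, repeated runs fluctuate around the exact value; a minimal sketch:

>>> import numpy as np
>>> from thewalrus import hafnian
>>> A = np.ones((8, 8))                                  # non-negative matrix with haf = (8-1)!! = 105
>>> exact = hafnian(A)
>>> estimate = hafnian(A, approx=True, num_samples=10000)

Rerunning the last line gives a different estimate each time; increasing num_samples reduces the spread around the exact value 105.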
Low-rank algorithm¶
If a symmetric matrix \(\bm{A} \in \mathbb{C}^{n \times n}\) is of low rank it can be written as \(\bm{A} = \bm{G} \bm{G}^T\), where \(\bm{G} \in \mathbb{C}^{n \times r}\) is a rectangular matrix and \(r \leq n\) is the rank of \(\bm{A}\). One can then calculate the hafnian of the matrix \(\bm{A}\) in time and space complexity \({n+r-1 \choose r-1}\) by generalizing the result derived by Barvinok [17] for permanents to hafnians, as done in Appendix C of Björklund et al. [6]. The algorithm works by defining the following multivariate polynomial

\[q(x_0,\ldots,x_{r-1}) = \prod_{i=1}^{n} \left( \sum_{j=0}^{r-1} G_{i,j} \, x_j \right).\]

Consider now the \(r\)-partitions of the integer \(n\); there are \({n+r-1 \choose r-1}\) such partitions. Let \(e=(e_0,\ldots,e_{r-1})\) be an even \(r\)-partition (i.e. an \(r\)-partition where each element is even), call \(\mathcal{E}_{n,r}\) the set of all even \(r\)-partitions, and define \(\lambda_e\) to be the coefficient of \(x_0^{e_0}\ldots x_{r-1}^{e_{r-1}}\) in the polynomial \(q(x_0,\ldots,x_{r-1})\). Then one can write the hafnian of \(\bm{A} = \bm{G} \bm{G}^T\) as

\[\text{haf}(\bm{A}) = \sum_{e \in \mathcal{E}_{n,r}} \lambda_e \prod_{j=0}^{r-1} (e_j - 1)!!.\]
Tip
Implemented as thewalrus.low_rank_hafnian()
. This function takes as argument the matrix \(\bm{G} \in \mathbb{C}^{n \times r}\) and returns the value of the hafnian of the matrix \(\bm{A} = \bm{G} \bm{G}^T\).
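A minimal sketch comparing the low-rank algorithm with the general-purpose one:

>>> import numpy as np
>>> from thewalrus import hafnian, low_rank_hafnian
>>> n, r = 8, 2
>>> G = np.random.rand(n, r)                        # rank-2 factor
>>> np.allclose(low_rank_hafnian(G), hafnian(G @ G.T))
True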
Multidimensional Hermite polynomials¶
Section author: Nicolás Quesada <nicolas@xanadu.ai>
In this section we study the multidimensional Hermite polynomials originally introduced by C. Hermite in 1865. See Mizrahi [33], Berkowitz et al. [18] and Kok and Braunstein [34] for more details.
In the next section, where we discuss quantum Gaussian states, we will explain how these polynomials relate to hafnians and loop hafnians. For the moment just let us introduce them and study their formal properties.
Generating function¶
Given two complex vectors \(\alpha,\beta \in \mathbb{C}^\ell\) and a symmetric matrix \(\bm{B} = \bm{B}^T \in \mathbb{C}^{\ell \times \ell}\),
where the notation \(\bm{m} \geq \bm{0}\) is used to indicate that the sum goes over all vectors in \(\mathbb{N}^{\ell}_0\) (the set of vectors of nonnegative integers of size \(\ell\)). This generating function provides an implicit definition of the multidimensional Hermite polynomials. It is also straightforward to verify that \(H_{\bm{0}}^{(\bm{B})}(\alpha) = G_{\bm{0}}^{(\bm{B})}(\alpha) = 1\). Finally, one can connect the standard Hermite polynomials \(H_{\bm{m}}^{(\bm{B})}(\alpha)\) to the modified Hermite polynomials \(G_{\bm{m}}^{(\bm{B})}(\alpha)\) via
In the one dimensional case, \(\ell=1\), one can compare the generating function above with the ones for the “probabilists’ Hermite polynomials” \(He_n(x)\) and “physicists’ Hermite polynomials” \(H_n(x)\) to find
Tip
The standard multidimensional Hermite polynomials are implemented as thewalrus.hermite_multidimensional()
. The modified Hermite polynomials can be obtained by passing the extra argument modified=True
.
Recursion relation¶
Based on the generating functions introduced in the previous section one can derive the following recursion relations
where \(\bm{e}_j\) is a vector with zeros in all its entries except in the \(j^{\text{th}}\) entry, where it has a one.
From this recursion relation, or by Taylor expanding the generating function, one easily finds
Using this recursion relation one can calculate all the multidimensional Hermite polynomials up to a given cutoff.
The connection between the multidimensional Hermite polynomials and pure Gaussian states was reported by Wolf [35], and later by Kramer, Moshinsky and Seligman [36]. This same connection was also pointed out by Doktorov, Malkin and Man’ko in the context of vibrational modes of molecules [37]. Furthermore, this connection was later generalized to mixed Gaussian states by Dodonov, Man’ko and Man’ko [26]. These matrix elements have the form
To obtain the standard or modified Hermite polynomials renormalized by the square root of the factorial of its index \(\sqrt{\bm{m}!}\) one can pass the optional argument renorm=True
.
Multidimensional Hermite polynomials and hafnians¶
By connecting the results in page 815 of Dodonov et al. [26] with the results in page 546 of Kan [2] one obtains the following relation between the hafnian and the multidimensional Hermite polynomials
and moreover one can generalize it to
for loop hafnians. With these two identifications one can use the recursion relations of the multidimensional Hermite polynomials to calculate all the hafnians of the reductions of a given matrix up to a given cutoff.
With these observations, using the recursion relations for the Hermite polynomials and setting \(\bm{m}=\bm{1} - \bm{e}_i, \ \alpha = 0\), one easily derives the well-known Laplace expansion for the hafnian (cf. Sec. 4.1 of [30])

\[\text{haf}(\bm{B}) = \sum_{i \neq j} B_{i,j} \, \text{haf}(\bm{B}_{-i-j}),\]

where \(j\) is a fixed index and \(\bm{B}_{-i-j}\) is the matrix obtained from \(\bm{B}\) by removing rows and columns \(i\) and \(j\).
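The Laplace expansion is easy to verify numerically; the sketch below expands along the column \(j=0\) of a random symmetric matrix:

>>> import numpy as np
>>> from thewalrus import hafnian
>>> B = np.random.rand(6, 6)
>>> B = B + B.T
>>> j = 0
>>> expansion = sum(B[i, j] * hafnian(np.delete(np.delete(B, [i, j], axis=0), [i, j], axis=1))
...                 for i in range(6) if i != j)
>>> np.allclose(expansion, hafnian(B))
True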
Gaussian States in the Fock basis¶
Section author: Nicolás Quesada <nicolas@xanadu.ai>
In this section we show how the matrix elements of so-called Gaussian states in the Fock basis are related to the hafnians and loop hafnians introduced in the previous sections.
Note
The results presented in this page use the conventions introduced in the notation page and the reader is strongly encouraged to get familiar with them.
Wigner functions and Gaussian states¶
The quantum state \(\varrho\) of an \(\ell\)-mode system can be uniquely characterized by its Wigner function [19]
where \(\vec \alpha = (\alpha_0,\ldots, \alpha_{\ell-1},\alpha_0^*,\ldots, \alpha_{\ell-1}^*)\) and similarly \(\vec \xi = (\xi_0,\ldots, \xi_{\ell-1},\xi_0^*,\ldots, \xi_{\ell-1}^*)\) are bivectors of complex amplitudes where the second half of the vector is the complex conjugate of the first half. The displacement operator is defined as \(\hat D(\xi):=\exp(\vec{\xi}^T \bm{\Omega} \hat \zeta)\), where \(\bm{\Omega}= \left[ \begin{smallmatrix} 0 & \mathbb{I} \\ -\mathbb{I} & 0 \end{smallmatrix} \right]\) is the symplectic form and \(\hat\zeta\) is a vector of the mode creation and annihilation operators: with \(\ell\) the number of modes, \(\hat\zeta_j=\hat a_j\) and \(\hat \zeta_{\ell+j}=\hat a_j^\dagger\) for \(j=1,\ldots,\ell\). These bosonic creation and annihilation operators satisfy the canonical commutation relations \([\hat a_i, \hat a_j] = 0\) and \([\hat a_i, \hat a_j^\dagger] = \delta_{ij}\).
A quantum state is called Gaussian if its Wigner function is Gaussian [20]. Any multimode Gaussian state \(\varrho\) is completely parametrized by its first and second moments, namely the vector of means \(\vec{\beta}\) with components
and the Wigner-covariance matrix \(\bm{\sigma}\) with entries
where \(\{x,y\} := xy +yx\) denotes the anti-commutator.
The variables \((\alpha_0,\ldots,\alpha_{\ell-1})\) are said to be complex normal distributed with mean \((\beta_0,\ldots,\beta_{\ell-1}) \in \mathbb{C}^{\ell}\) and covariance matrix \({\bm{\sigma}}\) [21] which furthermore needs to satisfy the uncertainty relation [22]
where \(\bm{Z} = \left( \begin{smallmatrix} \mathbb{I} & 0\\ 0& -\mathbb{I} \end{smallmatrix} \right)\). The covariance matrix \(\bm{\sigma}\) is customarily parametrized in the following block form [21]
where \(\bm{\Gamma}\) is hermitian and positive definite, while \(\bm{C}\) is symmetric.
Gaussian states in the quadrature basis¶
Historically, Gaussian states are parametrized not in terms of the covariance matrix \(\bm{\sigma}\) of the complex amplitudes \(\alpha_j\), but rather in terms of its quadrature components, the canonical positions \(q_j\) and canonical momenta \(p_j\),
where \(\hbar\) is a positive constant. There are at least three conventions for the value of this constant, \(\hbar \in \{1/2,1,2 \}\).
It is convenient to write the following vector \(\bm{r} = (q_0,\ldots,q_{\ell-1},p_0,\ldots,p_{\ell-1})\) whose mean is related to the vector of means \(\vec \beta\) as
Similarly the complex normal covariance matrix \(\bm{\sigma}\) of the variables \((\alpha_0,\ldots,\alpha_{\ell-1})\) is related to the normal covariance matrix \(\bm{V}\) of the variables \(\bm{r} = (q_0,\ldots,q_{\ell-1},p_0,\ldots,p_{\ell-1})\) as
Tip
To convert between the complex covariance matrix \(\bm{\sigma}\) and the quadrature covariance matrix \(\bm{V}\) use the functions thewalrus.quantum.Qmat()
and thewalrus.quantum.Covmat()
An important property of Gaussian states is that reduced (or marginal) states of a global Gaussian state are also Gaussian. This implies that the reduced covariance matrix of a subsystem of a Gaussian state together with a reduced vector of means fully characterize a reduced Gaussian state. The reduced covariance matrix for modes \(S = i_1,\ldots,i_n\) is obtained from the covariance matrix of the global state \(\bm{\sigma}\) or \(\bm{V}\) by keeping the columns and rows \(i_1,\ldots,i_n\) and \(i_1+\ell,\ldots,i_n+\ell\) of the original covariance matrix \(\bm{\sigma}\). Similarly one obtains the vector of means by keeping only entries \(i_1,\ldots,i_n\) and \(i_1+\ell,\ldots,i_n+\ell\) of the original vector of means \(\vec \beta\) or \(\bm{\bar{r}}\). Using the notation previously introduced, one can succinctly write the covariance matrix of modes \(S=i_1,\ldots,i_m\) as \(\bm{\sigma}_{(S)}\) or \(\bm{V}_{(S)}\) , and similarly the vector of means as \(\vec{\beta}_{(S)}\) or \(\bm{\bar{r}}_{(S)}\).
Tip
To obtain the reduced covariance matrix and vector of means for a certain subset of the modes use thewalrus.quantum.reduced_gaussian()
.
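A minimal sketch, assuming the call signature reduced_gaussian(mu, cov, modes) returning the reduced vector of means and covariance matrix:

>>> import numpy as np
>>> from thewalrus.quantum import reduced_gaussian
>>> ell = 3
>>> mu = np.zeros(2 * ell)
>>> V = np.identity(2 * ell)                       # three vacuum modes in the hbar = 2 convention
>>> mu01, V01 = reduced_gaussian(mu, V, [0, 1])    # keep only modes 0 and 1
>>> V01.shape
(4, 4)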
Note that for \(\bm{V}\) to be a valid quantum covariance matrix it needs to be symmetric and satisfy the uncertainty relation
Tip
To verify if a given quadrature covariance matrix is a valid quantum covariance matrix use the function thewalrus.quantum.is_valid_cov()
A Gaussian state is pure \(\varrho = \ket{\psi} \bra{\psi}\) if and only if \(\text{det}\left( \tfrac{2}{\hbar} \bm{V} \right) = 1\).
Tip
To verify if a given quadrature covariance matrix is a valid quantum covariance matrix and corresponds to a pure state use the function thewalrus.quantum.is_pure_cov()
Finally, there is a special subset of Gaussian states called classical whose covariance matrix satisfies
This terminology is explained in the next section when sampling is discussed.
Tip
To verify if a given quadrature covariance matrix is a valid quantum covariance matrix and corresponds to a classical state use the function thewalrus.quantum.is_classical_cov()
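For example, a single-mode squeezed vacuum state (written in the \(\hbar=2\) convention assumed by default in thewalrus.quantum) has a covariance matrix that is valid and pure, but not classical:

>>> import numpy as np
>>> from thewalrus.quantum import is_valid_cov, is_pure_cov, is_classical_cov
>>> r = 1.0
>>> V = np.diag([np.exp(-2 * r), np.exp(2 * r)])   # quadrature covariance of a squeezed vacuum, hbar = 2
>>> is_valid_cov(V), is_pure_cov(V), is_classical_cov(V)
(True, True, False)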
Gaussian states in the Fock basis¶
In this section we use a generalization [23][24] of the results of Hamilton et al. [25] by providing an explicit expression for Fock basis matrix elements \(\langle \bm{m} | \rho | \bm{n} \rangle\), \(\bm{n} = (n_0,\ldots, n_{\ell-1}), \bm{m} = (m_0,\ldots, m_{\ell-1})\), of an \(\ell\)-mode Gaussian state \(\rho\) with covariance matrix \(\bm{\sigma}\) and displacement vector \(\vec \beta\). Note that these matrix elements can also be calculated using multidimensional Hermite polynomials as shown by Dodonov et al. [26]. Depending on how many of these elements are required one can prefer to calculate loop hafnians or multidimensional Hermite polynomials. In particular if one only needs a few matrix elements it is more advantageous to use the formulas derived below. On the other hand if one requires all the matrix elements up to a certain Fock occupation cutoff it is more efficient to use the methods of Dodonov et al., which are also implemented in this library.
We first define the following useful quantities:
We refer to \(\bm{\Sigma}\) as the Husimi covariance matrix.
As shown in detail in Appendix A of Ref. [24], the Fock matrix elements of a Gaussian state \(\rho\) are given by the expression
where \(\text{lhaf}\) is the loop hafnian and \(\text{vid}\) is the vector in diagonal notation introduced in the notation section.
Note that one can also obtain the probability of detecting a certain photon number pattern \(\bm{n} = (n_0,\ldots,n_{\ell-1})\) by calculating
Tip
To obtain the matrix element of a Gaussian state with quadrature covariance matrix \(\bm{V}\) and vector of means \(\bm{r}\), use the function thewalrus.quantum.density_matrix_element()
.
Tip
To obtain the Fock space density matrix of a Gaussian state with quadrature covariance matrix \(\bm{V}\) and vector of means \(\bm{r}\), use the function thewalrus.quantum.density_matrix()
.
In the case where the Gaussian state \(\varrho = |\psi \rangle \langle \psi|\) is pure, the matrix element
factorizes into a product of two amplitudes. In Ref. [23] it was shown that the Fock amplitude of a Gaussian state is also given by a loop hafnian. Then, for pure states, the matrix \(\bm{\bar{A}} = \bm{\bar{B}} \oplus \bm{\bar{B}}^*\).
Tip
To obtain the overlap of a pure gaussian state with quadrature covariance matrix \(\bm{V}\) and vector of means \(\bm{r}\) and a given Fock state \(\langle \bm{n}|\) use the function thewalrus.quantum.pure_state_amplitude()
.
Tip
To obtain the Fock space state vector (ket) of a pure gaussian state with quadrature covariance matrix \(\bm{V}\) and vector of means \(\bm{r}\) use the function thewalrus.quantum.state_vector()
.
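As a small sketch (default \(\hbar=2\) convention), the Fock representation of a single-mode squeezed vacuum can be obtained from its covariance matrix; the state has only even photon-number components, and for a pure state the density matrix is the outer product of the ket with itself:

>>> import numpy as np
>>> from thewalrus.quantum import state_vector, density_matrix
>>> r = 0.5
>>> mu = np.zeros(2)
>>> V = np.diag([np.exp(-2 * r), np.exp(2 * r)])     # single-mode squeezed vacuum, hbar = 2
>>> psi = state_vector(mu, V, cutoff=6)
>>> np.allclose(psi[1::2], 0)                        # only even Fock amplitudes are nonzero
True
>>> rho = density_matrix(mu, V, cutoff=6)
>>> np.allclose(rho, np.outer(psi, psi.conj()))
True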
Gaussian states and threshold detection¶
In the last section we sketched how to obtain the probability that a certain photon-number outcome is obtained when a Gaussian state is measured with photon-number detectors. In this section we show how to obtain the analogous probability for the case of threshold detectors. These binary-outcome detectors can only distinguish between the vacuum state and occupied states, and thus for a single mode they are described by the POVM elements
where \(\ket{0_i}\) is the vacuum state of mode \(i\) and \(1_i\) is the identity in the Hilbert space of mode \(i\).
For an \(\ell\) mode Gaussian state with zero mean, the outcome of threshold detection in all of its modes is described by a bitstring vector \(\bm{n} = (n_0,\ldots,n_{\ell-1})\) and the probability of the event is given by Born’s rule according to
where \(\text{tor}\) is the Torontonian. For a \(2 \ell \times 2 \ell\) matrix \(\bm{O}\) the Torontonian is defined as
The torontonian can be thought of as a generating function for hafnians (cf. the trace algorithm formula in algorithms section).
Tip
The torontonian is implemented as thewalrus.tor()
.
Gaussian Boson Sampling¶
Section author: Nicolas Quesada <nicolas@xanadu.ai>
What is Gaussian Boson Sampling¶
Gaussian Boson Sampling was introduced in Ref. [25] as a problem for which a non-universal quantum computer could potentially show an exponential speedup over a classical computer. In the ideal scenario, a certain number \(N\) of squeezed states are sent into an \(M \times M\) interferometer and are then probed by photon-number-resolving detectors. Hamilton et al. argue that, under certain conjectures and assumptions, it should be extremely inefficient for a classical computer to generate the samples that the quantum optical experiment just sketched generates by construction. Note that the setup described by Hamilton et al. is a special case of a Gaussian state. We consider Gaussian states that, except for having zero mean, are arbitrary.
In this section we describe a classical algorithm introduced in Ref. [27], that generates GBS samples in exponential time in the number of photons measured. This algorithm reduces the generation of a sample to the calculation of a chain of conditional probabilities, in which subsequent modes are interrogated as to how many photons should be detected conditioned on the detection record of previously interrogated modes. The exponential complexity of the algorithm stems from the fact that the conditional probabilities necessary to calculate a sample are proportional to hafnians of matrices of size \(2N\times 2N\), where \(N\) is the number of photons detected.
An exponential-time classical algorithm for GBS¶
As explained in the previous section the reduced or marginal density matrices of a Gaussian state are also Gaussian, and it is straightforward to compute their marginal covariance matrices given the covariance matrix of the global quantum state.
Let \(\bm{\Sigma}\) be the Husimi covariance matrix of the Gaussian state being measured with photon-number resolving detectors. Let \(S = (i_0,i_1,\ldots,i_{m-1})\) be a set of indices specifying a subset of the modes. In particular we write \(S=[k] = (0,1,2,\ldots, k-1)\) for the first \(k\) modes. We write \(\bm{\Sigma}_{(S)}\) to indicate the covariance matrix of the modes specified by the index set \(S\) and define \(\bm{A}^{S} := \bm{X} \left[\mathbb{I} - \left( \bm{\Sigma}_{(S)}\right)^{-1} \right]\).
As shown in the previous section, these matrices can be used to calculate photon-number probabilities as
where \(\bm{N}=\left\{N_{i_1},N_{i_2},\ldots,N_{i_m} \right\}\) is a random variable denoting the measurement outcome, and \(\bm{n} = \left\{n_{i_1},n_{i_2},\ldots,n_{i_m} \right\}\) is a set of integers that represent the actual outcomes of the photon number measurements and \(\bm{n}! = \prod_{j=0}^{m-1} n_j!\)
Now, we want to introduce an algorithm to generate samples of the random variable \(\{N_0,\ldots,N_{M-1}\}\) distributed according to the GBS probability distribution. To generate samples, we proceed as follows: First, we can always calculate the following probabilities
for \(n_0=0,1,\ldots, n_{\max}\), where \(n_{\max}\) is a cut-off on the maximum number of photons we can hope to detect in this mode. Having constructed said probabilities, we can always generate a sample for the number of photons in the first mode. This will fix \(N_0 = n_0\). Now we want to sample from \(N_1|N_0=n_0\). To this end, we use the definition of conditional probability
We can, as before, use this formula to sample the number of photons in the second mode conditioned on observing \(n_0\) photons in the first mode. Note that the factor \(p(N_0=n_0)\) is already known from the previous step. By induction, the conditional probability of observing \(n_k\) photons in the \(k\)-th mode, given observations of \(n_0,\ldots,n_{k-1}\) photons in the previous \(k\) modes, is given by
where \(p(N_{k-1}=n_{k-1},\ldots,N_0=n_0)\) has already been calculated from previous steps. The algorithm then proceeds as follows: for mode \(k\), we use the previous equation to sample the number of photons in that mode conditioned on the number of photons in the previous \(k\) modes. Repeating this sequentially for all modes produces the desired sample.
Tip
To generate samples from a gaussian state specified by a quadrature covariance matrix use thewalrus.samples.generate_hafnian_sample()
.
Note that the above algorithm can also be generalized to states with finite means for which one only needs to provide the mean with the optional argument
mean
.
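A minimal sketch, assuming generate_hafnian_sample takes the quadrature covariance matrix as its first argument together with an optional Fock cutoff:

>>> import numpy as np
>>> from thewalrus.samples import generate_hafnian_sample
>>> r = 0.8
>>> V = np.diag([np.exp(-2 * r), np.exp(2 * r)])       # one squeezed mode, hbar = 2
>>> sample = generate_hafnian_sample(V, cutoff=10)     # a random outcome, e.g. [0], [2], [4], ...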
Threshold detection samples¶
Note that the arguments presented in the previous section can also be generalized to threshold detection. In this case one simply needs to replace \(\text{haf} \to \text{tor}\) and \(\bm{A}^{[k+1]}_{(n_0,n_1,\ldots,n_k)} \to \bm{O}^{[k+1]}_{(n_0,n_1,\ldots,n_k)}\), where \(\bm{O}^{S} = \left[\mathbb{I} - \left( \bm{\Sigma}_{(S)}\right)^{-1} \right]\).
Tip
To generate threshold samples from a gaussian state specified by a quadrature covariance matrix use thewalrus.samples.generate_torontonian_sample()
.
Sampling of classical states¶
In the previous section it was mentioned that states whose covariance matrix satisfies \(\bm{V} \geq \frac{\hbar}{2}\mathbb{I}\) are termed classical. This designation is due to the fact that for these states it is possible to obtain a polynomial-time (cubic) algorithm to generate photon-number or threshold samples [28].
Tip
To generate photon number or threshold samples from a classical gaussian state specified by a quadrature covariance matrix use thewalrus.samples.hafnian_sample_classical_state()
or thewalrus.samples.torontonian_sample_classical_state()
.
Note that one can use this observation to sample from a probability distribution that is proportional to the permanent of a positive semidefinite matrix, for details on how this is done cf. Ref. [29].
Tip
To generate photon number samples from a thermal state parametrized by a positive semidefinite real matrix use the module thewalrus.csamples
.
Notation¶
Section author: Nicolás Quesada <nicolas@xanadu.ai>
Matrices and vectors¶
In this section we introduce the notation that is used in the rest of the documentation.
We deal with so-called bivectors in which the second half of their components is the complex conjugate of the first half. Thus if \(\bm{u} = (u_1,\ldots u_\ell) \in \mathbb{C}^{\ell}\) then \(\vec{\alpha} = (\bm{u},\bm{u}^*) = (u_1,\ldots,u_\ell,u_1^*,\ldots,u_\ell^*)\) is a bivector. We use uppercase letters for (multi)sets such as \(S = \{1,1,2\}\).
Following standard Python and C convention, we index starting from zero; thus the first \(\ell\) natural numbers are \([\ell]:=\{0,1,\ldots,\ell-1\}\). We define the \(\text{diag}\) function as follows: when acting on a vector \(\bm{u}\) it returns a square diagonal matrix with the entries of \(\bm{u}\) on its diagonal; when acting on a square matrix it returns the vector of its diagonal entries.
Finally, we define the vector in diagonal (\(\text{vid}\)) operation, that takes a matrix \(\bm{A}\) of size \(n \times n\) and a vector \(\bm{u}\) of size \(n\) and returns the matrix
which is simply the matrix \(\bm{A}\) with the vector \(\bm{u}\) placed along its diagonal.
The reduction operation¶
It is very useful to have compact notation to deal with matrices that are constructed by removing or repeating rows and column of a given primitive matrix. Imagine for example a given \(4 \times 4\) matrix
and that you want to construct the following matrix
where the first row and column have been kept as they were, the second row and column have been repeated 3 times, the third row and column have been eliminated and the fourth and the last row and column have been repeated twice. To specify the number of repetitions (or elimination) of a given row-column we simply specify a vector of integers where each value tells us the number of repetitions, and we use the value 0 to indicate that a given row-column is removed. Thus defining \(\bm{m}=(1,3,0,2)\) we find its reduction by \(\bm{m}\) to be precisely the matrix in the last equation
One can also define the reduction operation on vectors. For instance, if \(\bm{u}=(u_0,u_1,u_2,u_3)\) and \(\bm{m}=(1,3,0,2)\) then \(\bm{u}_{\bm{m}} = (u_0,u_1,u_1,u_1,u_3,u_3)\).
Tip
The reduction operation is implemented as
thewalrus.reduction()
.
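A short sketch of the reduction operation on a generic (hypothetical) \(4 \times 4\) matrix with the index vector used above:

>>> import numpy as np
>>> from thewalrus import reduction
>>> A = np.arange(16).reshape(4, 4)     # stand-in for the 4x4 matrix of the example
>>> m = [1, 3, 0, 2]
>>> reduction(A, m).shape               # row/column 0 kept, 1 repeated three times, 2 removed, 3 repeated twice
(6, 6)
>>> reduction(A, m)[0]                  # first row: A[0, j] for j in (0, 1, 1, 1, 3, 3)
array([0, 1, 1, 1, 3, 3])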
Reduction on block matrices¶
When dealing with Gaussian states one typically encounters \(2\ell \times 2 \ell\) block matrices of the following form
where \(\bm{C},\bm{D},\bm{E},\bm{F}\) are of size \(\ell \times \ell\). Now imagine that one applies the reduction operation by a vector \(\bm{n} \in \mathbb{N}^{\ell}\) to each of the blocks. We introduce the following notation
where we have used the notation \((\bm{n})\) with the round brackets \(()\) to indicate that the reduction is applied to the blocks of the matrix \(\bm{A}\).
Similarly to block matrices, one can also define a reduction operator for bivectors. Thus if \(\vec \beta = (u_0,u_1,u_2,u_3,u_0^*,u_1^*,u_2^*,u_3^*)\) and \(\bm{m}=(1,3,0,2)\), then
The reduction operation in terms of sets¶
A different way of specifying how many times a given row and column must be repeated is by giving a (multi)set in which we simply list the rows and columns to be repeated. Thus, for example, the reduction index vector \(\bm{n} = (1,3,0,2)\) can alternatively be given as the multiset \(S=\{0,1,1,1,3,3 \}\), where the element 0 appears once to indicate that the first row and column are repeated once, the index 1 appears three times to indicate that this row and column are repeated three times, etcetera.
Similarly for matrices of even size for which the following partition makes sense
where \(\bm{A}\) is of size \(2\ell \times 2\ell\) and \(\bm{C},\bm{D},\bm{E},\bm{F}\) are of size \(\ell \times \ell\) we define
This implies that if the index \(i\) appears \(m_i\) times in \(S\) then the columns \(i\) and \(i+\ell\) of \(\bm{A}\) will be repeated \(m_i\) times in \(\bm{A}_S\).
For instance if
and \(S=\{0,0,2,2,2\}\) one finds
The notation also extends in a straightforward fashion for bivectors. For example \(\vec \beta = (u_0,u_1,u_2,u_3,u_0^*,u_1^*,u_2^*,u_3^*)\) and \(S=\{1,1,2\}\) then \(\vec \beta_{(S)} = (u_1,u_1,u_2,u_1^*,u_1^*,u_2^*)\).
This notation becomes handy when describing certain algorithms for the calculation of the hafnian and torontonian introduced in the rest of the documentation.
Combining reduction and vector in diagonal¶
Here we show some basic examples of how the reduction and vector in diagonal operations work together
Consider the following matrix
and bivector \(\vec{\beta} = (\beta_0,\beta_1,\beta_2,\beta_0^*,\beta_1^*,\beta_2^*)\), and suppose we are given the index vector \(\bm{n} = (1,0,3)\). Then we can calculate the following
and finally write
Note that because there are repetitions, the diagonal elements of the matrix \(\bm{A}\) appear off diagonal in \(\bm{A}_{(\bm{n})}\) and also in \(\text{vid}(\bm{A}_{(\bm{n})},\vec{\beta}_{\bm{n}})\).
One can ignore the block structure of the matrix \(A\) and bivector \(\vec{\beta}\), and treat them as 6 dimensional objects and use an index vector of the same length. If we now define \(\bm{p} = (1,0,3,0,2,2)\) one finds
Note that we wrote \(\Sigma_{\bm{p}}\) and not \(\Sigma_{(\bm{p})}\) to indicate that we ignore the block structure of the matrix \(\Sigma\).
References¶
- 1
Andreas Björklund and Thore Husfeldt. Exact algorithms for exact satisfiability and number of perfect matchings. Algorithmica, 52(2):226–249, 2008.
- 2
Raymond Kan. From moments of sum to moments of product. Journal of Multivariate Analysis, 99(3):542–554, 2008.
- 3
Mikko Koivisto. Partitioning into sets of bounded cardinality. In International Workshop on Parameterized and Exact Computation, 258–263. Springer, 2009.
- 4
Jesper Nederlof. Fast polynomial-space algorithms using möbius inversion: improving on steiner tree and related problems. In International Colloquium on Automata, Languages, and Programming, 713–725. Springer, 2009.
- 5
Andreas Björklund. Counting perfect matchings as fast as Ryser. In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, 914–921. SIAM, 2012.
- 6
Andreas Björklund, Brajesh Gupt, and Nicolás Quesada. A faster hafnian formula for complex matrices and its benchmarking on a supercomputer. Journal of Experimental Algorithmics (JEA), 24(1):11, 2019.
- 7
Marek Cygan and Marcin Pilipczuk. Faster exponential-time algorithms in graphs of bounded average degree. Information and Computation, 243:75–85, 2015.
- 8
Herbert John Ryser. Combinatorial mathematics. Number 14. Mathematical Association of America; distributed by Wiley [New York], 1963.
- 9
Grzegorz Rempala and Jacek Wesolowski. Symmetric functionals on random matrices and random matchings problems. Volume 147. Springer Science & Business Media, 2007.
- 10
Donald E. Knuth. The Art of Computer Programming, Vol. 2: Seminumerical Algorithms. Addison-Wesley, 3 edition, 1998.
- 11
Leslie G Valiant. The complexity of computing the permanent. Theoretical computer science, 8(2):189–201, 1979.
- 12
Alexander Barvinok. Approximating permanents and hafnians. arXiv preprint arXiv:1601.07518, 2016.
- 13
Piotr Sankowski. Alternative algorithms for counting all matchings in graphs. In Helmut Alt and Michel Habib, editors, STACS 2003, 427–438. Berlin, Heidelberg, 2003. Springer Berlin Heidelberg.
- 14
Alexander Barvinok. Polynomial time algorithms to approximate permanents and mixed discriminants within a simply exponential factor. Random Structures & Algorithms, 14(1):29–61, 1999.
- 15
Mark Rudelson, Alex Samorodnitsky, Ofer Zeitouni, and others. Hafnians, perfect matchings and gaussian matrices. The Annals of Probability, 44(4):2858–2888, 2016.
- 16
Nicolás Quesada, Juan Miguel Arrazola, and Nathan Killoran. Gaussian boson sampling using threshold detectors. Physical Review A, 98(6):062322, 2018.
- 17
Alexander I Barvinok. Two algorithmic results for the traveling salesman problem. Mathematics of Operations Research, 21(1):65–84, 1996.
- 18
S Berkowitz and FJ Garner. The calculation of multidimensional Hermite polynomials and Gram-Charlier coefficients. Math. Comput., 24(111):537–545, 1970.
- 19
Alessio Serafini. Quantum Continuous Variables: A Primer of Theoretical Methods. CRC Press, 2017.
- 20
Christian Weedbrook, Stefano Pirandola, Raúl García-Patrón, Nicolas J Cerf, Timothy C Ralph, Jeffrey H Shapiro, and Seth Lloyd. Gaussian quantum information. Reviews of Modern Physics, 84(2):621, 2012.
- 21
Bernard Picinbono. Second-order complex random vectors and normal distributions. IEEE Transactions on Signal Processing, 44(10):2637–2640, 1996.
- 22
R Simon, N Mukunda, and Biswadeb Dutta. Quantum-noise matrix for multimode systems: $U(n)$ invariance, squeezing, and normal forms. Physical Review A, 49(3):1567, 1994.
- 23
Nicolás Quesada. Franck-Condon factors by counting perfect matchings of graphs with loops. The Journal of Chemical Physics, 150(16):164113, 2019.
- 24
N. Quesada, L. G. Helt, J. Izaac, J. M. Arrazola, R. Shahrokhshahi, C. R. Myers, and K. K. Sabapathy. Simulating realistic non-gaussian state preparation. Phys. Rev. A, 100:022341, 2019.
- 25
Craig S Hamilton, Regina Kruse, Linda Sansoni, Sonja Barkhofen, Christine Silberhorn, and Igor Jex. Gaussian boson sampling. Physical review letters, 119(17):170501, 2017.
- 26
VV Dodonov, OV Man’ko, and VI Man’ko. Multidimensional Hermite polynomials and photon distribution for polymode mixed light. Physical Review A, 50(1):813, 1994.
- 27
Nicolás Quesada and Juan Miguel Arrazola. Exact simulation of gaussian boson sampling in polynomial space and exponential time. Physical Review Research, 2(2):023005, 2020.
- 28
Saleh Rahimi-Keshari, Austin P Lund, and Timothy C Ralph. What can quantum optics say about computational complexity theory? Physical review letters, 114(6):060501, 2015.
- 29
Soran Jahangiri, Juan Miguel Arrazola, Nicolás Quesada, and Nathan Killoran. Point processes with gaussian boson sampling. Physical Review E, 101(2):022134, 2020.
- 30
Alexander Barvinok. Combinatorics and complexity of partition functions. Volume 274. Springer, 2016.
- 31
Eduardo R Caianiello. On quantum field theory—I: explicit solution of Dyson’s equation in electrodynamics without use of Feynman graphs. Il Nuovo Cimento (1943-1954), 10(12):1634–1652, 1953.
- 32
Settimo Termini. Imagination and Rigor: Essays on Eduardo R. Caianiello’s Scientific Heritage. Springer, 2006.
- 33
Maurice M Mizrahi. Generalized Hermite polynomials. Journal of Computational and Applied Mathematics, 1(3):137–140, 1975.
- 34
Pieter Kok and Samuel L Braunstein. Multi-dimensional Hermite polynomials in quantum optics. Journal of Physics A: Mathematical and General, 34(31):6185, 2001.
- 35
Kurt Bernardo Wolf. Canonical transforms. i. complex linear transforms. Journal of Mathematical Physics, 15(8):1295–1301, 1974.
- 36
Kramer P, M Moshinsky, and T.H Seligman. Complex extensions of canonical transformations and quantum mechanics. In E.M. Loebl, editor, Group Theory and Its Applications, pages 250–300. Academic, New York, 1975.
- 37
EV Doktorov, IA Malkin, and VI Man’ko. Dynamical symmetry of vibronic transitions in polyatomic molecules and the Franck-Condon principle. Journal of Molecular Spectroscopy, 64(2):302–326, 1977.
Overview¶
The Walrus contains a Python interface, and a low-level C++ libwalrus
library.
Python interface¶
- The thewalrus Python interface provides access to various highly optimized hafnian, permanent, and torontonian algorithms
- The thewalrus.quantum submodule provides access to various utility functions that act on Gaussian quantum states
- The thewalrus.samples submodule provides access to algorithms to sample from the hafnian or the torontonian of Gaussian quantum states
- The thewalrus.symplectic submodule provides access to a convenient set of symplectic matrices and utility functions to manipulate them
- The thewalrus.random submodule provides access to random unitary, symplectic and covariance matrices
- The thewalrus.fock_gradients submodule provides access to the Fock representation of certain continuous-variable gates and their gradients
- The thewalrus.reference submodule provides access to pure-Python reference implementations of the hafnian, loop hafnian, and torontonian
Low-level libraries¶
The low-level libwalrus C++ library is a header-only library containing various parallelized algorithms for computing the hafnian, loop hafnian, permanent, and torontonian of complex, real, and integer matrices. This library is used under the hood by the Python thewalrus
module.
You can also use the libwalrus
library directly in your C++ projects - just ensure that the include
folder is in your include path, and add
#include <libwalrus.hpp>
at the top of your C++ source file. See the file example.cpp
, as well as the corresponding Makefile, for an example of how the libwalrus
library can be accessed directly from C++ code.
Alternatively, if you install The Walrus package as a Python wheel using pip
, you can link against the static pre-built library provided.
Octave¶
In addition, two auxiliary Octave functions are provided: octave/hafnian.m
and octave/loophafnian.m
.
Python library¶
This is the top-level module of The Walrus Python interface, containing functions for computing the hafnian, loop hafnian, and torontonian of matrices.
Algorithm terminology¶
- Eigenvalue hafnian algorithm
The algorithm described in A faster hafnian formula for complex matrices and its benchmarking on a supercomputer, [6]. This algorithm scales like \(\mathcal{O}(n^3 2^{n/2})\), and supports calculation of the loop hafnian.
- Recursive hafnian algorithm
The algorithm described in Counting perfect matchings as fast as Ryser [5]. This algorithm scales like \(\mathcal{O}(n^4 2^{n/2})\). This algorithm does not currently support the loop hafnian.
- Repeating hafnian algorithm
The algorithm described in From moments of sum to moments of product, [2]. This method is more efficient for matrices with repeated rows and columns, and supports calculation of the loop hafnian.
- Approximate hafnian algorithm
The algorithm described in Polynomial time algorithms to approximate permanents and mixed discriminants within a simply exponential factor, [14]. This algorithm allows us to efficiently approximate the hafnian of matrices with non-negative elements. This is done by sampling determinants; the larger the number of samples taken, the higher the accuracy.
- Batched hafnian algorithm
An algorithm that allows the calculation of hafnians of all reductions of a given matrix up to the cutoff (resolution) provided. Internally, this algorithm makes use of the multidimensional Hermite polynomials as per The calculation of multidimensional Hermite polynomials and Gram-Charlier coefficients [18].
- Low-rank hafnian algorithm
An algorithm that allows one to calculate the hafnian of a rank-\(r\) matrix \(\bm{A}\) of size \(n \times n\) by factorizing it as \(\bm{A} = \bm{G} \bm{G}^T\) where \(\bm{G}\) is of size \(n \times r\). The algorithm is described in Appendix C of A faster hafnian formula for complex matrices and its benchmarking on a supercomputer, [6].
Python wrappers¶
Function | Description |
---|---|
hafnian | Returns the hafnian of a matrix. |
hafnian_repeated | Returns the hafnian of a matrix with repeated rows/columns. |
hafnian_batched | Calculates the hafnian of reduction(A, k) for all values of the vector k below a given cutoff. |
tor | Returns the Torontonian of a matrix. |
perm | Returns the permanent of a matrix via the Ryser formula. |
permanent_repeated | Calculates the permanent of matrix \(A\), where the ith row/column of \(A\) is repeated \(rpt_i\) times. |
hermite_multidimensional | Returns the multidimensional Hermite polynomials \(H_k^{(R)}(y)\). |
Pure Python functions¶
Function | Description |
---|---|
reduction | Calculates the reduction of an array by a vector of indices. |
version | Get version number of The Walrus. |
low_rank_hafnian | Returns the hafnian of the low rank matrix \(\bm{A} = \bm{G} \bm{G}^T\) where \(\bm{G}\) is rectangular of size \(n \times r\) with \(r \leq n\). |
-
hafnian(A, loop=False, recursive=True, tol=1e-12, quad=True, approx=False, num_samples=1000)
Returns the hafnian of a matrix.
For more direct control, you may wish to call
haf_real()
,haf_complex()
, orhaf_int()
directly.- Parameters
A (array) – a square, symmetric array of even dimensions.
loop (bool) – If
True
, the loop hafnian is returned. Default isFalse
.recursive (bool) – If
True
, the recursive algorithm is used. Note: the recursive algorithm does not currently support the loop hafnian. Ifloop=True
, then this keyword argument is ignored.tol (float) – the tolerance when checking that the matrix is symmetric. Default tolerance is 1e-12.
quad (bool) – If
True
, the hafnian algorithm is performed with quadruple precision.approx (bool) – If
True
, an approximation algorithm is used to estimate the hafnian. Note that the approximation algorithm can only be applied to matricesA
that only have non-negative entries.num_samples (int) – If
approx=True
, the approximation algorithm performsnum_samples
iterations for estimation of the hafnian of the non-negative matrixA
.
- Returns
the hafnian of matrix A.
- Return type
np.int64 or np.float64 or np.complex128
-
hafnian_repeated(A, rpt, mu=None, loop=False, tol=1e-12)
Returns the hafnian of a matrix with repeated rows/columns.
The
reduction()
function may be used to show the resulting matrix with repeated rows and columns as perrpt
.As a result, the following are identical:
>>> hafnian_repeated(A, rpt) >>> hafnian(reduction(A, rpt))
However, using
hafnian_repeated
in the case where there are a large number of repeated rows and columns (\(\sum_{i}rpt_i \gg N\)) can be significantly faster.Note
If \(rpt=(1, 1, \dots, 1)\), then
>>> hafnian_repeated(A, rpt) == hafnian(A)
For more direct control, you may wish to call
haf_rpt_real()
orhaf_rpt_complex()
directly.- Parameters
A (array) – a square, symmetric \(N\times N\) array.
rpt (Sequence) – a length-\(N\) positive integer sequence, corresponding to the number of times each row/column of matrix \(A\) is repeated.
mu (array) – a vector of length \(N\) representing the vector of means/displacement. If not provided,
mu
is set to the diagonal of matrixA
. Note that this only affects the loop hafnian.loop (bool) – If
True
, the loop hafnian is returned. Default isFalse
.use_eigen (bool) – if True (default), the Eigen linear algebra library is used for matrix multiplication. If the hafnian library was compiled with BLAS/Lapack support, then BLAS will be used for matrix multiplication.
tol (float) – the tolerance when checking that the matrix is symmetric. Default tolerance is 1e-12.
- Returns
the hafnian of matrix A.
- Return type
np.int64 or np.float64 or np.complex128
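A short consistency sketch, under the same installation assumption, showing that hafnian_repeated agrees with applying hafnian to the explicitly reduced matrix:

>>> import numpy as np
>>> from thewalrus import hafnian, hafnian_repeated, reduction
>>> A = np.array([[0.0, 1.0], [1.0, 0.5]])   # square, symmetric matrix
>>> rpt = [2, 2]                             # repeat each row/column twice
>>> hafnian_repeated(A, rpt)                 # expected to agree with the line below
>>> hafnian(reduction(A, rpt))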
-
hafnian_batched
(A, cutoff, mu=None, tol=1e-12, renorm=False, make_tensor=True)[source]¶ Calculates the hafnian of
reduction(A, k)
for all possible values of vectork
below the specified cutoff.Here,
\(A\) is an \(n\times n\) square matrix
\(k\) is a vector of (non-negative) integers with the same dimensions as \(A\), i.e., \(k = (k_0,k_1,\ldots,k_{n-1})\), and where \(0 \leq k_j < \texttt{cutoff}\).
The function
hafnian_repeated()
can be used to calculate the reduced hafnian for a specific value of \(k\); see the documentation for more information.Note
If
mu
is notNone
, this function instead returnshafnian(np.fill_diagonal(reduction(A, k), reduction(mu, k)), loop=True)
. This calculation can only be performed if the matrix \(A\) is invertible.- Parameters
A (array) – a square, symmetric \(N\times N\) array.
cutoff (int) – maximum size of the subindices in the Hermite polynomial
mu (array) – a vector of length \(N\) representing the vector of means/displacement
renorm (bool) – If
True
, the returned hafnians are normalized, that is, \(haf(reduction(A, k))/\sqrt{\prod_i k_i!}\) (or \(lhaf(fill\_diagonal(reduction(A, k),reduction(mu, k)))\) if
is not None)make_tensor – If
False
, returns a flattened one dimensional array instead of a tensor with the values of the hafnians.
- Returns
the values of the hafnians for each value of \(k\) up to the cutoff
- Return type
(array)
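For instance (a minimal sketch, checked only against the documented output shape and the relation to hafnian_repeated noted above), batching a \(2\times 2\) matrix with a cutoff of 3 yields a \(3\times 3\) tensor indexed by \((k_0, k_1)\):

>>> import numpy as np
>>> from thewalrus import hafnian_batched, hafnian_repeated
>>> A = np.array([[0.0, 1.0], [1.0, 0.0]])
>>> haf_tensor = hafnian_batched(A, 3)
>>> haf_tensor.shape                          # expected: (3, 3)
>>> haf_tensor[1, 1]                          # expected to equal hafnian_repeated(A, [1, 1])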
-
tor
(A, fsum=False)[source]¶ Returns the Torontonian of a matrix.
For more direct control, you may wish to call
tor_real()
ortor_complex()
directly.The input matrix is cast to quadruple precision internally for a quadruple precision torontonian computation.
- Parameters
A (array) – a np.complex128, square, symmetric array of even dimensions.
fsum (bool) – if
True
, the Shewchuk algorithm for more accurate summation is performed. This can significantly increase the accuracy of the computation, but no casting to quadruple precision takes place, as the Shewchuk algorithm only supports double precision.
- Returns
the torontonian of matrix A.
- Return type
np.float64 or np.complex128
-
perm
(A, quad=True, fsum=False)[source]¶ Returns the permanent of a matrix via the Ryser formula.
For more direct control, you may wish to call
perm_real()
orperm_complex()
directly.- Parameters
A (array) – a square array.
quad (bool) – If
True
, the input matrix is cast to along double
matrix internally for a quadruple precision hafnian computation.fsum (bool) – Whether to use the
fsum
method for higher accuracy summation. Note that iffsum
is true, double precision will be used, and thequad
keyword argument will be ignored.
- Returns
the permanent of matrix A.
- Return type
np.float64 or np.complex128
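As a quick sanity check (same installation assumption), the permanent of the \(n\times n\) all-ones matrix is \(n!\):

>>> import numpy as np
>>> from thewalrus import perm
>>> perm(np.ones((4, 4)))     # expected: 24.0, since 4! = 24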
-
permanent_repeated
(A, rpt)[source]¶ Calculates the permanent of matrix \(A\), where the ith row/column of \(A\) is repeated \(rpt_i\) times.
This function constructs the matrix
\[\begin{split}B = \begin{bmatrix} 0 & A\\ A^T & 0 \end{bmatrix},\end{split}\]and then calculates \(perm(A)=haf(B)\), by calling
>>> hafnian_repeated(B, rpt*2, loop=False)
- Parameters
A (array) – matrix of size [N, N]
rpt (Sequence) – sequence of N positive integers indicating the corresponding rows/columns of A to be repeated.
- Returns
the permanent of matrix A.
- Return type
np.int64 or np.float64 or np.complex128
-
reduction
(A, rpt)[source]¶ Calculates the reduction of an array by a vector of indices.
This is equivalent to repeating the ith row/column of \(A\), \(rpt_i\) times.
- Parameters
A (array) – matrix of size [N, N]
rpt (Sequence) – sequence of N positive integers indicating the corresponding rows/columns of A to be repeated.
- Returns
the reduction of A by the index vector rpt
- Return type
array
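For example, reducing a \(2\times 2\) matrix with rpt = [2, 1] repeats the first row and column twice and the second once, yielding a \(3\times 3\) matrix (a minimal sketch):

>>> import numpy as np
>>> from thewalrus import reduction
>>> A = np.array([[1, 2], [2, 3]])
>>> reduction(A, [2, 1])      # expected: [[1, 1, 2], [1, 1, 2], [2, 2, 3]]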
-
hermite_multidimensional
(R, cutoff, y=None, renorm=False, make_tensor=True, modified=False)[source]¶ Returns the multidimensional Hermite polynomials \(H_k^{(R)}(y)\).
Here \(R\) is an \(n \times n\) square matrix, and \(y\) is an \(n\) dimensional vector. The polynomials are parametrized by the multi-index \(k=(k_0,k_1,\ldots,k_{n-1})\), and are calculated for all values \(0 \leq k_j < \text{cutoff}\), thus a tensor of dimensions \(\text{cutoff}^n\) is returned.
This tensor can either be flattened into a vector or returned as an actual tensor with \(n\) indices.
Note
Note that if \(R = (1)\) then \(H_k^{(R)}(y)\) are precisely the well known probabilists’ Hermite polynomials \(He_k(y)\), and if \(R = (2)\) then \(H_k^{(R)}(y)\) are precisely the well known physicists’ Hermite polynomials \(H_k(y)\).
- Parameters
R (array) – square matrix parametrizing the Hermite polynomial family
cutoff (int) – maximum size of the subindices in the Hermite polynomial
y (array) – vector argument of the Hermite polynomial
renorm (bool) – If
True
, normalizes the returned multidimensional Hermite polynomials such that \(H_k^{(R)}(y)/\prod_i k_i!\)make_tensor (bool) – If
False
, returns a flattened one dimensional array containing the values of the polynomialmodified (bool) – whether to return the modified multidimensional Hermite polynomials or the standard ones
- Returns
the multidimensional Hermite polynomials
- Return type
(array)
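A minimal single-mode sketch of the interface documented above; per the note, with \(R = (2)\) the returned values should coincide with the physicists' Hermite polynomials evaluated at the given point:

>>> import numpy as np
>>> from thewalrus import hermite_multidimensional
>>> R = np.array([[2.0]])                     # single-mode case
>>> y = np.array([1.0])
>>> H = hermite_multidimensional(R, 4, y=y)
>>> H.shape                                   # expected: (4,), one entry per 0 <= k < cutoff
>>> H                                         # expected: [H_0(1), H_1(1), H_2(1), H_3(1)]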
Quantum algorithms¶
This submodule provides access to various utility functions that act on Gaussian quantum states.
For more details on how the hafnian relates to various properties of Gaussian quantum states, see:
Kruse, R., Hamilton, C. S., Sansoni, L., Barkhofen, S., Silberhorn, C., & Jex, I. “Detailed study of Gaussian boson sampling.” Physical Review A 100, 032326 (2019)
Hamilton, C. S., Kruse, R., Sansoni, L., Barkhofen, S., Silberhorn, C., & Jex, I. “Gaussian boson sampling.” Physical Review Letters, 119(17), 170501 (2017)
Quesada, N. “Franck-Condon factors by counting perfect matchings of graphs with loops.” Journal of Chemical Physics 150, 164113 (2019)
Quesada, N., Helt, L. G., Izaac, J., Arrazola, J. M., Shahrokhshahi, R., Myers, C. R., & Sabapathy, K. K. “Simulating realistic non-Gaussian state preparation.” Physical Review A 100, 022341 (2019)
Fock states and tensors¶
- pure_state_amplitude(): Returns the \(\langle i | \psi\rangle\) element of the state ket of a Gaussian state defined by covariance matrix cov.
- state_vector(): Returns the state vector of a (PNR post-selected) Gaussian state.
- density_matrix_element(): Returns the \(\langle i | \rho | j \rangle\) element of the density matrix of a Gaussian state defined by covariance matrix cov.
- density_matrix(): Returns the density matrix of a (PNR post-selected) Gaussian state.
- fock_tensor(): Calculates the Fock representation of a Gaussian unitary parametrized by the symplectic matrix S and the displacements alpha up to cutoff in Fock space.
- probabilities(): Generates the Fock space probabilities of a Gaussian state up to a Fock space cutoff.
- update_probabilities_with_loss(): Given a list of transmissivities and a tensor of probabilities, calculates an updated tensor of probabilities after loss is applied.
- update_probabilities_with_noise(): Given a list of noise probability distributions for each of the modes and a tensor of probabilities, calculates an updated tensor of probabilities after noise is applied.
- fidelity(): Calculates the fidelity between two Gaussian quantum states.
Details¶
-
pure_state_amplitude
(mu, cov, i, include_prefactor=True, tol=1e-10, hbar=2, check_purity=True)[source]¶ Returns the \(\langle i | \psi\rangle\) element of the state ket of a Gaussian state defined by covariance matrix cov.
- Parameters
mu (array) – length-\(2N\) quadrature displacement vector
cov (array) – length-\(2N\) covariance matrix
i (list) – list of amplitude elements
include_prefactor (bool) – if
True
, the prefactor is automatically calculated and used to scale the result.tol (float) – tolerance for determining if displacement is negligible
hbar (float) – the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\).
check_purity (bool) – if
True
, the purity of the Gaussian state is checked before calculating the state vector.
- Returns
the pure state amplitude
- Return type
complex
-
state_vector
(mu, cov, post_select=None, normalize=False, cutoff=5, hbar=2, check_purity=True, **kwargs)[source]¶ Returns the state vector of a (PNR post-selected) Gaussian state.
The resulting state vector will have shape
\[\underbrace{D\times D \times \cdots \times D}_M\]where \(D\) is the Fock space cutoff, and \(M\) is the number of non post-selected modes, i.e.
M = len(mu)//2 - len(post_select)
If post_select is None then the state vector is calculated using the multidimensional Hermite polynomials, which provide a significantly faster evaluation.
- Parameters
mu (array) – length-\(2N\) means vector in xp-ordering
cov (array) – \(2N\times 2N\) covariance matrix in xp-ordering
post_select (dict) – dictionary containing the post-selected modes, of the form
{mode: value}
.normalize (bool) – If
True
, a post-selected density matrix is re-normalized.cutoff (dim) – the final length (i.e., Hilbert space dimension) of each mode in the density matrix.
hbar (float) – the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\).
check_purity (bool) – if
True
, the purity of the Gaussian state is checked before calculating the state vector.
- Keyword Arguments
choi_r (float or None) – Value of the two-mode squeezing parameter used in Choi-Jamiolkoski trick in
fock_tensor()
. This keyword argument should only be used whenstate_vector
is called byfock_tensor()
.- Returns
the state vector of the Gaussian state
- Return type
np.array[complex]
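A minimal sketch, assuming state_vector is importable from thewalrus.quantum: for the single-mode vacuum (zero means and identity covariance when \(\hbar = 2\)) the returned ket is \((1, 0, 0, \dots)\) up to the cutoff.

>>> import numpy as np
>>> from thewalrus.quantum import state_vector
>>> mu = np.zeros(2)                 # zero displacement
>>> cov = np.identity(2)             # vacuum covariance matrix for hbar = 2
>>> psi = state_vector(mu, cov, cutoff=5)
>>> psi.shape                        # expected: (5,)
>>> np.round(np.abs(psi), 6)         # expected: [1. 0. 0. 0. 0.]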
-
density_matrix_element
(mu, cov, i, j, include_prefactor=True, tol=1e-10, hbar=2)[source]¶ Returns the \(\langle i | \rho | j \rangle\) element of the density matrix of a Gaussian state defined by covariance matrix cov.
- Parameters
mu (array) – length-\(2N\) quadrature displacement vector
cov (array) – length-\(2N\) covariance matrix
i (list) – list of density matrix rows
j (list) – list of density matrix columns
include_prefactor (bool) – if
True
, the prefactor is automatically calculated and used to scale the result.tol (float) – tolerance for determining if displacement is negligible
hbar (float) – the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\).
- Returns
the density matrix element
- Return type
complex
-
density_matrix
(mu, cov, post_select=None, normalize=False, cutoff=5, hbar=2)[source]¶ Returns the density matrix of a (PNR post-selected) Gaussian state.
The resulting density matrix will have shape
\[\underbrace{D\times D \times \cdots \times D}_{2M}\]where \(D\) is the Fock space cutoff, and \(M\) is the number of non post-selected modes, i.e.
M = len(mu)//2 - len(post_select)
.Note that we use the Strawberry Fields convention for indexing the density matrix; the first two dimensions correspond to subsystem 1, the second two dimensions correspond to subsystem 2, etc. If post_select is None then the density matrix elements are calculated using the multidimensional Hermite polynomials which provide a significantly faster evaluation.
- Parameters
mu (array) – length-\(2N\) means vector in xp-ordering
cov (array) – \(2N\times 2N\) covariance matrix in xp-ordering
post_select (dict) – dictionary containing the post-selected modes, of the form
{mode: value}
. If post_select is None the whole non post-selected density matrix is calculated directly using (multidimensional) Hermite polynomials, which is significantly faster than calculating one hafnian at a time.normalize (bool) – If
True
, a post-selected density matrix is re-normalized.cutoff (dim) – the final length (i.e., Hilbert space dimension) of each mode in the density matrix.
hbar (float) – the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\).
- Returns
the density matrix of the Gaussian state
- Return type
np.array[complex]
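For illustration (a sketch under the same import assumption), post-selecting the second mode of a two-mode vacuum on zero photons leaves the first mode in the vacuum state:

>>> import numpy as np
>>> from thewalrus.quantum import density_matrix
>>> mu = np.zeros(4)                 # two-mode vacuum, xp-ordering
>>> cov = np.identity(4)             # vacuum covariance matrix for hbar = 2
>>> rho = density_matrix(mu, cov, post_select={1: 0}, cutoff=4)
>>> rho.shape                        # expected: (4, 4), a single non post-selected mode
>>> np.round(rho[0, 0].real, 6)      # expected: 1.0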
-
fock_tensor
(S, alpha, cutoff, choi_r=0.881373587019543, check_symplectic=True, sf_order=False, rtol=1e-05, atol=1e-08)[source]¶ Calculates the Fock representation of a Gaussian unitary parametrized by the symplectic matrix S and the displacements alpha up to cutoff in Fock space.
- Parameters
S (array) – symplectic matrix
alpha (array) – complex vector of displacements
cutoff (int) – cutoff in Fock space
choi_r (float) – squeezing parameter used for the Choi expansion
check_symplectic (boolean) – checks whether the input matrix is symplectic
sf_order (boolean) – reshapes the tensor so that it follows the sf ordering of indices
rtol (float) – the relative tolerance parameter used in np.allclose
atol (float) – the absolute tolerance parameter used in np.allclose
- Returns
Tensor containing the Fock representation of the Gaussian unitary
- Return type
(array)
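As a consistency sketch (same import assumption), the Fock representation of the trivial Gaussian unitary, i.e. the \(2\times 2\) identity symplectic with zero displacement, should be the identity matrix up to the cutoff:

>>> import numpy as np
>>> from thewalrus.quantum import fock_tensor
>>> S = np.identity(2)                    # identity symplectic matrix on one mode
>>> alpha = np.zeros(1, dtype=complex)    # no displacement
>>> U = fock_tensor(S, alpha, 3)
>>> np.round(U.real, 6)                   # expected: the 3x3 identity matrix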
-
probabilities
(mu, cov, cutoff, hbar=2.0, rtol=1e-05, atol=1e-08)[source]¶ Generate the Fock space probabilities of a Gaussian state up to a Fock space cutoff.
- Parameters
mu (array) – vector of means of length
2*n_modes
cov (array) – covariance matrix of shape
[2*n_modes, 2*n_modes]
cutoff (int) – cutoff in Fock space
hbar (float) – value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\)
rtol (float) – the relative tolerance parameter used in np.allclose
atol (float) – the absolute tolerance parameter used in np.allclose
- Returns
Fock space probabilities up to cutoff. The shape of this tensor is
[cutoff]*num_modes
.- Return type
(array)
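A minimal check (same import assumption): the single-mode vacuum has all of its photon-number probability in the zero-photon outcome.

>>> import numpy as np
>>> from thewalrus.quantum import probabilities
>>> mu = np.zeros(2)
>>> cov = np.identity(2)             # vacuum, hbar = 2
>>> p = probabilities(mu, cov, 3)
>>> p.shape                          # expected: (3,)
>>> np.round(p, 6)                   # expected: [1. 0. 0.]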
-
update_probabilities_with_loss
(etas, probs)[source]¶ Given a list of transmissivities and a tensor of probabilities, calculates an updated tensor of probabilities after loss is applied.
- Parameters
etas (list) – List of transmissivities describing the loss in each of the modes
probs (array) – Array of probabilities in the different modes
- Returns
List of loss-updated probabilities with the same shape as probs.
- Return type
array
-
update_probabilities_with_noise
(probs_noise, probs)[source]¶ Given a list of noise probability distributions for each of the modes and a tensor of probabilities, calculates an updated tensor of probabilities after noise is applied.
- Parameters
probs_noise (list) – List of probability distributions describing the noise in each of the modes
probs (array) – Array of probabilities in the different modes
- Returns
List of noise-updated probabilities with the same shape as probs.
- Return type
array
Utility functions¶
- reduced_gaussian(): Returns the vector of means and the covariance matrix of the specified modes.
- Xmat(): Returns the matrix \(X_n = \begin{bmatrix}0 & I_n\\ I_n & 0\end{bmatrix}\)
- Qmat(): Returns the \(Q\) Husimi matrix of the Gaussian state.
- Covmat(): Returns the Wigner covariance matrix in the \(xp\)-ordering of the Gaussian state.
- Amat(): Returns the \(A\) matrix of the Gaussian state whose hafnian gives the photon number probabilities.
- Beta(): Returns the vector of complex displacements and conjugate displacements.
- Means(): Returns the vector of real quadrature displacements.
- prefactor(): Returns the prefactor.
- find_scaling_adjacency_matrix(): Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean.
- mean_number_of_clicks(): Given an adjacency matrix, calculates the mean number of clicks.
- find_scaling_adjacency_matrix_torontonian(): Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it gives a mean number of clicks equal to c_mean when measured with threshold detectors.
- gen_Qmat_from_graph(): Returns the Qmat xp-covariance matrix associated to a graph with adjacency matrix \(A\) and with mean photon number \(n_{mean}\).
- photon_number_mean(): Calculates the mean photon number of mode j of a Gaussian state.
- photon_number_mean_vector(): Calculates the mean photon number of each of the modes in a Gaussian state.
- photon_number_covar(): Calculates the variance/covariance of the photon number distribution of a Gaussian state.
- photon_number_covmat(): Calculates the covariance matrix of the photon number distribution of a Gaussian state.
- is_valid_cov(): Checks if the covariance matrix is a valid quantum covariance matrix.
- is_pure_cov(): Checks if the covariance matrix is a valid quantum covariance matrix that corresponds to a quantum pure state.
- is_classical_cov(): Checks if the covariance matrix can be efficiently sampled.
- total_photon_num_dist_pure_state(): Calculates the total photon number distribution of a pure state with zero mean.
- gen_single_mode_dist(): Generates the photon number distribution of \(N\) identical single mode squeezed states.
- gen_multi_mode_dist(): Generates the total photon number distribution of single mode squeezed states with different squeezing values.
Details¶
-
Amat
(cov, hbar=2, cov_is_qmat=False)[source]¶ Returns the \(A\) matrix of the Gaussian state whose hafnian gives the photon number probabilities.
- Parameters
cov (array) – \(2N\times 2N\) covariance matrix
hbar (float) – the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\).
cov_is_qmat (bool) – if
True
, it is assumed thatcov
is in fact the Q matrix.
- Returns
the \(A\) matrix.
- Return type
array
-
Beta
(mu, hbar=2)[source]¶ Returns the vector of complex displacements and conjugate displacements.
- Parameters
mu (array) – length-\(2N\) means vector
hbar (float) – the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\).
- Returns
the expectation values \([\langle a_1\rangle, \langle a_2\rangle,\dots,\langle a_N\rangle, \langle a^\dagger_1\rangle, \dots, \langle a^\dagger_N\rangle]\)
- Return type
array
-
Covmat
(Q, hbar=2)[source]¶ Returns the Wigner covariance matrix in the \(xp\)-ordering of the Gaussian state. This is the inverse function of Qmat.
- Parameters
Q (array) – \(2N\times 2N\) Husimi Q matrix
hbar (float) – the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\).
- Returns
the Wigner covariance matrix in the \(xp\)-ordering.
- Return type
array
-
Means
(beta, hbar=2)[source]¶ Returns the vector of real quadrature displacements.
- Parameters
beta (array) – length-\(2N\) means bivector
hbar (float) – the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\).
- Returns
the quadrature expectation values \([\langle q_1\rangle, \langle q_2\rangle,\dots,\langle q_N\rangle, \langle p_1\rangle, \dots, \langle p_N\rangle]\)
- Return type
array
-
Qmat
(cov, hbar=2)[source]¶ Returns the \(Q\) Husimi matrix of the Gaussian state.
- Parameters
cov (array) – \(2N\times 2N\) \(xp\)-ordered Wigner covariance matrix
hbar (float) – the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\).
- Returns
the \(Q\) matrix.
- Return type
array
-
Xmat
(N)[source]¶ Returns the matrix \(X_n = \begin{bmatrix}0 & I_n\\ I_n & 0\end{bmatrix}\)
- Parameters
N (int) – positive integer
- Returns
\(2N\times 2N\) array
- Return type
array
-
fidelity
(mu1, cov1, mu2, cov2, hbar=2, rtol=1e-05, atol=1e-08)[source]¶ Calculates the fidelity between two Gaussian quantum states.
Note that if the covariance matrices correspond to pure states this function reduces to the modulus squared of the overlap of their state vectors. For the derivation see ‘Quantum Fidelity for Arbitrary Gaussian States’, Banchi et al.
- Parameters
mu1 (array) – vector of means of the first state
cov1 (array) – covariance matrix of the first state
mu2 (array) – vector of means of the second state
cov2 (array) – covariance matrix of the second state
hbar (float) – value of hbar in the uncertainty relation
rtol (float) – the relative tolerance parameter used in np.allclose
atol (float) – the absolute tolerance parameter used in np.allclose
- Returns
value of the fidelity between the two states
- Return type
(float)
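As a quick check (assuming fidelity is importable from thewalrus.quantum), the fidelity of a Gaussian state with itself should be 1; here for the single-mode vacuum:

>>> import numpy as np
>>> from thewalrus.quantum import fidelity
>>> mu = np.zeros(2)
>>> cov = np.identity(2)                       # vacuum, hbar = 2
>>> np.round(fidelity(mu, cov, mu, cov), 6)    # expected: 1.0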
-
find_scaling_adjacency_matrix
(A, n_mean)[source]¶ Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean.
- Parameters
A (array) – Adjacency matrix
n_mean (float) – Mean photon number of the Gaussian state
- Returns
Scaling parameter
- Return type
float
-
find_scaling_adjacency_matrix_torontonian
(A, c_mean)[source]¶ Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it gives a mean number of clicks equal to
c_mean
when measured with threshold detectors.- Parameters
A (array) – adjacency matrix
c_mean (float) – mean photon number of the Gaussian state
- Returns
scaling parameter
- Return type
float
-
fock_tensor
(S, alpha, cutoff, choi_r=0.881373587019543, check_symplectic=True, sf_order=False, rtol=1e-05, atol=1e-08)[source] Calculates the Fock representation of a Gaussian unitary parametrized by the symplectic matrix S and the displacements alpha up to cutoff in Fock space.
- Parameters
S (array) – symplectic matrix
alpha (array) – complex vector of displacements
cutoff (int) – cutoff in Fock space
choi_r (float) – squeezing parameter used for the Choi expansion
check_symplectic (boolean) – checks whether the input matrix is symplectic
sf_order (boolean) – reshapes the tensor so that it follows the sf ordering of indices
rtol (float) – the relative tolerance parameter used in np.allclose
atol (float) – the absolute tolerance parameter used in np.allclose
- Returns
Tensor containing the Fock representation of the Gaussian unitary
- Return type
(array)
-
gen_Qmat_from_graph
(A, n_mean)[source]¶ Returns the Qmat xp-covariance matrix associated to a graph with adjacency matrix \(A\) and with mean photon number \(n_{mean}\).
- Parameters
A (array) – a \(N\times N\)
np.float64
(symmetric) adjacency matrixn_mean (float) – mean photon number of the Gaussian state
- Returns
the \(2N\times 2N\) Q matrix.
- Return type
array
-
gen_multi_mode_dist
(s, cutoff=50, padding_factor=2)[source]¶ Generates the total photon number distribution of single mode squeezed states with different squeezing values.
- Parameters
s (array) – array of squeezing parameters
cutoff (int) – Fock cutoff
- Returns
total photon number distribution
- Return type
(array[int])
-
gen_single_mode_dist
(s, cutoff=50, N=1)[source]¶ Generate the photon number distribution of \(N\) identical single mode squeezed states.
- Parameters
s (float) – squeezing parameter
cutoff (int) – Fock cutoff
N (float) – number of squeezed states
- Returns
Photon number distribution
- Return type
(array)
-
is_classical_cov
(cov, hbar=2, atol=1e-08)[source]¶ Checks if the covariance matrix can be efficiently sampled.
- Parameters
cov (array) – a covariance matrix
hbar (float) – value of hbar in the uncertainty relation
- Returns
whether the given covariance matrix corresponds to a classical state
- Return type
(bool)
-
is_pure_cov
(cov, hbar=2, rtol=1e-05, atol=1e-08)[source]¶ Checks if the covariance matrix is a valid quantum covariance matrix that corresponds to a quantum pure state
- Parameters
cov (array) – a covariance matrix
hbar (float) – value of hbar in the uncertainty relation
rtol (float) – the relative tolerance parameter used in np.allclose
atol (float) – the absolute tolerance parameter used in np.allclose
- Returns
whether the given covariance matrix corresponds to a pure state
- Return type
(bool)
-
is_valid_cov
(cov, hbar=2, rtol=1e-05, atol=1e-08)[source]¶ Checks if the covariance matrix is a valid quantum covariance matrix.
- Parameters
cov (array) – a covariance matrix
hbar (float) – value of hbar in the uncertainty relation
rtol (float) – the relative tolerance parameter used in np.allclose
atol (float) – the absolute tolerance parameter used in np.allclose
- Returns
whether the given covariance matrix is a valid covariance matrix
- Return type
(bool)
-
loss_mat
(eta, cutoff)[source]¶ Constructs a binomial loss matrix with transmission eta up to n photons.
- Parameters
eta (float) – Transmission coefficient.
eta=0.0
corresponds to complete loss andeta=1.0
corresponds to no loss.cutoff (int) – cutoff in Fock space.
- Returns
\(n\times n\) matrix representing the loss.
- Return type
array
-
mean_number_of_clicks
(A)[source]¶ Given an adjacency matrix, this function calculates the mean number of clicks. For this to make sense the user must provide a matrix with singular values less than or equal to one. See Appendix A.3 of Banchi et al., https://arxiv.org/abs/1902.00462.
- Parameters
A (array) – rescaled adjacency matrix
- Returns
mean number of clicks
- Return type
float
-
photon_number_covar
(mu, cov, j, k, hbar=2)[source]¶ Calculate the variance/covariance of the photon number distribution of a Gaussian state.
Implements the covariance matrix of the photon number distribution of a Gaussian state according to the last two equations of Part II in ‘Multidimensional Hermite polynomials and photon distribution for polymode mixed light’, Dodonov et al.
\[\begin{split}\sigma_{n_j n_j} &= \frac{1}{2}\left(T_j^2 - 2d_j - \frac{1}{2}\right) + \left<\mathbf{Q}_j\right>\mathcal{M}_j\left<\mathbf{Q}_j\right>, \\ \sigma_{n_j n_k} &= \frac{1}{2}\mathrm{Tr}\left(\Lambda_j \mathbf{M} \Lambda_k \mathbf{M}\right) + \frac{1}{2}\left<\mathbf{Q}\right>\Lambda_j \mathbf{M} \Lambda_k\left<\mathbf{Q}\right>,\end{split}\]where \(T_j\) and \(d_j\) are the trace and the determinant of \(2 \times 2\) matrix \(\mathcal{M}_j\) whose elements coincide with the nonzero elements of matrix \(\mathbf{M}_j = \Lambda_j \mathbf{M} \Lambda_k\) while the two-vector \(\mathbf{Q}_j\) has the components \((q_j, p_j)\). \(2N \times 2N\) projector matrix \(\Lambda_j\) has only two nonzero elements: \(\left(\Lambda_j\right)_{jj} = \left(\Lambda_j\right)_{j+N,j+N} = 1\). Note that the convention for
mu
used here differs from the one used in Dodonov et al.; both conventions yield the same results in this particular case.- Parameters
mu (array) – vector of means of the Gaussian state using the ordering \([q_1, q_2, \dots, q_n, p_1, p_2, \dots, p_n]\)
cov (array) – the covariance matrix of the Gaussian state
j (int) – the j th mode
k (int) – the k th mode
hbar (float) – the
hbar
convention used in the commutation relation \([q, p]=i\hbar\)
- Returns
the covariance for the photon numbers at modes \(j\) and \(k\).
- Return type
float
-
photon_number_covmat
(mu, cov, hbar=2)[source]¶ Calculate the covariance matrix of the photon number distribution of a Gaussian state.
- Parameters
mu (array) – vector of means of the Gaussian state using the ordering \([q_1, q_2, \dots, q_n, p_1, p_2, \dots, p_n]\)
cov (array) – the covariance matrix of the Gaussian state
hbar (float) – the
hbar
convention used in the commutation relation \([q, p]=i\hbar\)
- Returns
the covariance matrix of the photon number distribution
- Return type
array
-
photon_number_mean
(mu, cov, j, hbar=2)[source]¶ Calculate the mean photon number of mode j of a Gaussian state.
- Parameters
mu (array) – vector of means of the Gaussian state using the ordering \([q_1, q_2, \dots, q_n, p_1, p_2, \dots, p_n]\)
cov (array) – the covariance matrix of the Gaussian state
j (int) – the j th mode
hbar (float) – the
hbar
convention used in the commutation relation \([q, p]=i\hbar\)
- Returns
the mean photon number in mode \(j\).
- Return type
float
-
photon_number_mean_vector
(mu, cov, hbar=2)[source]¶ Calculate the mean photon number of each of the modes in a Gaussian state
- Parameters
mu (array) – vector of means of the Gaussian state using the ordering \([q_1, q_2, \dots, q_n, p_1, p_2, \dots, p_n]\)
cov (array) – the covariance matrix of the Gaussian state
hbar (float) – the
hbar
convention used in the commutation relation \([q, p]=i\hbar\)
- Returns
the vector of means of the photon number distribution
- Return type
array
-
prefactor
(mu, cov, hbar=2)[source]¶ Returns the prefactor.
\[prefactor = \frac{e^{-\beta Q^{-1}\beta^*/2}}{n_1!\cdots n_m! \sqrt{|Q|}}\]- Parameters
mu (array) – length-\(2N\) vector of mean values \([\alpha,\alpha^*]\)
cov (array) – length-\(2N\) xp-covariance matrix
- Returns
the prefactor
- Return type
float
-
probabilities
(mu, cov, cutoff, hbar=2.0, rtol=1e-05, atol=1e-08)[source] Generate the Fock space probabilities of a Gaussian state up to a Fock space cutoff.
- Parameters
mu (array) – vector of means of length
2*n_modes
cov (array) – covariance matrix of shape
[2*n_modes, 2*n_modes]
cutoff (int) – cutoff in Fock space
hbar (float) – value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\)
rtol (float) – the relative tolerance parameter used in np.allclose
atol (float) – the absolute tolerance parameter used in np.allclose
- Returns
Fock space probabilities up to cutoff. The shape of this tensor is
[cutoff]*num_modes
.- Return type
(array)
-
reduced_gaussian
(mu, cov, modes)[source]¶ Returns the vector of means and the covariance matrix of the specified modes.
- Parameters
mu (array) – a length-\(2N\)
np.float64
vector of means.cov (array) – a \(2N\times 2N\)
np.float64
covariance matrix representing an \(N\) mode quantum state.modes (int of Sequence[int]) – indices of the requested modes
- Returns
where means is an array containing the vector of means, and cov is a square array containing the covariance matrix.
- Return type
tuple (means, cov)
-
total_photon_num_dist_pure_state
(cov, cutoff=50, hbar=2, padding_factor=2)[source]¶ Calculates the total photon number distribution of a pure state with zero mean.
- Parameters
cov (array) – \(2N\times 2N\) covariance matrix in xp-ordering
cutoff (int) – Fock cutoff
tol (float) – tolerance for determining if displacement is negligible
hbar (float) – the value of \(\hbar\) in the commutation
padding_factor (int) – expanded size of the photon distribution to avoid accumulation of errors
- Returns
Total photon number distribution
- Return type
(array)
-
update_probabilities_with_loss
(etas, probs)[source] Given a list of transmissivities and a tensor of probabilities, calculates an updated tensor of probabilities after loss is applied.
- Parameters
etas (list) – List of transmissivities describing the loss in each of the modes
probs (array) – Array of probabilities in the different modes
- Returns
List of loss-updated probabilities with the same shape as probs.
- Return type
array
-
update_probabilities_with_noise
(probs_noise, probs)[source] Given a list of noise probability distributions for each of the modes and a tensor of probabilities, calculates an updated tensor of probabilities after noise is applied.
- Parameters
probs_noise (list) – List of probability distributions describing the noise in each of the modes
probs (array) – Array of probabilities in the different modes
- Returns
List of noise-updated probabilities with the same shape as probs.
- Return type
array
Classical Sampling Algorithms¶
This submodule provides access to classical sampling algorithms for thermal states going through an interferometer specified by a real orthogonal matrix. The quantum state to be sampled is specified by a positive semidefinite real matrix and a mean photon number. The algorithm implemented here was first derived in
Saleh Rahimi-Keshari, Austin P Lund, and Timothy C Ralph. “What can quantum optics say about computational complexity theory?” Physical Review Letters, 114(6):060501, (2015).
For more precise details of the implementation see
Soran Jahangiri, Juan Miguel Arrazola, Nicolás Quesada, and Nathan Killoran. “Point processes with Gaussian boson sampling” Phys. Rev. E 101, 022134 (2020).
Summary¶
- rescale_adjacency_matrix_thermal(): Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean for thermal sampling.
- rescale_adjacency_matrix(): Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean.
- generate_thermal_samples(): Generates samples of the Gaussian state in terms of the mean photon number parameter ls and the interferometer O.
Code details¶
-
generate_thermal_samples
(ls, O, num_samples=1)[source]¶ Generates samples of the Gaussian state in terms of the mean photon number parameter ls and the interferometer O.
- Parameters
ls (array) – squashing parameters
O (array) – Orthogonal matrix representing the interferometer
num_samples – Number of samples to generate
- Returns
samples
- Return type
list[array]
-
rescale_adjacency_matrix
(A, n_mean, scale, check_positivity=True, check_symmetry=True, rtol=1e-05, atol=1e-08)[source]¶ Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean.
- Parameters
A (array) – Adjacency matrix, assumed to be positive semi-definite and real
n_mean (float) – Mean photon number of the Gaussian state
scale (float) – Determines whether to rescale the matrix for thermal sampling (scale = 1.0) or for squashed sampling (scale = 2.0)
check_positivity (bool) – Checks if the matrix A is positive semidefinite
check_symmetry (bool) – Checks if the matrix is symmetric
rtol – relative tolerance for the checks
atol – absolute tolerance for the checks
- Returns
rescaled eigenvalues and eigenvectors of the matrix A
- Return type
tuple(array,array)
-
rescale_adjacency_matrix_thermal
(A, n_mean, check_positivity=True, check_symmetry=True, rtol=1e-05, atol=1e-08)[source]¶ Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean for thermal sampling.
- Parameters
A (array) – Adjacency matrix, assumed to be positive semi-definite and real
n_mean (float) – Mean photon number of the Gaussian state
check_positivity (bool) – Checks if the matrix A is positive semidefinite
check_symmetry (bool) – Checks if the matrix is symmetric
rtol – relative tolerance for the checks
atol – absolute tolerance for the checks
- Returns
rescaled eigenvalues and eigenvectors of the matrix A
- Return type
tuple(array,array)
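The intended workflow, sketched below, is to rescale the adjacency matrix first and then pass the resulting eigenvalues and eigenvectors to the sampler. This sketch assumes the two functions above are importable from a thewalrus.csamples submodule and that the tuple returned by the rescaling step can be fed directly to generate_thermal_samples; adjust the import path to match your installation.

>>> import numpy as np
>>> from thewalrus.csamples import rescale_adjacency_matrix_thermal, generate_thermal_samples
>>> A = np.array([[2.0, 1.0], [1.0, 2.0]])    # real, positive semidefinite adjacency matrix
>>> ls, O = rescale_adjacency_matrix_thermal(A, 1.0)
>>> samples = generate_thermal_samples(ls, O, num_samples=5)
>>> len(samples)                              # expected: 5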
Symplectic Operations¶
Module name: thewalrus.symplectic
Contains some Gaussian operations and auxiliary functions.
Auxiliary functions¶
- expand(): Expands a Symplectic matrix S to act on the entire subsystem.
- expand_vector(): Returns the phase-space displacement vector associated to a displacement.
- reduced_state(): Returns the vector of means and the covariance matrix of the specified modes.
- is_symplectic(): Checks if matrix S is a symplectic matrix.
- sympmat(): Returns the matrix \(\Omega_n = \begin{bmatrix}0 & I_n\\ -I_n & 0\end{bmatrix}\)
Gaussian states¶
- vacuum_state(): Returns the vacuum state.
Gates and operations¶
- two_mode_squeezing(): Two-mode squeezing.
- interferometer(): Interferometer.
- loss(): Loss channel acting on a Gaussian state.
- mean_photon_number(): Calculates the mean photon number for a given one-mode state.
- beam_splitter(): Beam-splitter.
- rotation(): Rotation gate.
Code details¶
-
autonne
(A, rtol=1e-05, atol=1e-08, svd_order=True)[source]¶ Autonne-Takagi decomposition of a complex symmetric (not Hermitian!) matrix.
- Parameters
A (array) – square, symmetric matrix
rtol (float) – the relative tolerance parameter between
A
andA.T
atol (float) – the absolute tolerance parameter between
A
andA.T
svd_order (boolean) – whether to return result by ordering the singular values of
A
in descending (True
) or asceding (False
) order.
- Returns
(r, U), where r are the singular values, and U is the Autonne-Takagi unitary, such that \(A = U \diag(r) U^T\).
- Return type
tuple[array, array]
-
beam_splitter
(theta, phi)[source]¶ Beam-splitter.
- Parameters
theta (float) – transmissivity parameter
phi (float) – phase parameter
- Returns
symplectic-orthogonal transformation matrix of an interferometer with angles theta and phi
- Return type
array
-
expand
(S, modes, N)[source]¶ Expands a Symplectic matrix S to act on the entire subsystem.
- Parameters
S (array) – a \(2M\times 2M\) Symplectic matrix
modes (Sequence[int]) – the list of modes S acts on
N (int) – full size of the subsystem
- Returns
the resulting \(2N\times 2N\) Symplectic matrix
- Return type
array
-
expand_vector
(alpha, mode, N, hbar=2.0)[source]¶ Returns the phase-space displacement vector associated to a displacement.
- Parameters
alpha (complex) – complex displacement
mode (int) – mode index
N (int) – number of modes
- Returns
phase-space displacement vector of size 2*N
- Return type
array
-
interferometer
(U)[source]¶ Interferometer.
- Parameters
U (array) – unitary matrix
- Returns
symplectic transformation matrix
- Return type
array
-
is_symplectic
(S, rtol=1e-05, atol=1e-08)[source]¶ Checks if matrix S is a symplectic matrix
- Parameters
S (array) – a square matrix
- Returns
whether the given matrix is symplectic
- Return type
(bool)
-
loss
(mu, cov, T, mode, nbar=0, hbar=2)[source]¶ Loss channel acting on a Gaussian state.
- Parameters
mu (array) – means vector
cov (array) – covariance matrix
T (float) – transmission; 1 corresponds to no loss, 0 to full loss.
mode (int) – mode to act on
nbar (float) – thermal mean population (default 0)
hbar (float) – (default 2) the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\)
- Returns
the means vector and covariance matrix of the resulting state
- Return type
tuple[array]
-
mean_photon_number
(mu, cov, hbar=2)[source]¶ Calculates the mean photon number for a given one-mode state.
- Parameters
mu (array) – length-2 vector of means
cov (array) – \(2\times 2\) covariance matrix
hbar (float) – (default 2) the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\)
- Returns
the photon number expectation and variance
- Return type
tuple
-
reduced_state
(mu, cov, modes)[source]¶ Returns the vector of means and the covariance matrix of the specified modes.
- Parameters
modes (int of Sequence[int]) – indices of the requested modes
- Returns
means is an array containing the vector of means, and cov is a square array containing the covariance matrix. Both use the \(xp\)-ordering.
- Return type
tuple (means, cov)
-
rotation
(theta)[source]¶ Rotation gate.
- Parameters
theta (float) – rotation angle
- Returns
rotation matrix by angle theta
- Return type
array
-
squeezing
(r, phi)[source]¶ Squeezing. In Fock space this corresponds to \(\exp\left(\tfrac{1}{2} r e^{i \phi} \left(a^2 - a^{\dagger 2}\right)\right)\).
- Parameters
r (float) – squeezing magnitude
phi (float) – rotation parameter
- Returns
symplectic transformation matrix
- Return type
array
-
sympmat
(N)[source]¶ Returns the matrix \(\Omega_n = \begin{bmatrix}0 & I_n\\ -I_n & 0\end{bmatrix}\)
- Parameters
N (int) – positive integer
- Returns
\(2N\times 2N\) array
- Return type
array
-
two_mode_squeezing
(r, phi)[source]¶ Two-mode squeezing.
- Parameters
r (float) – squeezing magnitude
phi (float) – rotation parameter
- Returns
symplectic transformation matrix
- Return type
array
-
vacuum_state
(modes, hbar=2.0)[source]¶ Returns the vacuum state.
- Parameters
modes (int) – number of modes
hbar (float) – (default 2) the value of \(\hbar\) in the commutation relation \([\x,\p]=i\hbar\)
- Returns
the means vector and covariance matrix of the vacuum state
- Return type
list[array]
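Putting a few of these helpers together (a sketch assuming they are importable from thewalrus.symplectic): build a two-mode squeezer, check that it is symplectic, and apply it to the two-mode vacuum.

>>> import numpy as np
>>> from thewalrus.symplectic import two_mode_squeezing, vacuum_state, is_symplectic
>>> S = two_mode_squeezing(0.5, 0.0)      # 4x4 symplectic matrix
>>> is_symplectic(S)                      # expected: True
>>> mu, cov = vacuum_state(2)             # two-mode vacuum with hbar = 2
>>> cov_out = S @ cov @ S.T               # covariance matrix of the two-mode squeezed vacuum
>>> cov_out.shape                         # expected: (4, 4)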
Random matrices¶
This submodule provides access to utility functions to generate random unitary, symplectic and covariance matrices.
-
random_covariance
(N, hbar=2, pure=False, block_diag=False)[source]¶ Random covariance matrix.
- Parameters
N (int) – number of modes
hbar (float) – the value of \(\hbar\) to use in the definition of the quadrature operators \(x\) and \(p\)
pure (bool) – If True, a random covariance matrix corresponding to a pure state is returned.
block_diag (bool) – If True, uses passive Gaussian transformations that are orthogonal instead of unitary. This implies that the positions \(x\) do not mix with the momenta \(p\) and thus the covariance matrix is block diagonal.
- Returns
random \(2N\times 2N\) covariance matrix
- Return type
array
-
random_interferometer
(N, real=False)[source]¶ Random unitary matrix representing an interferometer. For more details, see [mezzadri2006].
- Parameters
N (int) – number of modes
real (bool) – return a random real orthogonal matrix
- Returns
random \(N\times N\) unitary distributed with the Haar measure
- Return type
array
-
random_symplectic
(N, passive=False, block_diag=False, scale=1.0)[source]¶ Random symplectic matrix representing a Gaussian transformation.
The squeezing parameters \(r\) for active transformations are randomly sampled from the standard normal distribution, while passive transformations are randomly sampled from the Haar measure. Note that for the symplectic group there is no notion of Haar measure since this group is not compact.
- Parameters
N (int) – number of modes
passive (bool) – If True, returns a passive Gaussian transformation (i.e., one that preserves photon number). If False, returns an active transformation.
block_diag (bool) – If True, uses passive Gaussian transformations that are orthogonal instead of unitary. This implies that the positions \(q\) do not mix with the momenta \(p\) and thus the symplectic operator is block diagonal
scale (float) – Sets the scale of the random values used as squeezing parameters. They will range from 0 to \(\sqrt{2}\texttt{scale}\)
- Returns
random \(2N\times 2N\) symplectic matrix
- Return type
array
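For example (assuming the random generators live in thewalrus.random and the checks in thewalrus.quantum and thewalrus.symplectic, as documented in this guide), randomly generated matrices should pass the corresponding validity checks:

>>> from thewalrus.random import random_covariance, random_symplectic
>>> from thewalrus.quantum import is_valid_cov
>>> from thewalrus.symplectic import is_symplectic
>>> cov = random_covariance(3)       # random 6x6 covariance matrix for 3 modes
>>> is_valid_cov(cov)                # expected: True
>>> S = random_symplectic(3)         # random 6x6 symplectic matrix
>>> is_symplectic(S)                 # expected: True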
Fock gradients of Gaussian gates¶
This module contains the Fock representation of the standard Gaussian gates as well as their gradients.
- Calculates the matrix elements of the displacement gate using a recurrence relation.
- Calculates the matrix elements of the squeezing gate using a recurrence relation.
- Calculates the Fock representation of the beamsplitter.
- Calculates the matrix elements of the two-mode squeezing gate recursively.
- Calculates the gradients of the displacement gate with respect to the displacement magnitude and angle.
- Calculates the gradients of the squeezing gate with respect to the squeezing magnitude and angle.
- Calculates the gradients of the beamsplitter gate with respect to the transmissivity angle and reflection phase.
- Calculates the gradients of the two-mode squeezing gate with respect to the squeezing magnitude and angle.
Reference implementations¶
This submodule provides access to pure-Python reference implementations of the hafnian and loop hafnian by summing over the set of perfect matching permutations or the set of single pair matchings.
For more details on these definitions see:
Andreas Björklund, Brajesh Gupt, and Nicolás Quesada. “A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer” arxiv:1805.12498 (2018)
Reference functions¶
- hafnian(): Returns the (loop) hafnian of the matrix \(M\).
Details¶
-
hafnian
(M, loop=False)[source]¶ Returns the (loop) hafnian of the matrix \(M\).
\(M\) can be any two-dimensional object of square shape \(m\) for which the elements
(i, j)
can be accessed via Python indexingM[i, j]
, and for which said elements have well defined multiplication__mul__
and addition __add__ special methods.For example, this includes nested lists and NumPy arrays.
In particular, one can use this function to calculate symbolic hafnians implemented as SymPy matrices.
- Parameters
M (array) – a square matrix
loop (boolean) – if set to
True
, the loop hafnian is returned
- Returns
The (loop) hafnian of M
- Return type
scalar
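Because the reference implementation only needs indexing, addition, and multiplication, it also accepts nested lists and symbolic entries; a small sketch, assuming the function is importable from thewalrus.reference and that SymPy is installed:

>>> import sympy
>>> from thewalrus.reference import hafnian as haf_ref
>>> haf_ref([[0, 1], [1, 0]])            # expected: 1, the single perfect matching
>>> a, b, c = sympy.symbols("a b c")
>>> M = sympy.Matrix([[a, b], [b, c]])
>>> haf_ref(M)                           # expected: b
>>> haf_ref(M, loop=True)                # expected: a*c + b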
Auxiliary functions¶
- memoized(): Decorator used to memoize a generator.
- partitions(): Returns the partitions of a tuple in terms of pairs and singles.
- spm(): Generator for the set of single pair matchings.
- pmp(): Generator for the set of perfect matching permutations.
- T(): Returns the \(n\) th telephone number.
Details¶
-
memoized
(f, maxsize=1000)[source]¶ Decorator used to memoize a generator.
The standard approach of using
functools.lru_cache
cannot be used, as it only memoizes the generator object, not the results of the generator.See https://stackoverflow.com/a/10726355/10958457 for the original implementation.
- Parameters
f (function or generator) – function or generator to memoize
maxsize (int) – positive integer that defines the maximum size of the cache
- Returns
the memoized function or generator
- Return type
function or generator
-
partitions
(s, singles=True, pairs=True)[source]¶ Returns the partitions of a tuple in terms of pairs and singles.
- Parameters
s (tuple) – a tuple representing the (multi-)set that will be partitioned. Note that it must hold that
len(s) >= 3
singles (boolean) – allow singles in the partitions
pairs (boolean) – allow pairs in the partitions
- Returns
a generator that iterates over all the single-double partitions of the tuple
- Return type
generator
-
spm
(s)[source]¶ Generator for the set of single pair matchings.
- Parameters
s (tuple) – an input tuple
- Returns
the set of single pair matching of the tuple s
- Return type
generator
-
pmp
(s)[source]¶ Generator for the set of perfect matching permutations.
- Parameters
s (tuple) – an input tuple
- Returns
the set of perfect matching permutations of the tuple s
- Return type
generator
-
T
(n)[source]¶ Returns the \(n\) th telephone number.
They satisfy the recursion relation \(T(n) = T(n-1)+(n-1)T(n-2)\) and \(T(0)=T(1)=1\).
See https://oeis.org/A000085 for more details.
- Parameters
n (integer) – index
- Returns
the nth telephone number
- Return type
int
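A quick check of the recursion (assuming T is importable from thewalrus.reference):

>>> from thewalrus.reference import T
>>> [T(n) for n in range(6)]    # expected: [1, 1, 2, 4, 10, 26], matching OEIS A000085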
The libwalrus C++ library¶
The libwalrus
C++ library is provided as a header-only library, libwalrus.hpp
, which can be included at the top of your source file:
#include <libwalrus.hpp>
The following templated functions are then available for use within the libwalrus
namespace.
Example¶
For instance, consider the following example, example.cpp, which calculates the loop hafnian of several all-ones matrices:
#include <iostream>
#include <complex>
#include <vector>
#include <libwalrus.hpp>
int main() {
int nmax = 10;
for (int m = 1; m <= nmax; m++) {
// create a 2m*2m all ones matrix
int n = 2 * m;
std::vector<std::complex<double>> mat(n * n, 1.0);
// calculate the hafnian
std::complex<double> hafval = libwalrus::loop_hafnian(mat);
// print out the result
std::cout << hafval << std::endl;
}
return 0;
};
This can be compiled using the GNU g++
compiler as follows,
$ g++ example.cpp -o example -std=c++11 -O3 -Wall -I/path/to/libwalrus.hpp -I/path/to/Eigen -fopenmp
where /path/to/libwalrus.hpp
is the path to the directory containing libwalrus.hpp
, /path/to/Eigen
is the path to the Eigen C++ linear algebra header library, and the -fopenmp
flag instructs the compiler to parallelize the compiled program using OpenMP.
Additionally, you may instruct Eigen to simply act as a ‘frontend’ to an installed LAPACKE library. To do so, you must pass additional flags:
$ g++ example.cpp -o example -std=c++11 -O3 -Wall -I/path/to/libwalrus.hpp -I/path/to/Eigen \
-fopenmp -DLAPACKE -llapacke -lblas
Below, the main interface (available as templated functions) as well as all auxiliary functions are summarized and listed.
Note
If compiling using the clang
compiler provided by Xcode on MacOS, OpenMP is natively supported; however, the libomp.so
library must be installed and linked to separately. One approach is to use the Homebrew package manager:
$ brew install eigen libomp
$ clang example.cpp -o example -O3 -Wall -fPIC -shared -Xpreprocessor -fopenmp -lomp \
-I/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/
Main interface¶
The following functions are intended as the main interface to the C++ libwalrus
library. Most support parallelization via OpenMP.
- Returns the hafnian of a matrix using the recursive algorithm described in Counting perfect matchings as fast as Ryser [5].
- Returns the hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.
- Returns the loop hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.
- Returns the hafnian of a matrix with repeated rows and columns using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.
- Returns the approximate hafnian of a matrix with non-negative entries by sampling over determinants. The higher the number of samples, the better the accuracy.
- Returns the Torontonian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.
- Returns the torontonian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498, with increased accuracy via the fsum summation algorithm.
- Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering.
- Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering, with increased accuracy via the fsum summation algorithm.
- Returns photon number statistics of a Gaussian state for a given covariance matrix as described in Multidimensional Hermite polynomials and photon distribution for polymode mixed light arxiv:9308033.
API¶
See here for full details on the C++ API
and the libwalrus
namespace.
C++ Library API¶
Class Hierarchy¶
File Hierarchy¶
-
- Directory include
- File eigenvalue_hafnian.hpp
- File hafnian_approx.hpp
- File hermite_multidimensional.hpp
- File libwalrus.hpp
- File permanent.hpp
- File recursive_hafnian.hpp
- File repeated_hafnian.hpp
- File stdafx.h
- File torontonian.hpp
- File version.hpp
- Directory include
Full API¶
Contents
Contains all algorithms for computing the hafnian, torontonian, and permanent of matrices.
Function libwalrus::hafnian_eigen(std::vector<std::complex<double>>&)
Function libwalrus::hafnian_recursive_quad(std::vector<std::complex<double>>&)
Function libwalrus::hafnian_recursive_quad(std::vector<double>&)
Function libwalrus::hafnian_rpt_quad(std::vector<std::complex<double>>&, std::vector<int>&)
Function libwalrus::hafnian_rpt_quad(std::vector<double>&, std::vector<int>&)
Function libwalrus::loop_hafnian_eigen(std::vector<std::complex<double>>&)
Function libwalrus::loop_hafnian_eigen(std::vector<double>&)
Function libwalrus::permanent_quad(std::vector<std::complex<double>>&)
Template Function libwalrus::renorm_hermite_multidimensional_cpp
Function libwalrus::torontonian_quad(std::vector<std::complex<double>>&)
Defined in File permanent.hpp
Defined in File stdafx.h
Defined in File stdafx.h
-
Byte
find2
(char *dst, Byte len, Byte *pos)¶ Given a string of length
len
, finds the positions in which it has a 1 and stores its position i, as 2*i and 2*i+1 in consecutive slots of the array pos.It also returns (twice) the number of ones in array dst
- Return
returns twice the number of ones in array
dst
.- Parameters
dst
: character array representing binary digits.len
: length of the arraydst
.pos
: resulting character array of length2*len
storing the indices at whichdst
contains the values 1.
Defined in File permanent.hpp
Defined in File eigenvalue_hafnian.hpp
-
template<typename
T
>
Tlibwalrus
::
do_chunk
(std::vector<T> &mat, int n, unsigned long long int X, unsigned long long int chunksize)¶ Calculates the partial sum \(X,X+1,\dots,X+\text{chunksize}\) using the Cygan and Pilipczuk formula for the hafnian of matrix
mat
.Note that if
X=0
andchunksize=pow(2.0, n/2)
, then the full hafnian is calculated.This function uses OpenMP (if available) to parallelize the reduction.
- Return
the partial sum for hafnian
- Parameters
mat
: vector representing the flattened matrixn
: size of the matrixX
: initial index of the partial sumchunksize
: length of the partial sum
Defined in File eigenvalue_hafnian.hpp
-
template<typename
T
>
Tlibwalrus
::
do_chunk_loops
(std::vector<T> &mat, std::vector<T> &C, std::vector<T> &D, int n, unsigned long long int X, unsigned long long int chunksize)¶ Calculates the partial sum \(X,X+1,\dots,X+\text{chunksize}\) using the Cygan and Pilipczuk formula for the loop hafnian of matrix
mat
.Note that if
X=0
andchunksize=pow(2.0, n/2)
, then the full loop hafnian is calculated.This function uses OpenMP (if available) to parallelize the reduction.
- Return
the partial sum for the loop hafnian
- Parameters
mat
: vector representing the flattened matrixC
: contains the diagonal elements of matrixz
D
: the diagonal elements of matrixz
, with every consecutive pair swapped (i.e.,C[0]==D[1]
,C[1]==D[0]
,C[2]==D[3]
,C[3]==D[2]
, etc.).n
: size of the matrixX
: initial index of the partial sumchunksize
: length of the partial sum
Defined in File torontonian.hpp
-
void
libwalrus
::
find2T
(char *dst, Byte len, Byte *pos, char offset)¶ Given a string of length
len
, finds the positions in which it has a 1 and stores its position i, as 2*i and 2*i+1 in consecutive slots of the array pos.It also returns (twice) the number of ones in array dst
- Return
returns twice the number of ones in array
dst
.- Parameters
dst
: character array representing binary digits.len
: length of the arraydst
.pos
: resulting character array of length2*len
storing the indices at whichdst
contains the values 1.
Defined in File repeated_hafnian.hpp
Defined in File eigenvalue_hafnian.hpp
-
template<typename
T
>
Tlibwalrus
::
hafnian
(std::vector<T> &mat)¶ Returns the hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.
- Return
hafnian of the input matrix
- Parameters
mat
: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File hafnian_approx.hpp
-
double
libwalrus
::
hafnian_approx
(std::vector<double> &mat, int &nsamples)¶ Returns the approximation to the hafnian of a matrix with non-negative entries.
The approximation follows a stochastic algorithm according to which the hafnian can be approximated as the sum of determinants of matrices. The accuracy of the approximation increases with the number of iterations.
This is a wrapper around the templated function
libwalrus::hafnian_nonneg
for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.In addition, this wrapper function automatically casts all matrices to type
long double
, allowing for greater precision than supported by Python and NumPy.- Return
the approximate hafnian
- Parameters
mat
: vector representing the flattened matrixnsamples
: positive integer representing the number of samples to perform
Defined in File eigenvalue_hafnian.hpp
-
std::complex<double>
libwalrus
::
hafnian_eigen
(std::vector<std::complex<double>> &mat) Returns the hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.
This is a wrapper around the templated function hafnian() for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices.
- Return
hafnian of the input matrix
- Parameters
mat
: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File eigenvalue_hafnian.hpp
-
double
libwalrus
::
hafnian_eigen
(std::vector<double> &mat) Returns the hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.
This is a wrapper around the templated function hafnian() for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.
- Return
hafnian of the input matrix
- Parameters
mat
: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File hafnian_approx.hpp
-
template<typename
T
>
long doublelibwalrus
::
hafnian_nonneg
(std::vector<T> &mat, int &nsamples)¶ Returns the approximation to the hafnian of a matrix with non-negative entries.
The approximation follows a stochastic algorithm according to which the hafnian can be approximated as the sum of determinants of matrices. The accuracy of the approximation increases with the number of iterations.
- Return
the approximate hafnian
- Parameters
mat
: vector representing the flattened matrixnsamples
: positive integer representing the number of samples to perform
Defined in File recursive_hafnian.hpp
- template<typename T> T libwalrus::hafnian_recursive(std::vector<T> &mat)
Returns the hafnian of a matrix using the recursive algorithm described in Counting perfect matchings as fast as Ryser [5], where it is labelled as ‘Algorithm 2’.
Modified with permission from https://github.com/eklotek/Hafnian.
- Return
hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File recursive_hafnian.hpp
- std::complex<double> libwalrus::hafnian_recursive_quad(std::vector<std::complex<double>> &mat)
Returns the hafnian of a matrix using the recursive algorithm described in Counting perfect matchings as fast as Ryser [5], where it is labelled as ‘Algorithm 2’.
Modified with permission from https://github.com/eklotek/Hafnian.
This is a wrapper around the templated function libwalrus::hafnian_recursive for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.
- Return
the hafnian
- Parameters
mat: vector representing the flattened matrix
Defined in File recursive_hafnian.hpp
- double libwalrus::hafnian_recursive_quad(std::vector<double> &mat)
Returns the hafnian of a matrix using the recursive algorithm described in Counting perfect matchings as fast as Ryser [5], where it is labelled as ‘Algorithm 2’.
Modified with permission from https://github.com/eklotek/Hafnian.
This is a wrapper around the templated function libwalrus::hafnian_recursive for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.
- Return
the hafnian
- Parameters
mat: vector representing the flattened matrix
Defined in File repeated_hafnian.hpp
- template<typename T> T libwalrus::hafnian_rpt(std::vector<T> &mat, std::vector<int> &rpt)
Returns the hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.
Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.
- Return
hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.
Defined in File repeated_hafnian.hpp
- std::complex<double> libwalrus::hafnian_rpt_quad(std::vector<std::complex<double>> &mat, std::vector<int> &rpt)
Returns the hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.
Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.
This is a wrapper around the templated function libwalrus::hafnian_rpt for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.
- Return
hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.
Defined in File repeated_hafnian.hpp
- double libwalrus::hafnian_rpt_quad(std::vector<double> &mat, std::vector<int> &rpt)
Returns the hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.
Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.
This is a wrapper around the templated function libwalrus::hafnian_rpt for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.
- Return
hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.
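Since the entry above already uses mat = [1], rpt = [6] as the encoding of the 6x6 all-ones matrix, the following sketch evaluates exactly that case; its hafnian is 5!! = 15. The include line is an assumption about the include path.

#include <iostream>
#include <vector>

#include "repeated_hafnian.hpp"  // assumed to be on the include path

int main() {
    // mat = [1] with rpt = [6] encodes the 6x6 all-ones matrix, hafnian = 15.
    std::vector<double> mat{1.0};
    std::vector<int> rpt{6};
    double haf = libwalrus::hafnian_rpt_quad(mat, rpt);
    std::cout << haf << std::endl;  // expected: 15
    return 0;
}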
Defined in File hermite_multidimensional.hpp
- template<typename T> T *libwalrus::hermite_multidimensional_cpp(const std::vector<T> &R, const std::vector<T> &y, const int &cutoff)
Returns the multidimensional Hermite polynomials \(H_k^{(R)}(y)\).
This implementation is based on the MATLAB code available at https://github.com/clementsw/gaussian-optics
- Parameters
R: a flattened vector of size \(n^2\), representing an \(n\times n\) symmetric matrix.
y: a flattened vector of size \(n\).
cutoff: highest number of photons to be resolved.
Defined in File hermite_multidimensional.hpp
- template<typename T> T *libwalrus::interferometer_cpp(const std::vector<T> &R, const int &cutoff)
Returns the matrix elements of an interferometer parametrized in terms of its R matrix.
- Parameters
R: a flattened vector of size \(n^2\), representing an \(n\times n\) symmetric matrix.
cutoff: highest number of photons to be resolved.
Defined in File repeated_hafnian.hpp
- std::vector<int> libwalrus::lin_to_multi(unsigned long long int linear_index, const std::vector<int> &maxes)
Converts a linear index to a multi-index. For example, to obtain the multi-index (i, j) of an element of a 2x2 matrix given the linear index 3 into the array storing the matrix, the maxes vector would be {1, 1} and this function would return (1, 0).
- Return
multi-index corresponding to the linear index
- Parameters
linear_index: the “flattened” index
maxes: a vector of integers, representing the maximum value of each index in the multi-index object.
Defined in File eigenvalue_hafnian.hpp
- template<typename T> T libwalrus::loop_hafnian(std::vector<T> &mat)
Returns the loop hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.
- Return
loop hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File eigenvalue_hafnian.hpp
- std::complex<double> libwalrus::loop_hafnian_eigen(std::vector<std::complex<double>> &mat)
Returns the loop hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.
This is a wrapper around the templated function libwalrus::loop_hafnian() for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices.
- Return
loop hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File eigenvalue_hafnian.hpp
- double libwalrus::loop_hafnian_eigen(std::vector<double> &mat)
Returns the loop hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.
This is a wrapper around the templated function loop_hafnian() for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.
- Return
loop hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
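For a concrete check, the loop hafnian of the 4x4 all-ones matrix (diagonal included) equals 10, the number of involutions on four elements; the sketch below evaluates it with the double-precision wrapper. The include line is an assumption about the include path.

#include <iostream>
#include <vector>

#include "eigenvalue_hafnian.hpp"  // assumed to be on the include path

int main() {
    // 4x4 all-ones matrix, diagonal included; its loop hafnian equals 10.
    std::vector<double> mat(16, 1.0);
    double lhaf = libwalrus::loop_hafnian_eigen(mat);
    std::cout << lhaf << std::endl;  // expected: 10 (up to numerical error)
    return 0;
}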
Defined in File repeated_hafnian.hpp
- template<typename T> T libwalrus::loop_hafnian_rpt(std::vector<T> &mat, std::vector<T> &mu, std::vector<int> &rpt)
Returns the loop hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.
Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.
- Return
loop hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
mu: a vector of length \(n\) representing the vector of means/displacement.
rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.
Defined in File repeated_hafnian.hpp
- std::complex<double> libwalrus::loop_hafnian_rpt_quad(std::vector<std::complex<double>> &mat, std::vector<std::complex<double>> &mu, std::vector<int> &rpt)
Returns the loop hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.
Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.
This is a wrapper around the templated function libwalrus::loop_hafnian_rpt for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.
- Return
loop hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
mu: a vector of length \(n\) representing the vector of means/displacement.
rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.
Defined in File repeated_hafnian.hpp
- double libwalrus::loop_hafnian_rpt_quad(std::vector<double> &mat, std::vector<double> &mu, std::vector<int> &rpt)
Returns the loop hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.
Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.
This is a wrapper around the templated function libwalrus::loop_hafnian_rpt for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.
- Return
loop hafnian of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
mu: a vector of length \(n\) representing the vector of means/displacement.
rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.
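A sketch of the repeated-row variant follows, under the assumption that mu plays the role of the matrix diagonal (consistent with its description as the vector of means/displacement). With mat = [1], mu = [1] and rpt = [4], the encoded matrix is the 4x4 all-ones matrix with unit diagonal, whose loop hafnian is 10. The include line is an assumption about the include path.

#include <iostream>
#include <vector>

#include "repeated_hafnian.hpp"  // assumed to be on the include path

int main() {
    // Encodes the 4x4 all-ones matrix with unit diagonal (see lead-in above).
    std::vector<double> mat{1.0};
    std::vector<double> mu{1.0};   // assumed to act as the diagonal entries
    std::vector<int> rpt{4};
    double lhaf = libwalrus::loop_hafnian_rpt_quad(mat, mu, rpt);
    std::cout << lhaf << std::endl;  // expected: 10, if the assumption above holds
    return 0;
}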
Defined in File permanent.hpp
- template<typename T> double libwalrus::perm_fsum(std::vector<T> &mat)
Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering, accumulating the sum with fsum.
- Return
permanent of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File permanent.hpp
- template<typename T> T libwalrus::permanent(std::vector<T> &mat)
Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering.
- Return
permanent of the input matrix
- Parameters
mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File permanent.hpp
- double libwalrus::permanent_fsum(std::vector<double> &mat)
Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering, accumulating the sum with fsum.
This is a wrapper around the templated function libwalrus::perm_fsum for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.
- Return
the permanent
- Parameters
mat: vector representing the flattened matrix
Defined in File permanent.hpp
- std::complex<double> libwalrus::permanent_quad(std::vector<std::complex<double>> &mat)
Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering.
This is a wrapper around the templated function libwalrus::permanent for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.
- Return
the permanent
- Parameters
mat: vector representing the flattened matrix
Defined in File permanent.hpp
- double libwalrus::permanent_quad(std::vector<double> &mat)
Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering.
This is a wrapper around the templated function libwalrus::permanent for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.
- Return
the permanent
- Parameters
mat: vector representing the flattened matrix
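As a small sanity check, the permanent of the symmetric 2x2 matrix [[1, 2], [2, 1]] is 1*1 + 2*2 = 5; the sketch below evaluates it with the double-precision wrapper. The include line is an assumption about the include path.

#include <iostream>
#include <vector>

#include "permanent.hpp"  // assumed to be on the include path

int main() {
    // Row-ordered 2x2 symmetric matrix [[1, 2], [2, 1]]; permanent = 5.
    std::vector<double> mat{1.0, 2.0, 2.0, 1.0};
    double p = libwalrus::permanent_quad(mat);
    std::cout << p << std::endl;  // expected: 5
    return 0;
}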
Defined in File eigenvalue_hafnian.hpp
- vec_complex libwalrus::powtrace(vec_complex &z, int n, int l)
Given a complex matrix \(z\) of dimensions \(n\times n\), it calculates \(Tr(z^j)~\forall~1\leq j\leq l\).
- Return
a vector containing the power traces of matrix z to power \(1\leq j \leq l\).
- Parameters
z: a flattened complex vector of size \(n^2\), representing an \(n\times n\) row-ordered matrix.
n: size of the matrix z.
l: maximum matrix power when calculating the power trace.
Defined in File eigenvalue_hafnian.hpp
- vec_double libwalrus::powtrace(vec_double &z, int n, int l)
Given a real matrix \(z\) of dimensions \(n\times n\), it calculates \(Tr(z^j)~\forall~1\leq j\leq l\).
- Return
a vector containing the power traces of matrix z to power \(1\leq j \leq l\).
- Parameters
z: a flattened real vector of size \(n^2\), representing an \(n\times n\) row-ordered matrix.
n: size of the matrix z.
l: maximum matrix power when calculating the power trace.
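The power traces of the identity are easy to verify by hand, since Tr(I^j) = n for every j; the sketch below uses a 2x2 identity. It assumes that vec_double is an alias for std::vector<double> (mirroring the vec_complex typedef listed further down) and that the header is on the include path.

#include <iostream>
#include <vector>

#include "eigenvalue_hafnian.hpp"  // assumed to be on the include path

int main() {
    // 2x2 identity matrix, row ordered; every power trace equals 2.
    std::vector<double> z{1.0, 0.0, 0.0, 1.0};
    auto traces = libwalrus::powtrace(z, 2, 3);  // assumes vec_double == std::vector<double>
    for (auto t : traces) {
        std::cout << t << " ";  // each printed value should be 2
    }
    std::cout << std::endl;
    return 0;
}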
Defined in File recursive_hafnian.hpp
- template<typename T> T libwalrus::recursive_chunk(std::vector<T> b, int s, int w, std::vector<T> g, int n)
Recursive hafnian solver.
Modified with permission from https://github.com/eklotek/Hafnian.
This function uses OpenMP (if available) to parallelize the reduction.
- Return
the hafnian
- Parameters
b:
s:
w:
g:
n:
Defined in File hermite_multidimensional.hpp
- template<typename T> T *libwalrus::renorm_hermite_multidimensional_cpp(const std::vector<T> &R, const std::vector<T> &y, const int &cutoff)
Returns the normalized multidimensional Hermite polynomials \(\tilde{H}_k^{(R)}(y)\).
This implementation is based on the MATLAB code available at https://github.com/clementsw/gaussian-optics
- Parameters
R: a flattened vector of size \(n^2\), representing an \(n\times n\) symmetric matrix.
y: a flattened vector of size \(n\).
cutoff: highest number of photons to be resolved.
Defined in File torontonian.hpp
- template<typename T> T libwalrus::torontonian(std::vector<T> &mat)
Computes the Torontonian of an input matrix.
If the output is NaN, the input matrix does not have a Torontonian with physical meaning.
This function uses OpenMP (if available) to parallelize the reduction.
- Return
Torontonian of the input matrix
- Parameters
mat: flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File torontonian.hpp
- template<typename T> double libwalrus::torontonian_fsum(std::vector<T> &mat)
Computes the Torontonian of an input matrix using the Shewchuk algorithm, a significantly more accurate summation algorithm.
Note that the fsum implementation currently only allows for double precision, and precludes use of OpenMP parallelization.
If the output is NaN, the input matrix does not have a Torontonian with physical meaning.
- Return
Torontonian of the input matrix
- Parameters
mat: flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File torontonian.hpp
- std::complex<double> libwalrus::torontonian_quad(std::vector<std::complex<double>> &mat)
Computes the Torontonian of an input matrix.
If the output is NaN, the input matrix does not have a Torontonian with physical meaning.
This is a wrapper around the templated function libwalrus::torontonian for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.
- Return
Torontonian of the input matrix
- Parameters
mat: flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
Defined in File torontonian.hpp
- double libwalrus::torontonian_quad(std::vector<double> &mat)
Computes the Torontonian of an input matrix.
If the output is NaN, the input matrix does not have a Torontonian with physical meaning.
This is a wrapper around the templated function libwalrus::torontonian for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices. In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.
- Return
Torontonian of the input matrix
- Parameters
mat: flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
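A minimal sketch follows, using the 2x2 zero matrix (a single vacuum mode), for which the terms in the defining sum cancel and the Torontonian should be 0. The include line is an assumption about the include path, and the expected value assumes the standard sign convention.

#include <iostream>
#include <vector>

#include "torontonian.hpp"  // assumed to be on the include path

int main() {
    std::vector<double> mat(4, 0.0);  // 2x2 zero matrix, row ordered
    double tor = libwalrus::torontonian_quad(mat);
    std::cout << tor << std::endl;  // expected: 0 (vacuum gives no clicks)
    return 0;
}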
Defined in File hermite_multidimensional.hpp
- int update_iterator(std::vector<int> &nextPos, std::vector<int> &jumpFrom, int &jump, const int &cutoff, const int &dim)
Updates the iterators needed for the calculation of the multidimensional Hermite functions, i.e., the indices necessary for knowing which elements are needed from the input vector y and matrix R.
- Parameters
nextPos: a vector of integers
jumpFrom: a vector of integers
jump: integer specifying whether to jump to the next index
cutoff: integer specifying the cutoff dimension of the R matrix
Defined in File permanent.hpp
- typedef unsigned long long int ullint
Defined in File stdafx.h
- typedef std::vector<double_complex> vec_complex
Defined in File stdafx.h