The Walrus Documentation

Release

0.13.0-dev


A library for the calculation of hafnians, Hermite polynomials, and Gaussian boson sampling.

Features

  • Fast calculation of hafnians, loop hafnians, and torontonians of general and certain structured matrices.

  • An easy-to-use interface for using the loop hafnian in Gaussian quantum state calculations.

  • Sampling algorithms for hafnians and torontonians of graphs.

  • Efficient classical methods for approximating the hafnian of non-negative matrices.

  • Easy-to-use implementations of the multidimensional Hermite polynomials, which can also be used to calculate hafnians of all reductions of a given matrix.

Getting started

To get The Walrus installed and running on your system, begin at the download and installation guide. Then, familiarise yourself with some background information on the hafnian and the computational algorithms.

To get started using The Walrus in your own code, have a look at the Python tutorial.

Finally, detailed documentation on the code and API is provided.

Support

If you are having issues, please let us know, either by email or by posting the issue on our Github issue tracker.

Authors

Nicolás Quesada, Brajesh Gupt, and Josh Izaac.

If you are doing research using The Walrus, please cite our paper:

Brajesh Gupt, Josh Izaac and Nicolás Quesada. The Walrus: a library for the calculation of hafnians, Hermite polynomials and Gaussian boson sampling. Journal of Open Source Software, 4(44), 1705 (2019)

License

The Walrus library is free and open source, released under the Apache License, Version 2.0.

Installation and Downloads

Installation

Pre-built binary wheels are available for the following platforms:

              macOS 10.6+    manylinux x86_64    Windows 64bit
Python 3.6         X                X                  X
Python 3.7         X                X                  X
Python 3.8         X                X                  X

To install, simply run

pip install thewalrus

Compiling from source

The Walrus depends on a number of Python packages. In addition, to compile the C++ extension, the following dependencies are required:

  • A C++11 compiler, such as g++ >= 4.8.1, clang >= 3.3, MSVC >= 14.0/2015

  • Eigen3 - a C++ header library for linear algebra.

On Debian-based systems, these can be installed via apt:

$ sudo apt install g++ libeigen3-dev

or using Homebrew on macOS:

$ brew install gcc eigen

Alternatively, you can download the Eigen headers manually:

$ mkdir ~/.local/eigen3 && cd ~/.local/eigen3
$ wget http://bitbucket.org/eigen/eigen/get/3.3.7.tar.gz -O eigen3.tar.gz
$ tar xzf eigen3.tar.gz eigen-eigen-323c052e1731/Eigen --strip-components 1
$ export EIGEN_INCLUDE_DIR=$HOME/.local/eigen3

Note that we export the environment variable EIGEN_INCLUDE_DIR so that The Walrus can find the Eigen3 header files (if not provided, The Walrus will by default look in /usr/include/eigen3 and /usr/local/include/eigen3).

Once all dependencies are installed, you can compile the latest stable version of The Walrus library as follows:

$ python -m pip install thewalrus --no-binary :all:

Alternatively, you can compile the latest development version by cloning the git repository, and installing using pip in development mode.

$ git clone https://github.com/XanaduAI/thewalrus.git
$ cd thewalrus && python -m pip install -e .

OpenMP

libwalrus uses OpenMP to parallelize both the permanent and the hafnian calculation. At the moment, this is only supported on Linux using the GNU g++ compiler, due to insufficient OpenMP support on Windows/MSVC and macOS/Clang.

Using LAPACK, OpenBLAS, or MKL

If you would like to take advantage of the highly optimized matrix routines of LAPACK, OpenBLAS, or MKL, you can optionally compile libwalrus so that Eigen uses these frameworks as backends. As a result, all calls in the libwalrus library to Eigen functions are silently substituted with calls to LAPACK/OpenBLAS/MKL.

For example, for LAPACK integration, make sure you have the lapacke C++ LAPACK bindings installed (sudo apt install liblapacke-dev in Ubuntu-based Linux distributions), and then compile with the environment variable USE_LAPACK=1:

$ USE_LAPACK=1 python -m pip install thewalrus --no-binary :all:

Alternatively, you may pass USE_OPENBLAS=1 to use the OpenBLAS library.

Software tests

To ensure that The Walrus library is working correctly after installation, the test suite can be run by navigating to the source code folder and running

$ make test

To run the low-level C++ test suite, Googletest will need to be installed. In Ubuntu-based distributions, this can be done as follows:

sudo apt-get install cmake libgtest-dev
cd /usr/src/googletest/googletest
sudo cmake .
sudo make
sudo cp libgtest* /usr/lib/
sudo mkdir /usr/local/lib/googletest
sudo ln -s /usr/lib/libgtest.a /usr/local/lib/googletest/libgtest.a
sudo ln -s /usr/lib/libgtest_main.a /usr/local/lib/googletest/libgtest_main.a

Alternatively, the latest Googletest release can be installed from source:

sudo apt install cmake
wget -qO - https://github.com/google/googletest/archive/release-1.8.1.tar.gz | tar -xz
cmake -D CMAKE_INSTALL_PREFIX:PATH=$HOME/googletest -D CMAKE_BUILD_TYPE=Release googletest-release-1.8.1
make install

If installing Googletest from source, make sure that the included headers and libraries are available on your include/library paths.

Documentation

The Walrus documentation is available online on Read the Docs.

To build it locally, you need to have the following packages installed, via a combination of pip and apt if on a Debian-based system:

$ sudo apt install pandoc doxygen
$ pip3 install sphinx sphinxcontrib-bibtex nbsphinx breathe exhale

To build the HTML documentation, go to the top-level directory and run the command

$ make doc

The documentation can then be found in the docs/_build/html/ directory.

Research and contribution

Research

If you are doing research using The Walrus, please cite our paper:

Brajesh Gupt, Josh Izaac and Nicolás Quesada. The Walrus: a library for the calculation of hafnians, Hermite polynomials and Gaussian boson sampling. Journal of Open Source Software, 4(44), 1705 (2019)

We are always open for collaboration, and can be contacted at research@xanadu.ai.

Contribution

If you would like to get involved with The Walrus, or you would like to contribute, simply fork our Github repository, and make a descriptive pull request.

Support

If you are having issues, please let us know, either by email or by posting the issue on our Github issue tracker.

We have a mailing list located at: support@xanadu.ai

Quick guide

This section provides a quick guide to find which function does what in The Walrus.

What you want                                                Corresponding function

Numerical hafnian                                            thewalrus.hafnian()
Symbolic hafnian                                             thewalrus.reference.hafnian()
Hafnian of a matrix with repeated rows and columns           thewalrus.hafnian_repeated()
Hafnians of all possible reductions of a given matrix        thewalrus.hafnian_batched()
Hafnian samples of a Gaussian state                          thewalrus.samples.hafnian_sample_state()
Torontonian samples of a Gaussian state                      thewalrus.samples.torontonian_sample_state()
Hafnian samples of a graph                                   thewalrus.samples.hafnian_sample_graph()
Torontonian samples of a graph                               thewalrus.samples.torontonian_sample_graph()
All probability amplitudes of a pure Gaussian state          thewalrus.quantum.state_vector()
All matrix elements of a general Gaussian state              thewalrus.quantum.density_matrix()
A particular probability amplitude of a pure Gaussian state  thewalrus.quantum.pure_state_amplitude()
A particular matrix element of a general Gaussian state      thewalrus.quantum.density_matrix_element()
The Fock representation of a Gaussian unitary                thewalrus.quantum.fock_tensor()

Note that all the hafnian functions listed above generalize to loop hafnians.

The hafnian

Section author: Nicolás Quesada <nicolas@xanadu.ai>

The hafnian

The hafnian of an \(n \times n\) symmetric matrix \(\bm{A} =\bm{A}^T\) is defined as [30]

\[\label{eq:hafA} \haf(\bm{A}) = \sum_{M \in \text{PMP}(n)} \prod_{\scriptscriptstyle (i, j) \in M} A_{i, j},\]

where \(\text{PMP}(n)\) stands for the set of perfect matching permutations of \(n\) (even) objects, i.e., permutations \(\sigma:[n]\rightarrow [n]\) such that \(\forall i:\sigma(2i-1)<\sigma(2i)\) and \(\forall i:\sigma(2i-1)<\sigma(2i+1)\).

It was so named by Eduardo R. Caianiello [31] “to mark the fruitful period of stay in Copenhagen (Hafnia in Latin).” [32]

The set PMP(\(n\)) used to define the hafnian contains

\[\label{eq:haf1} |\text{PMP}(n)|=(n-1)!! = 1 \times 3 \times 5 \times \ldots \times (n -1),\]

elements and thus as defined it takes \((n-1)!!\) additions of products of \(n/2\) numbers to calculate the hafnian of an \(n \times n\) matrix. Note that the diagonal elements of the matrix \(\bm{A}\) do not appear in the calculation of the hafnian and are (conventionally) taken to be zero.

For \(n=4\) the set of perfect matchings is

\[\label{eq:PMP4} \text{PMP}(4) = \big\{ (0,1)(2,3),\ (0,2)(1,3),\ (0,3)(1,2) \big\},\]

and the hafnian of a \(4 \times 4\) matrix \(\bm{B}\) is

\[\label{eq:hafB} \haf(\bm{B}) = B_{0,1} B_{2,3}+B_{0,2}B_{1,3}+B_{0,3} B_{1,2}.\]
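The expansion above can be checked with a short brute-force reference implementation that enumerates \(\text{PMP}(n)\) directly (a sketch for illustration only; the function names haf_ref and perfect_matchings are hypothetical, not part of The Walrus API):

```python
from math import prod

def perfect_matchings(nodes):
    """Yield all perfect matchings of the given list of vertices."""
    if not nodes:
        yield []
        return
    first, rest = nodes[0], nodes[1:]
    for k, partner in enumerate(rest):
        # pair `first` with `partner`, then match the remaining vertices
        for matching in perfect_matchings(rest[:k] + rest[k + 1:]):
            yield [(first, partner)] + matching

def haf_ref(A):
    """Hafnian by direct summation over PMP(n); (n-1)!! terms."""
    n = len(A)
    if n % 2:
        return 0  # odd-sized matrices have hafnian zero
    return sum(prod(A[i][j] for i, j in m)
               for m in perfect_matchings(list(range(n))))

B = [[0, 1, 2, 3],
     [1, 0, 4, 5],
     [2, 4, 0, 6],
     [3, 5, 6, 0]]
print(haf_ref(B))  # B01*B23 + B02*B13 + B03*B12 = 1*6 + 2*5 + 3*4 = 28
```

This mirrors the three-term expansion of \(\haf(\bm{B})\) given above, and scales as \((n-1)!!\), so it is only usable for very small matrices.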

The hafnian of an odd-sized matrix is defined to be zero; if \(\bm{A}=\bm{A}^T\) is \(M\)-dimensional and \(M\) is odd, then \(\haf(\bm{A}) = 0\). Note that, for convenience, we define the hafnian of an empty matrix, i.e., a matrix of dimension zero by zero, to be 1.

The hafnian is a homogeneous function of degree \(n/2\) in the matrix entries of an \(n \times n\) matrix \(\bm{A}\). This implies that

\[\haf(\mu \bm{A}) = \mu ^{n/2} \haf(\bm{A}),\]

where \(\mu\) is a scalar. More generally, if \(\bm{W} = \text{diag}(w_0,\ldots,w_{n-1})\), then it holds that (see proposition 4.2.3 of [30])

\[\haf( \bm{W} \bm{A} \bm{W} ) = \left(\prod_{i=0}^{n-1} w_i\right) \haf(\bm{A}).\]

The definition used to introduce the hafnian is rather algebraic and brings little intuition. To gain more insight, in the next section we introduce some graph-theoretic language and use it to present a more intuitive picture of what the hafnian “counts”.

Basics of graphs

A graph is an ordered pair \(G=(V,E)\) where \(V\) is the set of vertices, and \(E\) is the set of edges linking the vertices of the graph, i.e., if \(e \in E\) then \(e=(i,j)\) where \(i,j \in V\). In this section we consider graphs without loops (we relax this in the next section), and thus we do not allow edges \(e = (i,i)\) connecting a vertex to itself. A six-vertex graph is shown here

_images/graph.svg

the vertices are labelled \(V = \{1,2,3,4,5,6 \}\) and the edges are \(E=\{(1,4),(2,4),(2,5),(3,4),(3,5),(3,6) \}\).

A matching \(M\) is a subset of the edges in which no two edges share a vertex. An example of a matching is \(M=(1,4)(3,6)\), represented by the blue lines in the following figure

_images/matching.svg

In the figure above we know we have a matching because none of the highlighted edges shares a vertex.

A perfect matching is a matching that covers all the vertices of the graph, for example \(M=(1,4)(2,5)(3,6)\), again represented by the blue lines in the following figure

_images/pm.svg

The blue lines represent a perfect matching: they form a matching, i.e., the edges do not overlap on any vertex, and every vertex is covered by one and only one edge.

A complete graph is a graph where every vertex is connected to every other vertex. For a complete (loopless) graph with \(n\) vertices, the number of perfect matchings is precisely [30]

\[|\text{PMP}(n)|=(n-1)!! = 1 \times 3 \times \ldots \times (n-1).\]

where we use \(\text{PMP}(n)\) to indicate the set of perfect matchings introduced in the previous section, and the notation \(|V|\) to indicate the number of elements in the set \(V\). Note that this number is nonzero only for even \(n\), since for odd \(n\) there will always be one unmatched vertex.

In the following figure we illustrate the 3 perfect matchings of a complete graph with 4 vertices

_images/pmp4.svg

Perfect matchings and hafnians

An important question concerning a given graph \(G=(V,E)\) is the number of perfect matchings it has. One possible way to answer this question is to iterate over the perfect matchings of a complete graph and, at each step, check whether the given perfect matching of the complete graph is also a perfect matching of the given graph. A simple way to automate this process is to construct the adjacency matrix of the graph. The adjacency matrix \(\bm{A}\) of a graph \(G=(V,E)\) is a 0-1 matrix with \(\bm{A}_{i,j} = \bm{A}_{j,i}=1\) if and only if \((i,j) \in E\), and 0 otherwise. For the example graph in the previous section, the adjacency matrix is

\[\begin{split}\bm{A}' = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}.\end{split}\]

The number of perfect matchings of a (loopless) graph is simply given by the hafnian of its adjacency matrix

\[\text{haf}(\bm{A}) = \sum_{M \in \text{PMP}(n)} \prod_{\scriptscriptstyle (i,j) \in M} {A}_{i,j}.\]

For the graph in the previous section we can easily confirm that the perfect matching we found is the only perfect matching since

\[\text{haf}(\bm{A}') = 1.\]

The definition of the hafnian immediately generalizes to weighted graphs, where we assign a real or complex number to the entries of the symmetric matrix \(\bm{A}\).
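The count \(\text{haf}(\bm{A}') = 1\) can be reproduced with a short brute-force search that pairs each remaining vertex with one of its available neighbours (a sketch; the function name count_perfect_matchings is illustrative, not part of The Walrus API):

```python
def count_perfect_matchings(adj, vertices=None):
    """Count perfect matchings of a loopless graph given its adjacency matrix."""
    if vertices is None:
        vertices = list(range(len(adj)))
    if not vertices:
        return 1  # empty graph: exactly one (empty) perfect matching
    i, rest = vertices[0], vertices[1:]
    # match the first remaining vertex with every neighbour still available
    return sum(count_perfect_matchings(adj, [v for v in rest if v != j])
               for j in rest if adj[i][j])

# adjacency matrix of the six-vertex example graph (vertices relabelled 0..5)
A = [[0, 0, 0, 1, 0, 0],
     [0, 0, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1],
     [1, 1, 1, 0, 0, 0],
     [0, 1, 1, 0, 0, 0],
     [0, 0, 1, 0, 0, 0]]
print(count_perfect_matchings(A))  # 1, matching haf(A') = 1
```

Vertex 0 can only pair with 3, which forces 1 to pair with 4 and 2 with 5, so the unique perfect matching found by hand is also the only one the search finds.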

Special values of the hafnian

Here we list some special values of the hafnian for certain special matrices.

  • If the matrix \(\bm{A}\) has the following block form

\[\begin{split}\bm{A}_{\text{block}} = \left[\begin{array}{c|c} 0 & \bm{C} \\ \hline \bm{C}^T & 0 \\ \end{array}\right],\end{split}\]

then it holds that \(\text{haf}\left( \bm{A}_{\text{block}} \right) = \text{per}(\bm{C})\), where \(\text{per}\) is the permanent, the matrix function defined as [30]

\[\text{per}(\bm{C})=\sum_{\sigma\in S_n}\prod_{i=1}^n C_{i,\sigma(i)}.\]

The sum here extends over all elements \(\sigma\) of the symmetric group \(S_n\).

  • If \(\bm{A}_{\text{rank-one}} = \bm{e} \bm{e}^T\) is a rank-one matrix of size \(n\), then

\[\text{haf}\left( \bm{A}_{\text{rank-one}} \right) = (n-1)!! \prod_{i=0}^{n-1} e_i.\]

In particular, the hafnian of the all ones matrix is precisely \((n-1)!!\).

  • If \(\bm{A}_{\text{direct sum}} = \bm{A}_1 \oplus \bm{A}_2\) is a block diagonal matrix then

\[\text{haf}\left(\bm{A}_{\text{direct sum}}\right) = \text{haf}\left( \bm{A}_1 \oplus \bm{A}_2 \right) = \text{haf}\left( \bm{A}_1 \right) \text{haf}\left( \bm{A}_2 \right).\]

This identity simply expresses the fact that the number of perfect matchings of a graph that is made of two disjoint subgraphs is simply the product of the number of perfect matchings of the two disjoint subgraphs.
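The block-form identity above can be verified numerically with brute-force implementations of both the permanent and the hafnian (a sketch for illustration; the helper names per and haf are hypothetical, not The Walrus API):

```python
from itertools import permutations
from math import prod

def per(C):
    """Permanent via the definition: sum over the symmetric group S_n."""
    n = len(C)
    return sum(prod(C[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def haf(A):
    """Hafnian by summing over all perfect matchings (educational only)."""
    def pm(nodes):
        if not nodes:
            yield []
            return
        first, rest = nodes[0], nodes[1:]
        for k, p in enumerate(rest):
            for m in pm(rest[:k] + rest[k + 1:]):
                yield [(first, p)] + m
    return sum(prod(A[i][j] for i, j in m) for m in pm(list(range(len(A)))))

C = [[1, 2],
     [3, 4]]
# embed C in the block matrix [[0, C], [C^T, 0]]
A_block = [[0, 0, 1, 2],
           [0, 0, 3, 4],
           [1, 3, 0, 0],
           [2, 4, 0, 0]]
print(haf(A_block), per(C))  # both equal 1*4 + 2*3 = 10
```

The direct-sum rule can be checked the same way: the hafnian of a block-diagonal matrix factors into the product of the hafnians of its blocks.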

The loop hafnian

Section author: Nicolás Quesada <nicolas@xanadu.ai>

Graphs with loops

In the previous section we introduced the hafnian as a way of counting the number of perfect matchings of a loopless graph. The loop hafnian does the same for graphs with loops. Before defining the loop hafnian, let us introduce graphs with loops. A graph will still be an ordered pair \((V,E)\) of vertices and edges, but now we allow edges of the form \((i,i)\) where \(i \in V\). We can define adjacency matrices in the same way as before, i.e., if \((i,j) \in E\) then \(A_{i,j}=1\) and otherwise \(A_{i,j}=0\).

Consider the following graph

_images/loop_fig1.svg

for which we have \(V = \{1,2,3,4,5,6 \}\), the edges are \(E=\{(1,1),(1,4),(2,4),(2,5),(3,4),(3,5),(3,6),(5,5) \}\) and the adjacency matrix is

\[\begin{split}\bm{A}'' = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}.\end{split}\]

Note that there are now nonzero elements in the diagonal indicating that vertices 1 and 5 have loops.

Once we allow for loops we have more options for making perfect matchings. For example for the graph shown above there are now 2 perfect matchings, illustrated in blue in the following figure

_images/loop_fig2.svg

As was done before for the hafnian, we introduce the set of single-pair matchings \(\text{SPM}(n)\) as the set of perfect matchings of a complete graph with loops on \(n\) vertices. For \(n=4\) we have

\[\begin{split}\text{SPM}(4) = \big\{ (0,1)(2,3),\ (0,2)(1,3),\ (0,3)(1,2),\ (0,0)(1,1)(2,3), \ (0,1)(2,2)(3,3),\\ (0,0)(2,2)(1,3),\ (0,2)(1,1)(3,3),\ (0,0)(3,3)(1,2),\ (0,3)(1,1)(2,2),\ (0,0)(1,1)(2,2)(3,3)\big\}.\end{split}\]

For a graph with 4 vertices they are

_images/fig2.svg

Note that there is a one-to-one correspondence (a bijection) between the elements of \(\text{SPM}(n)\) and the matchings of a complete graph with \(n\) vertices. To see why this is the case, note that any element of \(\text{SPM}(n)\) can be converted into a matching by removing all the vertices that are loops. For example, to the element \((0,0)(2,2)(1,3)\) we associate the matching \((1,3)\). This mapping is one-to-one since, given a matching, we can always add as loops all the vertices that are not part of the matching. Using this bijection we conclude that the number of elements in \(\text{SPM}(n)\) is (see The On-Line Encyclopedia of Integer Sequences)

\[|\text{SPM}(n)| = T(n),\]

where \(T(n)\) is the \(n^{\text{th}}\) telephone number.

Note that, for a given graph size \(n\), there are in general many more single-pair matchings than there are perfect matchings. Their ratio grows like [6]

\[\frac{|\text{SPM}(n)|}{|\text{PMP}(n)|} = \frac{T(n)}{(n-1)!!} \sim e^{\sqrt{n}}.\]
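The telephone numbers obey the standard recurrence \(T(n) = T(n-1) + (n-1)\,T(n-2)\), which makes both counts easy to tabulate (a small sketch; the function names are illustrative only):

```python
def telephone(n):
    """T(n): number of matchings of a complete graph on n vertices,
    via the recurrence T(n) = T(n-1) + (n-1) * T(n-2)."""
    a, b = 1, 1  # T(0), T(1)
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b if n >= 1 else a

def double_factorial(n):
    """(n-1)!!: number of perfect matchings of n (even) vertices."""
    r = 1
    for k in range(1, n, 2):
        r *= k
    return r

for n in (2, 4, 6, 8):
    print(n, telephone(n), double_factorial(n))  # e.g. T(4) = 10, (4-1)!! = 3
```

Already at \(n=8\) there are 764 single-pair matchings against only 105 perfect matchings, consistent with the asymptotic growth of the ratio quoted above.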

The loop hafnian

We will also be interested in a generalization of the hafnian function that allows for adjacency matrices with loops. We call this new function the loop hafnian (lhaf). As explained before, the weights associated with the loops are stored in the diagonal elements of the adjacency matrix \(\bm{A}\) (which were previously ignored in the definition of the hafnian). To account for the possibility of loops, we generalize the set of perfect matching permutations PMP to the single-pair matchings SPM. Thus we define [6]

\[\lhaf(\bm{A}) = \sum_{M \in \text{SPM}(n)} \prod_{\scriptscriptstyle (i,j) \in M} A_{i,j}.\]

The lhaf of a \(4 \times 4\) matrix \(\bm{B}\) is

\[\begin{split}\lhaf(\bm{B}) =& B_{0,1} B_{2,3}+B_{0,2}B_{1,3}+B_{0,3} B_{1,2}\\ &+ B_{0,0} B_{1,1} B_{2,3}+B_{0,1} B_{2,2} B_{3,3}+B_{0,2}B_{1,1}B_{3,3}\nonumber\\ &+ B_{0,0} B_{2,2} B_{1,3}+B_{0,0}B_{3,3}B_{1,2}+B_{0,3} B_{1,1} B_{2,2}\nonumber\\ &+ B_{0,0} B_{1,1} B_{2,2} B_{3,3}. \nonumber\end{split}\]
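The sum over \(\text{SPM}(n)\) can be spelled out in code by first choosing which vertices are loops and then perfectly matching the rest (an educational sketch; lhaf_ref is a hypothetical name, not The Walrus API):

```python
from itertools import combinations
from math import prod

def perfect_matchings(nodes):
    """Yield all perfect matchings of the given list of vertices."""
    if not nodes:
        yield []
        return
    first, rest = nodes[0], nodes[1:]
    for k, p in enumerate(rest):
        for m in perfect_matchings(rest[:k] + rest[k + 1:]):
            yield [(first, p)] + m

def lhaf_ref(A):
    """Loop hafnian by summing over SPM(n): pick the looped vertices,
    then perfectly match the remaining ones."""
    n = len(A)
    total = 0
    for k in range(n + 1):
        for loops in combinations(range(n), k):
            rest = [v for v in range(n) if v not in loops]
            if len(rest) % 2:
                continue  # leftover vertices must pair up
            w = prod(A[v][v] for v in loops)
            total += w * sum(prod(A[i][j] for i, j in m)
                             for m in perfect_matchings(rest))
    return total

ones = [[1] * 4 for _ in range(4)]
print(lhaf_ref(ones))  # 10 = T(4), one term per element of SPM(4)
```

For the all-ones matrix every term contributes 1, so the result is just \(|\text{SPM}(4)| = T(4) = 10\); zeroing the diagonal recovers the plain hafnian value 3.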

Finally, let us comment on the scaling properties of the \(\haf\) and \(\lhaf\). Unlike the hafnian, the loop hafnian is not homogeneous in its matrix entries, i.e.

\[\begin{split}\haf(\mu \bm{A}) &= \mu ^{n/2} \haf(\bm{A}) \text{ but},\\ \lhaf(\mu \bm{A}) &\neq \mu ^{n/2} \lhaf(\bm{A}).\end{split}\]

where \(n\) is the size of the matrix \(\bm{A}\) and \(\mu \geq 0\). However, if we split the matrix \(\bm{A}\) into its diagonal part \(\bm{A}_{\text{diag}}\) and its off-diagonal part \(\bm{A}_{\text{off-diag}}\)

\[\bm{A} = \bm{A}_{\text{diag}}+\bm{A}_{\text{off-diag}},\]

then it holds that [6]

\[\lhaf(\sqrt{\mu} \bm{A}_{\text{diag}}+ \mu \bm{A}_{\text{off-diag}}) = \mu^{n/2} \lhaf(\bm{A}_{\text{diag}}+ \bm{A}_{\text{off-diag}}) =\mu^{n/2} \lhaf(\bm{A}).\]

One can use the loop hafnian to count the number of matchings of a loopless graph by simply calculating the loop hafnian of its adjacency matrix after setting its diagonal entries to one.

Finally, if \(\bm{A}_{\text{direct sum}} = \bm{A}_1 \oplus \bm{A}_2\) is a block diagonal matrix then

\[\text{lhaf}\left(\bm{A}_{\text{direct sum}}\right) = \text{lhaf}\left( \bm{A}_1 \oplus \bm{A}_2 \right) = \text{lhaf}\left( \bm{A}_1 \right) \text{lhaf}\left( \bm{A}_2 \right).\]

As for the hafnian, this identity tells us that the number of perfect matchings of a graph that is made of two disjoint subgraphs is simply the product of the numbers of perfect matchings of the two disjoint subgraphs.

The algorithms

Section author: Nicolás Quesada <nicolas@xanadu.ai>

The exact calculation of the number of perfect matchings (and the hafnian) for general graphs (symmetric matrices) has been investigated by several authors in recent years. An algorithm running in \(O(n^2 2^n)\) time was given by Björklund and Husfeldt [1] in 2008. In the same paper an algorithm running in \(O(1.733^n)\) time was presented using fast matrix multiplication. In the same year Kan [2] presented an algorithm running in time \(O(n 2^n)\) by representing the hafnian as a moment of the multinormal distribution. Koivisto [3] gave an \(O^*(\phi^n)\) time and space algorithm, where \(\phi = (1+\sqrt{5})/2 \approx 1.618\) is the Golden ratio. Nederlof [4] provided a polynomial space algorithm running in \(O(1.942^n)\) time.

Finally, Björklund [5] [6] and later Cygan and Pilipczuk [7] provided \(O(\text{poly}(n) 2^{n/2})\) time and polynomial space algorithms for the calculation of the general ring hafnian. These algorithms are believed to be close to optimal unless there are surprisingly efficient algorithms for the permanent. This is because these two algorithms can also be used to count (up to polynomial corrections) the number of perfect matchings of bipartite graphs with the same exponential growth as Ryser’s algorithm for the permanent [8]. Equivalently, if one could construct an algorithm that calculates hafnians in time \(O(\alpha^{n/2})\) with \(\alpha<2\), one could calculate permanents faster than Ryser’s algorithm (the fastest known algorithm for the permanent [9]). This is because of the identity

\[\begin{split}\text{haf} \left( \left[ \begin{array}{cc} 0 & \bm{W} \\ \bm{W}^T & 0 \\ \end{array} \right]\right) = \text{per}(\bm{W}),\end{split}\]

which states that a bipartite graph whose two parts each have \(n/2\) elements can always be thought of as a simple graph with \(n\) vertices. It should be noted that improving over Ryser’s algorithm is a well-known open problem: e.g., Knuth [10] asks for an arithmetic circuit for the permanent with fewer than \(2^n\) operations. Also note that, since the exact calculation of the permanent of \((0,1)\) matrices is in the #P-complete class [11], the above identity shows that deciding if the hafnian of a complex matrix is larger than a given value is also in the #P-complete class.

Tip

Permanents can be calculated directly using Ryser’s algorithm via thewalrus.perm().

Finally, note that the approximate methods developed for counting perfect matchings are aimed at (weighted-)graphs with real or positive entries [12][13]. Of particular note is the approximate algorithm introduced by Barvinok for matrices with non-negative entries [14] further analyzed in Ref. [15].

In what follows we provide a pseudo-code or equations that give a basic intuition for the algorithms implemented in this library. The reader is referred to the original literature for proof of correctness and complexity.

Reference algorithm

We provide a reference implementation of the hafnian and loop hafnian that iterates over the sets \(\text{PMP}(n)\) and \(\text{SPM}(n)\). These implementations are extremely slow for even moderately sized matrices and are provided only for educational purposes.

Tip

Implemented as thewalrus.reference.hafnian(). The optional argument loops=True can be used to calculate loop hafnians.

Recursive algorithm

In 2012 Björklund [5] introduced the following algorithm to calculate the hafnian of a matrix of size \(n\) (even) in any field \(\mathbb{F}\) in time \(O(n^4 \log(n) 2^{n/2})\)

_images/bjorklund.svg

In the pseudocode above the following notation is used:

  • \([n]=\{0,1,2,\ldots,n-1\}\) is the set of the first \(n\) integers,

  • \(|X|\) is used to denote the number of elements in the set \(X\), and

  • \(P(X)\) is used to denote the power set, which is the set of all the subsets of the set \(X\). Note that if \(X\) has \(|X|\) elements, then its power set has \(2^{|X|}\) elements.

Note that the subscripts and superscripts on the matrices \(\bm{B}\) do not denote components of the matrices but rather stages in the computation. Furthermore, these matrices contain polynomials in the symbolic variable \(r\), and the final answer is obtained by adding the coefficients of \(r^{n/2}\) in the polynomial \(g\) at each step. In the implementation provided in this library, the algorithm sketched above in pseudocode is turned into a recursion relation, hence the name we give it here.

Unfortunately, there is no known generalization of this algorithm to loop hafnians.

Tip

Implemented as thewalrus.hafnian(). This is the default algorithm for calculating hafnians.

Eigenvalue-trace algorithm

Based on the work of Cygan and Pilipczuk [7], Björklund et al. [6] introduced another algorithm in 2018 to calculate the hafnian of a real or complex matrix of size \(n\). This algorithm runs in time \(O(n^3 2^{n/2})\) and can be expressed more succinctly as an equation

\[\text{haf}(\bm{A}) = \sum_{S \in P([n/2])} (-1)^{n/2-|S|} f\left((\bm{A} \bm{X})_{S}\right),\]

where the matrix \(\bm{X}\) is defined as

\[\begin{split}\bm{X}= \bm{X}^T=\bm{X}^{-1}= \begin{bmatrix} \bm{0} & \mathbb{I} \\ \mathbb{I} & \bm{0} \end{bmatrix},\end{split}\]

\(\mathbb{I}\) is the identity matrix and the function \(f(\bm{C})\) takes a matrix \(\bm{C}\) and returns the coefficient of \(\eta^{n/2}\) in the following polynomial:

\[p_{n/2}(\eta \bm{C}) = \sum_{j=1}^{n/2} \frac{1}{j!}\left(\sum_{k=1}^{n/2} \frac{\text{tr}(\bm{C}^k)}{2k}\eta^k \right)^j.\]

This coefficient can be found by taking derivatives [16]

\[f(\bm{C}) = \frac{1}{(n/2)!} \left. \frac{d^{n/2}}{d\eta^{n/2}} p_{n/2}(\eta \bm{C}) \right|_{\eta=0} = \frac{1}{(n/2)!} \left. \frac{d^{n/2}}{d\eta^{n/2}} \frac{1}{\sqrt{\det(\mathbb{I} - \eta \bm{C})}} \right|_{\eta=0}.\]

The function \(p_{n/2}(\eta\bm{C})\) requires only the traces of the matrix powers \(\bm{C}^k\), hence the name of this algorithm.

Note that these traces can be written in terms of the sums of powers of the eigenvalues of the matrix \(\bm{C}\).
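The trace formula for the (loopless) hafnian can be sketched in a few lines of Python (an illustrative sketch, not The Walrus implementation; the names haf_trace and eta_coeff are hypothetical, and the sign \((-1)^{n/2-|S|}\) is the one that makes the \(n=2\) case \(\text{haf}\big(\begin{smallmatrix}0&b\\b&0\end{smallmatrix}\big)=b\) come out correctly):

```python
import itertools
import numpy as np

def eta_coeff(C, m):
    """Coefficient of eta^m in sum_j (1/j!) (sum_k tr(C^k)/(2k) eta^k)^j."""
    if C.shape[0] == 0:
        return 0.0
    s = np.zeros(m + 1)  # power-series coefficients of the inner sum
    Ck = np.eye(C.shape[0])
    for k in range(1, m + 1):
        Ck = Ck @ C
        s[k] = np.trace(Ck) / (2 * k)
    total = np.zeros(m + 1)
    term = np.zeros(m + 1)
    term[0] = 1.0
    for j in range(1, m + 1):  # accumulate s^j / j! as truncated series
        term = np.convolve(term, s)[: m + 1] / j
        total += term
    return total[m]

def haf_trace(A):
    """Hafnian via the trace formula: sum over subsets of the n/2 pairs."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    m = n // 2
    X = np.block([[np.zeros((m, m)), np.eye(m)],
                  [np.eye(m), np.zeros((m, m))]])
    total = 0.0
    for r in range(m + 1):
        for S in itertools.combinations(range(m), r):
            idx = list(S) + [i + m for i in S]  # keep the chosen pairs
            C = (A @ X)[np.ix_(idx, idx)]
            total += (-1) ** (m - r) * eta_coeff(C, m)
    return total

print(haf_trace(np.ones((4, 4))))  # 3.0 = (4-1)!!
```

The work per subset is a handful of matrix multiplications, giving the advertised \(O(\text{poly}(n)\, 2^{n/2})\) scaling over the \(2^{n/2}\) subsets.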

This formula generalizes to the loop hafnian as follows

\[\text{lhaf}(\bm{A}) = \sum_{S \in P([n/2])} (-1)^{n/2-|S|} f\left((\bm{A} \bm{X})_{S}\right),\]

where now the function \(f(\bm{C})\) takes a matrix \(\bm{C}\) and returns the coefficient of \(\eta^{n/2}\) in the following polynomial:

\[q(\eta, \bm{C}, \bm{v} ) = \sum_{j=1}^{n/2} \frac{1}{j!} \left(\sum_{k=1}^{n/2} \left( \frac{\text{Tr}(\bm{C}^k) }{(2k)} +\frac{\bm{v} (\bm{X} \bm{B})^{k-1} \bm{v}^T}{2} \right) \eta^k \right)^j.\]

where \(\bm{v} = \text{diag}(\bm{C})\) and we used the reduction operation (cf. notation) in terms of the set \(S\).

Tip

Implemented as thewalrus.hafnian() with the argument recursive=False. The loop hafnian calculation can be done by setting the option loops=True.

Repeated-moment algorithm

By mapping the calculation of moments of the multinormal distribution to the calculation of the hafnian, Kan [2] derived the following equation for the loop hafnian

\[\begin{split}\text{lhaf}\left( \text{vid}(\bm{B}_\bm{m},\bm{u}_\bm{m}) \right) &= \sum_{\nu_0=0}^{m_0} \ldots \sum_{\nu_{n-1}=0}^{m_{n-1}} \sum_{r=0}^{[m/2]} (-1)^{\sum_{i=0}^{n-1} \nu_i} {m_0 \choose \nu_0} \ldots {m_{n-1} \choose \nu_{n-1}} \frac{\left( \frac{\bm{h}^T \bm{B} \bm{h}}{2} \right)^r \left(\bm{h}^T \bm{u} \right)^{m-2r}}{r! (m-2r)!}, \\ \bm{h} &= \left(\tfrac{m_{0}}{2}-\nu_0,\ldots,\tfrac{m_{n-1}}{2}-\nu_{n-1} \right),\\ m&=m_0+\ldots+m_{n-1},\end{split}\]

where we used the reduction and vector in diagonal (\(\text{vid}\)) operations introduced in the notation section.

Note that if we pick \(m_i=1 \ \forall i\) and \(\bm{u} = \text{diag}(\bm{A})\) we recover the loop hafnian of \(\bm{A}\). In this case, the calculation of the loop hafnian requires \(O(n 2^n)\) operations, which is quadratically worse than Björklund’s algorithms. This formula is however useful when calculating hafnians and loop hafnians of matrices with repeated rows and columns, for which row and column \(i\) are repeated \(m_i\) times; it takes only \(O(n A G^n)\) operations to calculate the loop hafnian, where

\[\begin{split}A &= \frac{1}{n} \sum_{i=0}^{n-1} (m_i+1), \\ G &= \left( \prod_{i=0}^{n-1}(m_i+1) \right)^{1/n}.\end{split}\]

Compare this with Björklund’s algorithm, which requires \(O\left((A n)^3 \left(\sqrt{2}^{A}\right)^n\right)\) operations.
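Kan's formula reduces to a compact double sum in the simplest case. The sketch below specializes it to \(m_i=1\) and \(\bm{u}=\bm{0}\) (no repetitions, no loops), so it computes a plain hafnian; the name haf_kan is hypothetical, not The Walrus API:

```python
from itertools import product
from math import factorial

def haf_kan(B):
    """Hafnian via Kan's moment formula, specialized to m_i = 1 and u = 0.
    With u = 0 only the r = n/2 term of the inner sum survives."""
    n = len(B)
    if n % 2:
        return 0.0
    r = n // 2
    total = 0.0
    for nu in product((0, 1), repeat=n):  # each binomial sum runs over {0, 1}
        h = [0.5 - v for v in nu]         # h_i = m_i/2 - nu_i
        quad = sum(h[i] * B[i][j] * h[j] for i in range(n) for j in range(n))
        total += (-1) ** sum(nu) * (quad / 2) ** r / factorial(r)
    return total

K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(haf_kan(K4))  # 3.0: the three perfect matchings of K4
```

The outer sum runs over \(2^n\) sign vectors with \(O(n^2)\) work each, matching the \(O(n 2^n)\)-type scaling quoted above (up to the quadratic inner loop of this naive sketch).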

Tip

Implemented as thewalrus.hafnian_repeated(). The vector \(\bm{m}\) is passed in the variable rpt. The loop hafnian calculation can be done by passing the variable mu with the values of the vector \(\bm{u}\) and the option loops=True.

Batched algorithm

Using the multidimensional Hermite polynomials, and their connection to the matrix elements of Gaussian states and hafnians discussed in the next section, one can calculate the hafnians of all reductions of a matrix \(\bm{B} \in \mathbb{C}^{n \times n}\) up to a given cutoff. The reduction of matrix \(\bm{B}\) is precisely the matrix \(\bm{B}_{\bm{m}}\) obtained by repeating (or removing) the \(i^{\text{th}}\) row and column \(m_i\) times. Thus given a cutoff \(m_{\max}\), one can use the batched algorithm to calculate

\[\text{haf}\left( \bm{B}_\bm{m} \right)\]

for all \(0\leq m_i < m_{\max}\); the function thus returns a tensor with \((m_{\max})^n\) components.

One can also use this function to calculate the same loop hafnians that Kan’s algorithm returns

\[\text{lhaf}\left( \text{vid}(\bm{B}_\bm{m},\bm{u}_\bm{m}) \right)\]

if it is also provided with a vector \(\bm{u}\); this parameter is optional.

Internally, these hafnians are calculated by using the recursion relation of the multidimensional Hermite polynomials discussed in the next section.

Tip

Implemented as thewalrus.hafnian_batched(). The loop hafnian calculation can be done by passing the variable mu with the values of the vector \(\bm{u}\).

Approximate algorithm

In 1999 Barvinok [14] provided a surprisingly simple algorithm to approximate the hafnian of a symmetric matrix with non-negative entries. Let the matrix have entries \(A_{i,j}\), and define the random antisymmetric matrix \(\bm{W}\) with entries distributed according to \(W_{i,j} = -W_{j,i} \sim \mathcal{N}(0,A_{i,j})\), where \(\mathcal{N}(\mu,\sigma^2)\) is the normal distribution with mean \(\mu\) and variance \(\sigma^2\). The following then holds:

\[\text{haf}(\bm{A}) = \mathbb{E} \left( \text{det}(\bm{W}) \right)\]

where \(\mathbb{E}\) denotes the usual statistical expectation value, and \(\text{det}\) is the determinant. This formula has not been generalized to loop hafnians.
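The expectation can be estimated by Monte Carlo sampling (a sketch under stated assumptions: the name haf_barvinok and the sample count are illustrative, and the estimate is stochastic rather than exact):

```python
import numpy as np

def haf_barvinok(A, num_samples=50_000, seed=7):
    """Barvinok-style estimator: haf(A) ~ E[det(W)] for random antisymmetric W
    with W[i, j] = -W[j, i] drawn from N(0, A[i, j])."""
    A = np.asarray(A, dtype=float)
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # sample the strictly-upper-triangular entries, then antisymmetrize
    G = rng.normal(size=(num_samples, n, n)) * np.sqrt(A)
    W = np.triu(G, k=1)
    W = W - np.swapaxes(W, 1, 2)
    return np.linalg.det(W).mean()

est = haf_barvinok(np.ones((4, 4)))
print(est)  # close to haf = 3; accuracy improves with num_samples
```

For the all-ones \(4 \times 4\) matrix the exact value is \((4-1)!! = 3\), and the sample mean converges to it at the usual \(1/\sqrt{\text{num\_samples}}\) rate.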

Tip

Implemented as thewalrus.hafnian() with approx=True. Note that one needs to pass the number of samples used to estimate the expectation value in the formula above; this is specified with the argument num_samples.

Low-rank algorithm

If a symmetric matrix \(\bm{A} \in \mathbb{C}^{n \times n}\) is of low rank, it can be written as \(\bm{A} = \bm{G} \bm{G}^T\), where \(\bm{G} \in \mathbb{C}^{n \times r}\) is a rectangular matrix and \(r \leq n\) is the rank of \(\bm{A}\). One can then calculate the hafnian of the matrix \(\bm{A}\) in time and space complexity \({n+r-1 \choose r-1}\) by generalizing the result derived by Barvinok [17] for permanents to hafnians, as done in Appendix C of Björklund et al. [6]. The algorithm works by defining the following multivariate polynomial

\[q(x_0,\ldots,x_{r-1}) = \prod_{i=0}^{n-1} \sum_{j=0}^{r-1} G_{i,j} x_j.\]

Consider now the \(r\)-partitions of the integer \(n\); there are \({n+r-1 \choose r-1}\) such partitions. Let \(e=(e_0,\ldots,e_{r-1})\) be an even \(r\)-partition (i.e., an \(r\)-partition where each element is even), call \(\mathcal{E}_{n,r}\) the set of all even \(r\)-partitions, and define \(\lambda_e\) to be the coefficient of \(x_0^{e_0}\ldots x_{r-1}^{e_{r-1}}\) in the polynomial \(q(x_0,\ldots,x_{r-1})\). Then one can write the hafnian of \(\bm{A} = \bm{G} \bm{G}^T\) as

\[\text{haf}(\bm{A}) = \sum_{e \in \mathcal{E}_{n,r}} \lambda_e \prod_{i=0}^{r-1} (e_i - 1)!! .\]

Tip

Implemented as thewalrus.low_rank_hafnian(). This function takes as argument the matrix \(\bm{G} \in \mathbb{C}^{n \times r}\) and returns the value of the hafnian of the matrix \(\bm{A} = \bm{G} \bm{G}^T\).
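The algorithm can be sketched directly from the two equations above: expand \(q\) one row at a time while keeping a dictionary of monomial coefficients, then sum over the even partitions. This is an illustrative implementation, not the library's; use thewalrus.low_rank_hafnian() in practice.

```python
import numpy as np
from collections import defaultdict

def low_rank_hafnian(G):
    """Hafnian of A = G @ G.T via the even-partition coefficient formula."""
    n, r = G.shape
    # coeffs maps exponent tuples (e_0, ..., e_{r-1}) to the coefficient of
    # x_0^{e_0} ... x_{r-1}^{e_{r-1}} in q(x) = prod_i sum_j G[i, j] x_j
    coeffs = {(0,) * r: 1.0}
    for i in range(n):
        new = defaultdict(float)
        for expo, c in coeffs.items():
            for j in range(r):
                e = list(expo)
                e[j] += 1
                new[tuple(e)] += c * G[i, j]
        coeffs = new

    def odd_double_factorial(e):
        """(e - 1)!! for even e, with the convention (-1)!! = 1."""
        out, k = 1, e - 1
        while k > 1:
            out, k = out * k, k - 2
        return out

    total = 0.0
    for expo, c in coeffs.items():
        if all(e % 2 == 0 for e in expo):   # keep only even partitions
            prod = 1
            for e in expo:
                prod *= odd_double_factorial(e)
            total += c * prod
    return total
```

For a rank-one \(\bm{G}\) of all ones this reduces to \((n-1)!!\), the number of perfect matchings of the complete graph.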

Multidimensional Hermite polynomials

Section author: Nicolás Quesada <nicolas@xanadu.ai>

In this section we study the multidimensional Hermite polynomials originally introduced by C. Hermite in 1865. See Mizrahi [33], Berkowitz et al. [18] and Kok and Braunstein [34] for more details.

In the next section, where we discuss quantum Gaussian states, we will explain how these polynomials relate to hafnians and loop hafnians. For the moment just let us introduce them and study their formal properties.

Generating function

Given two complex vectors \(\alpha,\beta \in \mathbb{C}^\ell\) and a symmetric matrix \(\bm{B} = \bm{B}^T \in \mathbb{C}^{\ell \times \ell}\),

\[\begin{split}F_B(\alpha,\beta) &= \exp\left( \alpha \bm{B} \beta^T - \tfrac{1}{2}\beta \bm{B} \beta^T\right) = \sum_{\bm{m} \geq \bm{0}} \prod_{i=1}^{\ell} \frac{\beta_i^{m_i}}{m_i!} H_{\bm{m}}^{(\bm{B})}(\alpha),\\ K_B(\alpha,\beta) &= \exp\left( \alpha \beta^T - \tfrac{1}{2}\beta \bm{B} \beta^T\right) = \sum_{\bm{m} \geq \bm{0}} \prod_{i=1}^{\ell} \frac{\beta_i^{m_i}}{m_i!} G_{\bm{m}}^{(\bm{B})}(\alpha),\end{split}\]

where the notation \(\bm{m} \geq \bm{0}\) is used to indicate that the sum goes over all vectors in \(\mathbb{N}^{\ell}_0\) (the set of vectors of nonnegative integers of size \(\ell\)). These generating functions provide an implicit definition of the multidimensional Hermite polynomials. It is also straightforward to verify that \(H_{\bm{0}}^{(\bm{B})}(\alpha) = G_{\bm{0}}^{(\bm{B})}(\alpha) = 1\). Finally, one can connect the standard Hermite polynomials \(H_{\bm{m}}^{(\bm{B})}(\alpha)\) to the modified Hermite polynomials \(G_{\bm{m}}^{(\bm{B})}(\alpha)\) via

\[H_{\bm{m}}^{(\bm{B})}(\alpha) = G_{\bm{m}}^{(\bm{B})}(\alpha \bm{B}).\]

In the one dimensional case, \(\ell=1\), one can compare the generating function above with the ones for the “probabilists’ Hermite polynomials” \(He_n(x)\) and “physicists’ Hermite polynomials” \(H_n(x)\) to find

\[\begin{split}He_n(x) = H_{n}^{([1])}(x), \\ H_n(x) = H_{n}^{([2])}(x).\end{split}\]

Tip

The standard multidimensional Hermite polynomials are implemented as thewalrus.hermite_multidimensional(). The modified Hermite polynomials can be obtained by passing the extra argument modified=True.

Recursion relation

Based on the generating functions introduced in the previous section one can derive the following recursion relations

\[\begin{split}H_{\bm{m}+\bm{e}_i}^{(\bm{B})}(\alpha) - \left(\sum_{j=1}^\ell B_{i,j} \alpha_j \right) H_{\bm{m}}^{(\bm{B})}(\alpha) + \sum_{j=1}^\ell B_{i,j} m_j H_{\bm{m}-\bm{e}_j}^{(\bm{B})}(\alpha) &= 0,\\ G_{\bm{m}+\bm{e}_i}^{(\bm{B})}(\alpha) - \alpha_i G_{\bm{m}}^{(\bm{B})}(\alpha) + \sum_{j=1}^\ell B_{i,j} m_j G_{\bm{m}-\bm{e}_j}^{(\bm{B})}(\alpha) &= 0,\end{split}\]

where \(\bm{e}_j\) is the vector with zeros in all its entries except in the \(j^{\text{th}}\) entry, where it has a one.

From this recursion relation, or by Taylor expanding the generating function, one easily finds

\[\begin{split}H_{\bm{e}_i}^{(\bm{B})}(\alpha) &= \sum_{j=1}^\ell B_{ij} \alpha_j,\\ G_{\bm{e}_i}^{(\bm{B})}(\alpha) &= \alpha_i.\end{split}\]

Using this recursion relation one can calculate all the multidimensional Hermite polynomials up to a given cutoff.
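For \(\ell=1\) the recursion collapses to \(H_{m+1}^{([b])}(x) = b\,x\,H_{m}^{([b])}(x) - b\,m\,H_{m-1}^{([b])}(x)\), which is easy to check against the classical polynomials. The sketch below is illustrative; thewalrus.hermite_multidimensional() implements the general \(\ell\)-mode case.

```python
import numpy as np

def hermite_1d(b, x, cutoff):
    """H_m^{([b])}(x) for m = 0, ..., cutoff - 1 via the ell = 1 case of
    the recursion: H_{m+1} = b*x*H_m - b*m*H_{m-1}."""
    H = np.zeros(cutoff)
    H[0] = 1.0
    if cutoff > 1:
        H[1] = b * x
    for m in range(1, cutoff - 1):
        H[m + 1] = b * x * H[m] - b * m * H[m - 1]
    return H

# b = 1 yields the probabilists' polynomials He_n (e.g. He_2(x) = x^2 - 1)
# and b = 2 the physicists' polynomials H_n (e.g. H_2(x) = 4x^2 - 2).
```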

The connection between the multidimensional Hermite polynomials and pure Gaussian states was reported by Wolf [35], and later by Kramer, Moshinsky and Seligman [36]. This same connection was also pointed out by Doktorov, Malkin and Man’ko in the context of vibrational modes of molecules [37]. Furthermore, this connection was later generalized to mixed Gaussian states by Dodonov, Man’ko and Man’ko [26]. These matrix elements have the form

\[C \times \frac{H_{\bm{m}}^{(\bm{B})}(\alpha)}{\sqrt{\bm{m}!}} = C \times \frac{G_{\bm{m}}^{(\bm{B})}(\alpha \bm{B})}{\sqrt{\bm{m}!}}.\]

To obtain the standard or modified Hermite polynomials renormalized by the square root of the factorial of their index, \(\sqrt{\bm{m}!}\), one can pass the optional argument renorm=True.

Multidimensional Hermite polynomials and hafnians

By connecting the results in page 815 of Dodonov et al. [26] with the results in page 546 of Kan [2] one obtains the following relation between the hafnian and the multidimensional Hermite polynomials

\[H_{\bm{m}}^{(-\bm{B})}(\bm{0}) = G_{\bm{m}}^{(-\bm{B})}(\bm{0})= \text{haf}(\bm{B}_{\bm{m}}),\]

and moreover one can generalize it to

\[G_{\bm{m}}^{(-\bm{B})}(\alpha) = \text{lhaf}\left(\text{vid}(\bm{B}_{\bm{m}},\alpha_{\bm{m}})\right),\]

for loop hafnians. With these two identifications one can use the recursion relations of the multidimensional Hermite polynomials to calculate all the hafnians of the reductions of a given matrix up to a given cutoff.
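These identities give a simple recursive scheme for hafnians of reductions: applying the \(G\) recursion with matrix \(-\bm{B}\) and \(\alpha = \bm{0}\) gives \(G_{\bm{m}'+\bm{e}_i}^{(-\bm{B})}(\bm{0}) = \sum_j B_{i,j}\, m'_j\, G_{\bm{m}'-\bm{e}_j}^{(-\bm{B})}(\bm{0})\). A memoized sketch of this idea (illustrative only; The Walrus uses optimized kernels for these quantities):

```python
import numpy as np
from functools import lru_cache

def haf_reduction(B, m):
    """haf(B_m) computed as G_m^{(-B)}(0) using the Hermite recursion."""
    B = np.asarray(B, dtype=float)
    ell = B.shape[0]

    @lru_cache(maxsize=None)
    def G(mt):
        if sum(mt) == 0:
            return 1.0
        # write mt = prev + e_i and apply
        # G_{prev + e_i} = sum_j B[i, j] * prev[j] * G_{prev - e_j}
        i = next(k for k, v in enumerate(mt) if v > 0)
        prev = list(mt)
        prev[i] -= 1
        total = 0.0
        for j in range(ell):
            if prev[j] > 0:
                down = list(prev)
                down[j] -= 1
                total += B[i, j] * prev[j] * G(tuple(down))
        return total

    return G(tuple(m))
```

For example, with \(\bm{B} = \left[\begin{smallmatrix} a & b \\ b & c \end{smallmatrix}\right]\) and \(\bm{m}=(2,2)\) this returns \(ac + 2b^2\), the hafnian of the \(4\times 4\) reduction.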

With these observations, using the recursion relations for the Hermite polynomials and setting \(\bm{m}=\bm{1} - \bm{e}_i, \ \alpha = 0\), one easily derives the well-known Laplace expansion for the hafnian (cf. Sec. 4.1 of [30])

\[\text{haf}(\bm{B}) = \sum_{j \neq i} B_{i,j} \text{haf}(\bm{B}_{-i-j}),\]

where \(i\) is a fixed index and \(\bm{B}_{-i-j}\) is the matrix obtained from \(\bm{B}\) by removing rows and columns \(i\) and \(j\).

Gaussian States in the Fock basis

Section author: Nicolás Quesada <nicolas@xanadu.ai>

In this section we show how the matrix elements of so-called Gaussian states in the Fock basis are related to the hafnians and loop hafnians introduced in the previous sections.

Note

The results presented in this page use the conventions introduced in the notation page and the reader is strongly encouraged to get familiar with them.

Wigner functions and Gaussian states

The quantum state \(\varrho\) of an \(\ell\)-mode system can be uniquely characterized by its Wigner function [19]

\[\label{Eq: Wigner} W(\vec \alpha; \varrho) = \int \frac{\text{d}\vec \xi}{\pi^{2\ell}} \text{Tr}[\varrho \hat D(\vec \xi)] \exp\left(\vec \alpha^T \bm{\Omega} \ \vec \xi\right),\]

where \(\vec \alpha = (\alpha_0,\ldots, \alpha_{\ell-1},\alpha_0^*,\ldots, \alpha_{\ell-1}^*)\) and similarly \(\vec \xi = (\xi_0,\ldots, \xi_{\ell-1},\xi_0^*,\ldots, \xi_{\ell-1}^*)\) are bivectors of complex amplitudes in which the second half of the vector is the complex conjugate of the first half. The displacement operator is defined as \(\hat D(\vec \xi):=\exp(\vec{\xi}^T \bm{\Omega} \hat \zeta)\), where \(\bm{\Omega}= \left[ \begin{smallmatrix} 0 & \mathbb{I} \\ -\mathbb{I} & 0 \end{smallmatrix} \right]\) is the symplectic form and \(\hat\zeta\) is a vector of the mode creation and annihilation operators: with \(\ell\) the number of modes, \(\hat\zeta_j=\hat a_j\) and \(\hat \zeta_{\ell+j}=\hat a_j^\dagger\) for \(j=1,\ldots,\ell\). These bosonic creation and annihilation operators satisfy the canonical commutation relations \([\hat a_i, \hat a_j] = 0\) and \([\hat a_i, \hat a_j^\dagger] = \delta_{ij}\).

A quantum state is called Gaussian if its Wigner function is Gaussian [20]. Any multimode Gaussian state \(\varrho\) is completely parametrized by its first and second moments, namely the vector of means \(\vec{\beta}\) with components

\[\label{eq:WignerMeans} \vec \beta_j = \text{Tr}[\varrho\hat\zeta_j],\]

and the Wigner-covariance matrix \(\bm{\sigma}\) with entries

\[\label{eq:WignerCovMat} \bm{\sigma}_{jk} = \text{Tr}[\varrho \{\hat{\zeta}_j,\hat{\zeta}_k^\dagger \}]/2 - \vec \beta_j \vec \beta_k^*,\]

where \(\{x,y\} := xy +yx\) denotes the anti-commutator.

The variables \((\alpha_0,\ldots,\alpha_{\ell-1})\) are said to be complex normal distributed with mean \((\beta_0,\ldots,\beta_{\ell-1}) \in \mathbb{C}^{\ell}\) and covariance matrix \({\bm{\sigma}}\) [21] which furthermore needs to satisfy the uncertainty relation [22]

\[\label{eq:uncertainty} {\bm{\sigma}} + \bm{Z}/2 \geq 0,\]

where \(\bm{Z} = \left( \begin{smallmatrix} \mathbb{I} & 0\\ 0& -\mathbb{I} \end{smallmatrix} \right)\). The covariance matrix \(\bm{\sigma}\) is customarily parametrized in the following block form [21]

\[\begin{split}\bm{\sigma} = \left[\begin{array}{c|c} \bm{\Gamma} & \bm{C} \\ \hline \bm{C}^* & \bm{\Gamma}^* \end{array} \right],\end{split}\]

where \(\bm{\Gamma}\) is hermitian and positive definite, while \(\bm{C}\) is symmetric.

Gaussian states in the quadrature basis

Historically, Gaussian states are parametrized not in terms of the covariance matrix \(\bm{\sigma}\) of the complex amplitudes \(\alpha_j\), but rather in terms of its quadrature components, the canonical positions \(q_j\) and canonical momenta \(p_j\),

\[\alpha_j = \frac{1}{\sqrt{2 \hbar}} \left( q_j+ i p_j \right),\]

where \(\hbar\) is a positive constant. There are at least three conventions for the value of this constant, \(\hbar \in \{1/2,1,2 \}\).

It is convenient to write the following vector \(\bm{r} = (q_0,\ldots,q_{\ell-1},p_0,\ldots,p_{\ell-1})\) whose mean is related to the vector of means \(\vec \beta\) as

\[\begin{split}\vec \beta &= \frac{1}{\sqrt{\hbar}}\bm{R} \bm{r}, \\ \bm{\bar{r}} &= \sqrt{\hbar} \bm{R}^\dagger \vec \beta, \\ \bm{R} &= \frac{1}{\sqrt{2}}\begin{bmatrix} \mathbb{I} & i \mathbb{I} \\ \mathbb{I} & -i \mathbb{I} \end{bmatrix}.\end{split}\]

Similarly the complex normal covariance matrix \(\bm{\sigma}\) of the variables \((\alpha_0,\ldots,\alpha_{\ell-1})\) is related to the normal covariance matrix \(\bm{V}\) of the variables \(\bm{r} = (q_0,\ldots,q_{\ell-1},p_0,\ldots,p_{\ell-1})\) as

\[\begin{split}\bm{\sigma} &= \frac{1}{\hbar} \ \bm{R} \bm{V} \bm{R}^\dagger \\ \bm{V} &= {\hbar} \ \bm{R}^\dagger \bm{\sigma} \bm{R}.\end{split}\]

Tip

To convert between the complex covariance matrix \(\bm{\sigma}\) and the quadrature covariance matrix \(\bm{V}\) use the functions thewalrus.quantum.Qmat() and thewalrus.quantum.Covmat()
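The linear relation above can be written down directly. The sketch below assumes the \(\hbar=2\) convention and is illustrative only; note that the library's thewalrus.quantum.Qmat() works with the closely related Husimi matrix \(\bm{\Sigma} = \bm{\sigma} + \tfrac{1}{2}\mathbb{I}\) rather than \(\bm{\sigma}\) itself.

```python
import numpy as np

def complex_cov(V, hbar=2):
    """sigma = R V R^dagger / hbar: the complex covariance matrix of the
    amplitudes alpha in terms of the quadrature covariance matrix V."""
    ell = V.shape[0] // 2
    I = np.identity(ell)
    R = np.block([[I, 1j * I], [I, -1j * I]]) / np.sqrt(2)
    return R @ V @ R.conj().T / hbar

# Vacuum check: with hbar = 2 the vacuum has V = I, giving sigma = I/2.
```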

An important property of Gaussian states is that the reduced (or marginal) states of a global Gaussian state are also Gaussian. This implies that the reduced covariance matrix of a subsystem of a Gaussian state, together with the reduced vector of means, fully characterizes the reduced Gaussian state. The reduced covariance matrix for modes \(S = i_1,\ldots,i_n\) is obtained from the covariance matrix of the global state \(\bm{\sigma}\) or \(\bm{V}\) by keeping rows and columns \(i_1,\ldots,i_n\) and \(i_1+\ell,\ldots,i_n+\ell\). Similarly, one obtains the reduced vector of means by keeping only entries \(i_1,\ldots,i_n\) and \(i_1+\ell,\ldots,i_n+\ell\) of the original vector of means \(\vec \beta\) or \(\bm{\bar{r}}\). Using the notation previously introduced, one can succinctly write the covariance matrix of modes \(S\) as \(\bm{\sigma}_{(S)}\) or \(\bm{V}_{(S)}\), and similarly the vector of means as \(\vec{\beta}_{(S)}\) or \(\bm{\bar{r}}_{(S)}\).

Tip

To obtain the reduced covariance matrix and vector of means for a certain subset of the modes use thewalrus.quantum.reduced_gaussian().
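The marginalization rule just described is a pure index operation. A minimal sketch (mirroring, but not reproducing, thewalrus.quantum.reduced_gaussian()):

```python
import numpy as np

def reduce_gaussian(beta, sigma, modes):
    """Keep entries i and i + ell of the vector of means, and the matching
    rows and columns of the covariance matrix, for each mode i in `modes`."""
    ell = len(beta) // 2
    idx = np.concatenate([modes, np.asarray(modes) + ell])
    return beta[idx], sigma[np.ix_(idx, idx)]

# Example: reduce a 2-mode state to mode 1 alone.
mu = np.arange(4)
cov = np.arange(16).reshape(4, 4)
mu_r, cov_r = reduce_gaussian(mu, cov, [1])
```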

Note that for \(\bm{V}\) to be a valid quantum covariance matrix it needs to be symmetric and satisfy the uncertainty relation

\[\bm{V} + i \frac{\hbar}{2} \bm{\Omega} \geq 0.\]

Tip

To verify if a given quadrature covariance matrix is a valid quantum covariance matrix use the function thewalrus.quantum.is_valid_cov()

A Gaussian state is pure \(\varrho = \ket{\psi} \bra{\psi}\) if and only if \(\text{det}\left( \tfrac{2}{\hbar} \bm{V} \right) = 1\).

Tip

To verify if a given quadrature covariance matrix is a valid quantum covariance matrix and corresponds to a pure state use the function thewalrus.quantum.is_pure_cov()

Finally, there is a special subset of Gaussian states called classical whose covariance matrix satisfies

\[\bm{V} \geq \tfrac{\hbar}{2}\mathbb{I}.\]

This terminology is explained in the next section when sampling is discussed.

Tip

To verify if a given quadrature covariance matrix is a valid quantum covariance matrix and corresponds to a classical state use the function thewalrus.quantum.is_classical_cov()

Gaussian states in the Fock basis

In this section we use a generalization [23], [24] of the results of Hamilton et al. [25], providing an explicit expression for the Fock basis matrix elements \(\langle \bm{m} | \rho | \bm{n} \rangle\), \(\bm{n} = (n_0,\ldots, n_{\ell-1}), \bm{m} = (m_0,\ldots, m_{\ell-1})\), of an \(\ell\)-mode Gaussian state \(\rho\) with covariance matrix \(\bm{\sigma}\) and displacement vector \(\vec \beta\). Note that these matrix elements can also be calculated using multidimensional Hermite polynomials, as shown by Dodonov et al. [26]. Depending on how many of these elements are required, one may prefer to calculate loop hafnians or multidimensional Hermite polynomials. In particular, if one only needs a few matrix elements it is more advantageous to use the formulas derived below; if one requires all the matrix elements up to a certain Fock occupation cutoff, it is more efficient to use the methods of Dodonov et al., which are also implemented in this library.

We first define the following useful quantities:

\[\begin{split}\bm{X} &= \begin{bmatrix} 0 & \mathbb{I} \\ \mathbb{I} & 0 \end{bmatrix} , \\ \bm{\Sigma} &= \bm{\sigma} +\tfrac{1}{2} \mathbb{I}_{2\ell},\\ T &=\frac{\exp\left(-\tfrac{1}{2} \vec \beta^\dagger \bm{\Sigma}^{-1} \vec \beta \right)}{ \sqrt{\text{det}(\bm{\Sigma}) \prod_{s=1}^\ell n_s! m_s!}},\\ \bm{p} &= (n_0,\ldots,n_{\ell-1},m_0,\ldots,m_{\ell-1}).\end{split}\]

We refer to \(\bm{\Sigma}\) as the Husimi covariance matrix. We also define \(\bm{A} = \bm{X}\left[\mathbb{I}_{2\ell} - \bm{\Sigma}^{-1}\right]\) and \(\gamma = \vec \beta^\dagger \bm{\Sigma}^{-1}\), which enter the matrix elements below.

As shown in detail in Appendix A of Ref. [24], the Fock matrix elements of a Gaussian state \(\rho\) are given by the expression

\[\label{Eq: lhaf} \langle \bm{m} | \rho | \bm{n} \rangle = T \times \text{lhaf}( \text{vid}(\bm{A}_{\bm{p}}, \gamma_{ \bm{p}}) ),\]

where \(\text{lhaf}\) is the loop hafnian and \(\text{vid}\) is the vector in diagonal notation introduced in the notation section.

Note that one can also obtain the probability of detecting a certain photon number pattern \(\bm{n} = (n_0,\ldots,n_{\ell-1})\) by calculating

\[p(\bm{n}|\varrho) = \langle \bm{n} | \varrho | \bm{n} \rangle.\]

Tip

To obtain the matrix elements of a Gaussian state with quadrature covariance matrix \(\bm{V}\) and vector of means \(\bm{r}\) use the function thewalrus.quantum.density_matrix_element().

Tip

To obtain the Fock space density matrix of a Gaussian state with quadrature covariance matrix \(\bm{V}\) and vector of means \(\bm{r}\) use the function thewalrus.quantum.density_matrix().

In the case where the Gaussian state \(\varrho = |\psi \rangle \langle \psi|\) is pure then the matrix element

\[\langle \bm{m} | \varrho | \bm{n} \rangle = \langle \bm{m} | \psi \rangle \langle \psi| \bm{n} \rangle\]

factorizes into a product of two amplitudes. In Ref. [23] it was shown that the Fock amplitude of a pure Gaussian state is also given by a loop hafnian. Correspondingly, for pure states the matrix \(\bm{A}\) takes the block-diagonal form \(\bm{A} = \bm{B} \oplus \bm{B}^*\), where \(\bm{B}\) is a symmetric matrix.

Tip

To obtain the overlap of a pure Gaussian state, with quadrature covariance matrix \(\bm{V}\) and vector of means \(\bm{r}\), with a given Fock state \(\langle \bm{n}|\) use the function thewalrus.quantum.pure_state_amplitude().

Tip

To obtain the Fock space state vector (ket) of a pure Gaussian state with quadrature covariance matrix \(\bm{V}\) and vector of means \(\bm{r}\) use the function thewalrus.quantum.state_vector().

Gaussian states and threshold detection

In the last section we sketched how to obtain the probability that a certain photon-number outcome is obtained when a Gaussian state is measured with photon-number detectors. In this section we show how to obtain the analogous probability for the case of threshold detectors. These binary outcome detectors can only distinguish between the vacuum state and occupied states; thus, for a single mode they are described by the POVM elements

\[\hat{\Pi}_0^{(i)} = \ket{0_i} \bra{0_i} \text{ and } \hat{\Pi}_1^{(i)} = 1_i - \hat{\Pi}_0^{(i)},\]

where \(\ket{0_i}\) is the vacuum state of mode \(i\) and \(1_i\) is the identity in the Hilbert space of mode \(i\).

For an \(\ell\) mode Gaussian state with zero mean, the outcome of threshold detection in all of its modes is described by a bitstring vector \(\bm{n} = (n_0,\ldots,n_{\ell-1})\) and the probability of the event is given by Born’s rule according to

\[\begin{split}p(\bm{n}|\varrho) &= \text{Tr} \left( \prod_{i=1}^{\ell} \Pi_{n_i}^{(i)} \varrho \right) = \frac{\text{tor} \left(\bm{O}_{(\bm{n})} \right)}{\sqrt{\text{det}(\bm{\Sigma})}}, \\ \bm{O} &= \mathbb{I}_{2\ell} - \bm{\Sigma}^{-1},\end{split}\]

where \(\text{tor}\) is the Torontonian. For a \(2 \ell \times 2 \ell\) matrix \(\bm{O}\) the Torontonian is defined as

\[\text{tor}(\bm{O}) = \sum_{S \in P([\ell])} (-1)^{|S|} \frac{1}{\sqrt{\det\left(\mathbb{I} - \bm{O}_{(S)}\right)}},\]

where \(P([\ell])\) is the power set of \([\ell]\).

The torontonian can be thought of as a generating function for hafnians (cf. the trace algorithm formula in algorithms section).
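The defining sum can be evaluated directly for small \(\ell\). This is an \(O(2^\ell)\) illustrative sketch; thewalrus.tor() is the optimized implementation.

```python
import itertools
import numpy as np

def torontonian(O):
    """Torontonian of a 2*ell x 2*ell matrix via the subset sum over modes;
    O_(S) keeps rows and columns i and i + ell for each mode i in S."""
    ell = O.shape[0] // 2
    total = 0.0
    for k in range(ell + 1):
        for S in itertools.combinations(range(ell), k):
            idx = list(S) + [i + ell for i in S]
            sub = O[np.ix_(idx, idx)]
            # determinant of the empty (0x0) submatrix is 1 by convention
            det = np.linalg.det(np.identity(2 * k) - sub) if k else 1.0
            total += (-1) ** k / np.sqrt(det)
    return total

# Two thermal modes with mean photon number 1 have O = I/2 and tor(O) = 1.
```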

Tip

The torontonian is implemented as thewalrus.tor().

Gaussian Boson Sampling

Section author: Nicolas Quesada <nicolas@xanadu.ai>

What is Gaussian Boson Sampling

Gaussian Boson Sampling was introduced in Ref. [25] as a problem that could potentially demonstrate an exponential speed-up of a non-universal quantum computer over a classical computer. In the ideal scenario, a certain number \(N\) of squeezed states are sent into an \(M \times M\) interferometer and are then probed by photon-number resolving detectors. Hamilton et al. argue that, under certain conjectures and assumptions, it should be extremely inefficient for a classical computer to generate the samples that the quantum optical experiment just sketched generates by construction. Note that the setup described by Hamilton et al. prepares a special case of a Gaussian state. Here we consider Gaussian states that are arbitrary except for having zero mean.

In this section we describe a classical algorithm introduced in Ref. [27], that generates GBS samples in exponential time in the number of photons measured. This algorithm reduces the generation of a sample to the calculation of a chain of conditional probabilities, in which subsequent modes are interrogated as to how many photons should be detected conditioned on the detection record of previously interrogated modes. The exponential complexity of the algorithm stems from the fact that the conditional probabilities necessary to calculate a sample are proportional to hafnians of matrices of size \(2N\times 2N\), where \(N\) is the number of photons detected.

An exponential-time classical algorithm for GBS

As explained in the previous section the reduced or marginal density matrices of a Gaussian state are also Gaussian, and it is straightforward to compute their marginal covariance matrices given the covariance matrix of the global quantum state.

Let \(\bm{\Sigma}\) be the Husimi covariance matrix of the Gaussian state being measured with photon-number resolving detectors. Let \(S = (i_0,i_1,\ldots,i_{m-1})\) be a set of indices specifying a subset of the modes. In particular we write \(S=[k] = (0,1,2,\ldots, k-1)\) for the first \(k\) modes. We write \(\bm{\Sigma}_{(S)}\) to indicate the covariance matrix of the modes specified by the index set \(S\) and define \(\bm{A}^{S} := \bm{X} \left[\mathbb{I} - \left( \bm{\Sigma}_{(S)}\right)^{-1} \right]\).

As shown in the previous section, these matrices can be used to calculate photon-number probabilities as

\[p(\bm{N} = \bm{n}) = \frac{\text{haf}(\bm{A}^{S}_{(\bm{n})})}{ \bm{n}! \sqrt{\det(\bm{\Sigma}_{(S)})}}\]

where \(\bm{N}=\left\{N_{i_0},N_{i_1},\ldots,N_{i_{m-1}} \right\}\) is a random variable denoting the measurement outcome, \(\bm{n} = \left\{n_{i_0},n_{i_1},\ldots,n_{i_{m-1}} \right\}\) is the set of integers representing the actual outcomes of the photon-number measurements, and \(\bm{n}! = \prod_{j=0}^{m-1} n_{i_j}!\).

Now, we want to introduce an algorithm to generate samples of the random variable \(\{N_0,\ldots,N_{M-1}\}\) distributed according to the GBS probability distribution. To generate samples, we proceed as follows: First, we can always calculate the following probabilities

\[p(N_0=n_0) = \frac{\text{haf}\left(\bm{A}^{[1]}_{(n_0)}\right)}{ n_0! \sqrt{\det(\bm{\Sigma}_{([1])})}},\]

for \(n_0=0,1,\ldots, n_{\max}\), where \(n_{\max}\) is a cut-off on the maximum number of photons we can hope to detect in this mode. Having constructed said probabilities, we can always generate a sample for the number of photons in the first mode. This will fix \(N_0 = n_0\). Now we want to sample from \(N_1|N_0=n_0\). To this end, we use the definition of conditional probability

\[p(N_1=n_1|N_0=n_0)= \frac{p(N_1=n_1,N_0=n_0)}{p(N_0=n_0)} =\frac{\text{haf}\left(\bm{A}^{[2]}_{(n_0,n_1)}\right)}{n_0! n_1! \sqrt{\det(\bm{\Sigma}_{([2])})}} \frac{1}{p(N_0=n_0)}.\]

We can, as before, use this formula to sample the number of photons in the second mode conditioned on observing \(n_0\) photons in the first mode. Note that the factor \(p(N_0=n_0)\) is already known from the previous step. By induction, the conditional probability of observing \(n_k\) photons in the \(k\)-th mode, given observations of \(n_0,\ldots,n_{k-1}\) photons in the previous \(k\) modes, is

\[\begin{split}p(N_k=n_k|N_{k-1}=n_{k-1},\ldots,N_0=n_0) &= \frac{p(N_k=n_k,N_{k-1}=n_{k-1},\ldots,N_0=n_0) }{p(N_{k-1}=n_{k-1},\ldots,N_0=n_0)} \\ &=\frac{\text{haf}\left(\bm{A}^{[k+1]}_{(n_0,n_1,\ldots,n_k)}\right)}{n_0! n_1! \cdots n_k! \sqrt{\det(\bm{\Sigma}_{([k+1])})}} \frac{1}{p(N_{k-1}=n_{k-1},\ldots,N_0=n_0)},\end{split}\]

where \(p(N_{k-1}=n_{k-1},\ldots,N_0=n_0)\) has already been calculated from previous steps. The algorithm then proceeds as follows: for mode \(k\), we use the previous equation to sample the number of photons in that mode conditioned on the number of photons in the previous \(k\) modes. Repeating this sequentially for all modes produces the desired sample.
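The chain-rule structure of the sampler is independent of how the joint probabilities are computed. Below is an illustrative sketch with a user-supplied joint_prob callable (a hypothetical helper; for GBS it would evaluate the hafnian formula above):

```python
import numpy as np

def chain_rule_sample(joint_prob, num_modes, n_max, rng):
    """Draw (n_0, ..., n_{M-1}) mode by mode: at step k sample n_k from
    p(n_k | n_0, ..., n_{k-1}) = p(n_0, ..., n_k) / p(n_0, ..., n_{k-1}),
    truncating each mode at n_max photons."""
    sample = []
    prev = 1.0  # p(n_0, ..., n_{k-1}) carried over from the previous step
    for _ in range(num_modes):
        joint = np.array([joint_prob(sample + [n]) for n in range(n_max + 1)])
        cond = joint / prev
        cond = cond / cond.sum()  # renormalize over the truncated outcomes
        n_k = int(rng.choice(n_max + 1, p=cond))
        sample.append(n_k)
        prev = joint[n_k]
    return sample
```

Each step reuses the joint probability computed at the previous step, exactly as in the induction above.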

Tip

To generate samples from a Gaussian state specified by a quadrature covariance matrix use thewalrus.samples.generate_hafnian_sample().

Note that the above algorithm can also be generalized to states with nonzero means, for which one only needs to provide the vector of means via the optional argument mean.

Threshold detection samples

Note that the arguments presented in the previous section can also be generalized to threshold detection. In this case one simply needs to replace \(\text{haf} \to \text{tor}\) and \(\bm{A}^{[k+1]}_{(n_0,n_1,\ldots,n_k)} \to \bm{O}^{[k+1]}_{(n_0,n_1,\ldots,n_k)}\), where \(\bm{O}^{S} = \left[\mathbb{I} - \left( \bm{\Sigma}_{(S)}\right)^{-1} \right]\).

Tip

To generate threshold samples from a Gaussian state specified by a quadrature covariance matrix use thewalrus.samples.generate_torontonian_sample().

Sampling of classical states

In the previous section it was mentioned that states whose covariance matrix satisfies \(\bm{V} \geq \frac{\hbar}{2}\mathbb{I}\) are termed classical. This designation is due to the fact that for these states there exists a polynomial (cubic) time algorithm to generate photon number or threshold samples [28].

Tip

To generate photon number or threshold samples from a classical Gaussian state specified by a quadrature covariance matrix use thewalrus.samples.hafnian_sample_classical_state() or thewalrus.samples.torontonian_sample_classical_state().

Note that one can use this observation to sample from a probability distribution that is proportional to the permanent of a positive semidefinite matrix, for details on how this is done cf. Ref. [29].

Tip

To generate photon number samples from a thermal state parametrized by a positive semidefinite real matrix use the module thewalrus.csamples.

Notation

Section author: Nicolás Quesada <nicolas@xanadu.ai>

Matrices and vectors

In this section we introduce the notation that is used in the rest of the documentation.

We deal with so-called bivectors in which the second half of their components is the complex conjugate of the first half. Thus if \(\bm{u} = (u_1,\ldots, u_\ell) \in \mathbb{C}^{\ell}\) then \(\vec{\alpha} = (\bm{u},\bm{u}^*) = (u_1,\ldots,u_\ell,u_1^*,\ldots,u_\ell^*)\) is a bivector. We use uppercase letters for (multi)sets such as \(S = \{1,1,2\}\).

Following standard Python and C convention, we index starting from zero; thus the first \(\ell\) natural numbers are \([\ell]:=\{0,1,\ldots,\ell-1\}\). We define the \(\text{diag}\) function as follows: when acting on a vector \(\bm{u}\) it returns a square diagonal matrix with \(\bm{u}\) along its diagonal; when acting on a square matrix it returns the vector of its diagonal entries.

Finally, we define the vector in diagonal (\(\text{vid}\)) operation, that takes a matrix \(\bm{A}\) of size \(n \times n\) and a vector \(\bm{u}\) of size \(n\) and returns the matrix

\[\text{vid}(\bm{A},\bm{u}) = \bm{A} - \text{diag}(\text{diag}( \bm{A})) + \text{diag}(\bm{u}),\]

which is simply the matrix \(\bm{A}\) with the vector \(\bm{u}\) placed along its diagonal.
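In code, the \(\text{vid}\) operation is a single line of NumPy (an illustrative sketch):

```python
import numpy as np

def vid(A, u):
    """Return A with the vector u placed along its diagonal."""
    return A - np.diag(np.diag(A)) + np.diag(u)
```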

The reduction operation

It is very useful to have compact notation to deal with matrices that are constructed by removing or repeating rows and columns of a given primitive matrix. Imagine for example a given \(4 \times 4\) matrix

\[\begin{split}\bm{A} = \left( \begin{array}{cccc} A_{0,0} & A_{0,1} & A_{0,2} & A_{0,3} \\ A_{1,0} & A_{1,1} & A_{1,2} & A_{1,3} \\ A_{2,0} & A_{2,1} & A_{2,2} & A_{2,3} \\ A_{3,0} & A_{3,1} & A_{3,2} & A_{3,3} \\ \end{array} \right),\end{split}\]

and that you want to construct the following matrix

\[\begin{split}\bm{A}'= \left( \begin{array}{c|ccc||cc} A_{0,0} & A_{0,1} & A_{0,1} & A_{0,1} & A_{0,3} & A_{0,3} \\ \hline A_{1,0} & A_{1,1} & A_{1,1} & A_{1,1} & A_{1,3} & A_{1,3} \\ A_{1,0} & A_{1,1} & A_{1,1} & A_{1,1} & A_{1,3} & A_{1,3} \\ A_{1,0} & A_{1,1} & A_{1,1} & A_{1,1} & A_{1,3} & A_{1,3} \\ \hline \hline A_{3,0} & A_{3,1} & A_{3,1} & A_{3,1} & A_{3,3} & A_{3,3} \\ A_{3,0} & A_{3,1} & A_{3,1} & A_{3,1} & A_{3,3} & A_{3,3} \\ \end{array} \right),\end{split}\]

where the first row and column have been kept as they were, the second row and column have been repeated 3 times, the third row and column have been eliminated, and the fourth (and last) row and column have been repeated twice. To specify the number of repetitions (or the elimination) of a given row-column pair we simply give a vector of integers where each value tells us the number of repetitions, using the value 0 to indicate that a given row-column pair is removed. Thus, defining \(\bm{m}=(1,3,0,2)\), we find the reduction of \(\bm{A}\) by \(\bm{m}\) to be precisely the matrix in the last equation

\[\bm{A}' = \bm{A}_{\bm{m}}.\]

One can also define the reduction operation on vectors. For instance if \(\bm{u}=(u_0,u_1,u_2,u_3)\) and \(\bm{m}=(1,3,0,2)\) then \(\bm{u}_{\bm{m}} = (u_0,u_1,u_1,u_1,u_3,u_3)\).
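The reduction operation is a repeat-and-drop of indices, which NumPy expresses directly (an illustrative sketch):

```python
import numpy as np

def reduce_by(A, m):
    """Reduction of a matrix or vector A by the index vector m: row/column
    i is repeated m[i] times (and dropped when m[i] == 0)."""
    idx = np.repeat(np.arange(len(m)), m)
    if A.ndim == 1:
        return A[idx]
    return A[np.ix_(idx, idx)]
```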


Reduction on block matrices

When dealing with Gaussian states one typically encounters \(2\ell \times 2 \ell\) block matrices of the following form

\[\begin{split}\bm{A} = \left(\begin{array}{c|c} \bm{C} & \bm{D} \\ \hline \bm{E} & \bm{F} \\ \end{array} \right),\end{split}\]

where \(\bm{C},\bm{D},\bm{E},\bm{F}\) are of size \(\ell \times \ell\). Now imagine that one applies the reduction operation by a vector \(\bm{n} \in \mathbb{N}^{\ell}\) to each of the blocks. We introduce the following notation

\[\begin{split}\bm{A}_{(\bm{n})} = \left(\begin{array}{c|c} \bm{C}_{\bm{n}} & \bm{D}_{\bm{n}} \\ \hline \bm{E}_{\bm{n}} & \bm{F}_{\bm{n}} \\ \end{array} \right),\end{split}\]

where we have used the notation \((\bm{n})\) with the round brackets \(()\) to indicate that the reduction is applied to the blocks of the matrix \(\bm{A}\).

Similarly to block matrices, one can also define a reduction operator for bivectors. Thus if \(\vec \beta = (u_0,u_1,u_2,u_3,u_0^*,u_1^*,u_2^*,u_3^*)\) and \(\bm{m}=(1,3,0,2)\), then

\[\vec \beta_{(\bm{m} ) } = (u_0,u_1,u_1,u_1,u_3,u_3,u_0^*,u_1^*,u_1^*,u_1^*,u_3^*,u_3^*).\]

The reduction operation in terms of sets

A different way of specifying how many times a given row and column must be repeated is by giving a multiset that simply lists the rows and columns to be repeated. For example, the reduction index vector \(\bm{n} = (1,3,0,2)\) can alternatively be given as the multiset \(S=\{0,1,1,1,3,3 \}\), where the element 0 appears once to indicate that the first row and column are kept once, the index 1 appears three times to indicate that the second row and column are repeated three times, etcetera.
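Converting between the two descriptions is straightforward (an illustrative sketch):

```python
from collections import Counter

def multiset_to_index_vector(S, ell):
    """Count how many times each mode index appears in the multiset S."""
    counts = Counter(S)
    return [counts[i] for i in range(ell)]

def index_vector_to_multiset(n):
    """List each mode index i exactly n[i] times."""
    return [i for i, reps in enumerate(n) for _ in range(reps)]
```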

Similarly for matrices of even size for which the following partition makes sense

\[\begin{split}\bm{A} = \left(\begin{array}{c|c} \bm{C} & \bm{D} \\ \hline \bm{E} & \bm{F} \\ \end{array} \right),\end{split}\]

where \(\bm{A}\) is of size \(2\ell \times 2\ell\) and \(\bm{C},\bm{D},\bm{E},\bm{F}\) are of size \(\ell \times \ell\) we define

\[\begin{split}\bm{A}_{(S)} = \left(\begin{array}{c|c} \bm{C}_S & \bm{D}_S \\ \hline \bm{E}_S & \bm{F}_S \\ \end{array} \right).\end{split}\]

This implies that if the index \(i\) appears \(m_i\) times in \(S\), then the rows and columns \(i\) and \(i+\ell\) of \(\bm{A}\) will be repeated \(m_i\) times in \(\bm{A}_{(S)}\).

For instance if

\[\begin{split}\bm{A} = \left( \begin{array}{ccc|ccc} A_{0,0} & A_{0,1} & A_{0,2} & A_{0,3} & A_{0,4} & A_{0,5} \\ A_{1,0} & A_{1,1} & A_{1,2} & A_{1,3} & A_{1,4} & A_{1,5} \\ A_{2,0} & A_{2,1} & A_{2,2} & A_{2,3} & A_{2,4} & A_{2,5} \\ \hline A_{3,0} & A_{3,1} & A_{3,2} & A_{3,3} & A_{3,4} & A_{3,5} \\ A_{4,0} & A_{4,1} & A_{4,2} & A_{4,3} & A_{4,4} & A_{4,5} \\ A_{5,0} & A_{5,1} & A_{5,2} & A_{5,3} & A_{5,4} & A_{5,5} \\ \end{array} \right),\end{split}\]

and \(S=\{0,0,2,2,2\}\) one finds

\[\begin{split}\bm{A}_{(S)} = \left( \begin{array}{cc|ccc|cc|ccc} A_{0,0} & A_{0,0} & A_{0,2} & A_{0,2} & A_{0,2} & A_{0,3} & A_{0,3} & A_{0,5} & A_{0,5} & A_{0,5} \\ A_{0,0} & A_{0,0} & A_{0,2} & A_{0,2} & A_{0,2} & A_{0,3} & A_{0,3} & A_{0,5} & A_{0,5} & A_{0,5} \\ \hline A_{2,0} & A_{2,0} & A_{2,2} & A_{2,2} & A_{2,2} & A_{2,3} & A_{2,3} & A_{2,5} & A_{2,5} & A_{2,5} \\ A_{2,0} & A_{2,0} & A_{2,2} & A_{2,2} & A_{2,2} & A_{2,3} & A_{2,3} & A_{2,5} & A_{2,5} & A_{2,5} \\ A_{2,0} & A_{2,0} & A_{2,2} & A_{2,2} & A_{2,2} & A_{2,3} & A_{2,3} & A_{2,5} & A_{2,5} & A_{2,5} \\ \hline A_{3,0} & A_{3,0} & A_{3,2} & A_{3,2} & A_{3,2} & A_{3,3} & A_{3,3} & A_{3,5} & A_{3,5} & A_{3,5} \\ A_{3,0} & A_{3,0} & A_{3,2} & A_{3,2} & A_{3,2} & A_{3,3} & A_{3,3} & A_{3,5} & A_{3,5} & A_{3,5} \\ \hline A_{5,0} & A_{5,0} & A_{5,2} & A_{5,2} & A_{5,2} & A_{5,3} & A_{5,3} & A_{5,5} & A_{5,5} & A_{5,5} \\ A_{5,0} & A_{5,0} & A_{5,2} & A_{5,2} & A_{5,2} & A_{5,3} & A_{5,3} & A_{5,5} & A_{5,5} & A_{5,5} \\ A_{5,0} & A_{5,0} & A_{5,2} & A_{5,2} & A_{5,2} & A_{5,3} & A_{5,3} & A_{5,5} & A_{5,5} & A_{5,5} \\ \end{array} \right).\end{split}\]

The notation also extends in a straightforward fashion to bivectors. For example, if \(\vec \beta = (u_0,u_1,u_2,u_3,u_0^*,u_1^*,u_2^*,u_3^*)\) and \(S=\{1,1,2\}\), then \(\vec \beta_{(S)} = (u_1,u_1,u_2,u_1^*,u_1^*,u_2^*)\).

This notation becomes handy when describing certain algorithms for the calculation of the hafnian and torontonian introduced in the rest of the documentation.

Combining reduction and vector in diagonal

Here we show some basic examples of how the reduction and vector in diagonal operations work together.

Consider the following matrix

\[\begin{split}\Sigma = \left( \begin{array}{ccc|ccc} A_{0,0} & A_{0,1} & A_{0,2} & B_{0,0} & B_{0,1} & B_{0,2} \\ A_{1,0} & A_{1,1} & A_{1,2} & B_{1,0} & B_{1,1} & B_{1,2} \\ A_{2,0} & A_{2,1} & A_{2,2} & B_{2,0} & B_{2,1} & B_{2,2} \\ \hline C_{0,0} & C_{0,1} & C_{0,2} & D_{0,0} & D_{0,1} & D_{0,2} \\ C_{1,0} & C_{1,1} & C_{1,2} & D_{1,0} & D_{1,1} & D_{1,2} \\ C_{2,0} & C_{2,1} & C_{2,2} & D_{2,0} & D_{2,1} & D_{2,2} \\ \end{array} \right),\end{split}\]

and the bivector \(\vec{\beta} = (\beta_0,\beta_1,\beta_2,\beta_0^*,\beta_1^*,\beta_2^*)\), and suppose we are given the index vector \(\bm{u} = (1,0,3)\). Then we can calculate the following

\[\begin{split}\Sigma_{(\bm{u})} &= \left( \begin{array}{cccc|cccc} A_{0,0} & A_{0,2} & A_{0,2} & A_{0,2} & B_{0,0} & B_{0,2} & B_{0,2} & B_{0,2} \\ A_{2,0} & A_{2,2} & A_{2,2} & A_{2,2} & B_{2,0} & B_{2,2} & B_{2,2} & B_{2,2} \\ A_{2,0} & A_{2,2} & A_{2,2} & A_{2,2} & B_{2,0} & B_{2,2} & B_{2,2} & B_{2,2} \\ A_{2,0} & A_{2,2} & A_{2,2} & A_{2,2} & B_{2,0} & B_{2,2} & B_{2,2} & B_{2,2} \\ \hline C_{0,0} & C_{0,2} & C_{0,2} & C_{0,2} & D_{0,0} & D_{0,2} & D_{0,2} & D_{0,2} \\ C_{2,0} & C_{2,2} & C_{2,2} & C_{2,2} & D_{2,0} & D_{2,2} & D_{2,2} & D_{2,2} \\ C_{2,0} & C_{2,2} & C_{2,2} & C_{2,2} & D_{2,0} & D_{2,2} & D_{2,2} & D_{2,2} \\ C_{2,0} & C_{2,2} & C_{2,2} & C_{2,2} & D_{2,0} & D_{2,2} & D_{2,2} & D_{2,2} \\ \end{array} \right),\\ \vec \beta_{(\bm{u})} &= (\beta_0,\beta_2,\beta_2,\beta_2,\beta_0^*,\beta_2^*,\beta_2^*,\beta_2^*),\end{split}\]

and finally write

\[\begin{split}\text{vid}(\Sigma_{(\bm{u})},\vec \beta_{(\bm{u})})= \left( \begin{array}{cccc|cccc} \beta_{0} & A_{0,2} & A_{0,2} & A_{0,2} & B_{0,0} & B_{0,2} & B_{0,2} & B_{0,2} \\ A_{2,0} & \beta_{2} & A_{2,2} & A_{2,2} & B_{2,0} & B_{2,2} & B_{2,2} & B_{2,2} \\ A_{2,0} & A_{2,2} & \beta_{2} & A_{2,2} & B_{2,0} & B_{2,2} & B_{2,2} & B_{2,2} \\ A_{2,0} & A_{2,2} & A_{2,2} & \beta_{2} & B_{2,0} & B_{2,2} & B_{2,2} & B_{2,2} \\ \hline C_{0,0} & C_{0,2} & C_{0,2} & C_{0,2} & \beta_{0}^* & D_{0,2} & D_{0,2} & D_{0,2} \\ C_{2,0} & C_{2,2} & C_{2,2} & C_{2,2} & D_{2,0} & \beta_{2}^* & D_{2,2} & D_{2,2} \\ C_{2,0} & C_{2,2} & C_{2,2} & C_{2,2} & D_{2,0} & D_{2,2} & \beta_{2}^* & D_{2,2} \\ C_{2,0} & C_{2,2} & C_{2,2} & C_{2,2} & D_{2,0} & D_{2,2} & D_{2,2} & \beta_{2}^* \\ \end{array} \right).\end{split}\]

Note that because there are repetitions, the diagonal elements of the matrix \(\Sigma\) appear off the diagonal in \(\Sigma_{(\bm{u})}\) and also in \(\text{vid}(\Sigma_{(\bm{u})},\vec{\beta}_{(\bm{u})})\).

One can also ignore the block structure of the matrix \(\Sigma\) and bivector \(\vec{\beta}\), treat them as six-dimensional objects, and use an index vector of the same length. If we now define \(\bm{p} = (1,0,3,0,2,2)\), one finds

\[\begin{split}\Sigma_{\bm{p}} &= \left( \begin{array}{cccccccc} A_{0,0} & A_{0,2} & A_{0,2} & A_{0,2} & B_{0,1} & B_{0,1} & B_{0,2} & B_{0,2} \\ A_{2,0} & A_{2,2} & A_{2,2} & A_{2,2} & B_{2,1} & B_{2,1} & B_{2,2} & B_{2,2} \\ A_{2,0} & A_{2,2} & A_{2,2} & A_{2,2} & B_{2,1} & B_{2,1} & B_{2,2} & B_{2,2} \\ A_{2,0} & A_{2,2} & A_{2,2} & A_{2,2} & B_{2,1} & B_{2,1} & B_{2,2} & B_{2,2} \\ C_{1,0} & C_{1,2} & C_{1,2} & C_{1,2} & D_{1,1} & D_{1,1} & D_{1,2} & D_{1,2} \\ C_{1,0} & C_{1,2} & C_{1,2} & C_{1,2} & D_{1,1} & D_{1,1} & D_{1,2} & D_{1,2} \\ C_{2,0} & C_{2,2} & C_{2,2} & C_{2,2} & D_{2,1} & D_{2,1} & D_{2,2} & D_{2,2} \\ C_{2,0} & C_{2,2} & C_{2,2} & C_{2,2} & D_{2,1} & D_{2,1} & D_{2,2} & D_{2,2} \\ \end{array} \right),\\ \vec{\beta}_{\bm{p}}&=(\beta_0,\beta_2,\beta_2,\beta_2,\beta_1^*,\beta_1^*,\beta_2^*,\beta_2^*).\end{split}\]

Note that we wrote \(\Sigma_{\bm{p}}\) and not \(\Sigma_{(\bm{p})}\) to indicate that we ignore the block structure of the matrix \(\Sigma\).
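The plain reduction (the one that ignores block structure, as in \(\Sigma_{\bm{p}}\)) and the vid operation can be sketched in a few lines of NumPy. This is a minimal illustration of the semantics defined above, not the library's implementation; the helper names reduction and vid simply mirror the operations described in this section, and thewalrus.reduction provides the same matrix operation.

```python
import numpy as np

def reduction(A, rpt):
    # Repeat the i-th row/column of a matrix (or the i-th entry of a
    # vector) rpt[i] times, as defined above for index vectors.
    idx = np.repeat(np.arange(len(rpt)), rpt)
    if A.ndim == 1:
        return A[idx]
    return A[np.ix_(idx, idx)]

def vid(A, beta):
    # "Vector in diagonal": place the entries of beta on the diagonal
    # of A, leaving the off-diagonal entries untouched.
    out = A.copy()
    np.fill_diagonal(out, beta)
    return out

A = np.arange(9).reshape(3, 3)
beta = np.array([10.0, 20.0, 30.0])
rpt = [1, 0, 3]
M = vid(reduction(A, rpt), reduction(beta, rpt))
```

With the index vector \((1,0,3)\), row/column 0 is kept once, row/column 1 is dropped, and row/column 2 is repeated three times, producing a \(4\times 4\) matrix whose diagonal carries the reduced bivector, just as in the \(\text{vid}(\Sigma_{(\bm{u})},\vec \beta_{(\bm{u})})\) example above.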

References

[1] Andreas Björklund and Thore Husfeldt. Exact algorithms for exact satisfiability and number of perfect matchings. Algorithmica, 52(2):226–249, 2008.

[2] Raymond Kan. From moments of sum to moments of product. Journal of Multivariate Analysis, 99(3):542–554, 2008.

[3] Mikko Koivisto. Partitioning into sets of bounded cardinality. In International Workshop on Parameterized and Exact Computation, 258–263. Springer, 2009.

[4] Jesper Nederlof. Fast polynomial-space algorithms using Möbius inversion: improving on Steiner tree and related problems. In International Colloquium on Automata, Languages, and Programming, 713–725. Springer, 2009.

[5] Andreas Björklund. Counting perfect matchings as fast as Ryser. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, 914–921. SIAM, 2012.

[6] Andreas Björklund, Brajesh Gupt, and Nicolás Quesada. A faster hafnian formula for complex matrices and its benchmarking on a supercomputer. Journal of Experimental Algorithmics (JEA), 24(1):11, 2019.

[7] Marek Cygan and Marcin Pilipczuk. Faster exponential-time algorithms in graphs of bounded average degree. Information and Computation, 243:75–85, 2015.

[8] Herbert John Ryser. Combinatorial Mathematics. Number 14. Mathematical Association of America; distributed by Wiley, New York, 1963.

[9] Grzegorz Rempala and Jacek Wesolowski. Symmetric Functionals on Random Matrices and Random Matchings Problems. Volume 147. Springer Science & Business Media, 2007.

[10] Donald E. Knuth. The Art of Computer Programming, Vol. 2: Seminumerical Algorithms. Addison-Wesley, 3rd edition, 1998.

[11] Leslie G. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8(2):189–201, 1979.

[12] Alexander Barvinok. Approximating permanents and hafnians. arXiv preprint arXiv:1601.07518, 2016.

[13] Piotr Sankowski. Alternative algorithms for counting all matchings in graphs. In Helmut Alt and Michel Habib, editors, STACS 2003, 427–438. Springer, Berlin, Heidelberg, 2003.

[14] Alexander Barvinok. Polynomial time algorithms to approximate permanents and mixed discriminants within a simply exponential factor. Random Structures & Algorithms, 14(1):29–61, 1999.

[15] Mark Rudelson, Alex Samorodnitsky, Ofer Zeitouni, and others. Hafnians, perfect matchings and Gaussian matrices. The Annals of Probability, 44(4):2858–2888, 2016.

[16] Nicolás Quesada, Juan Miguel Arrazola, and Nathan Killoran. Gaussian boson sampling using threshold detectors. Physical Review A, 98(6):062322, 2018.

[17] Alexander I. Barvinok. Two algorithmic results for the traveling salesman problem. Mathematics of Operations Research, 21(1):65–84, 1996.

[18] S. Berkowitz and F. J. Garner. The calculation of multidimensional Hermite polynomials and Gram-Charlier coefficients. Math. Comput., 24(111):537–545, 1970.

[19] Alessio Serafini. Quantum Continuous Variables: A Primer of Theoretical Methods. CRC Press, 2017.

[20] Christian Weedbrook, Stefano Pirandola, Raúl García-Patrón, Nicolas J. Cerf, Timothy C. Ralph, Jeffrey H. Shapiro, and Seth Lloyd. Gaussian quantum information. Reviews of Modern Physics, 84(2):621, 2012.

[21] Bernard Picinbono. Second-order complex random vectors and normal distributions. IEEE Transactions on Signal Processing, 44(10):2637–2640, 1996.

[22] R. Simon, N. Mukunda, and Biswadeb Dutta. Quantum-noise matrix for multimode systems: \(U(n)\) invariance, squeezing, and normal forms. Physical Review A, 49(3):1567, 1994.

[23] Nicolás Quesada. Franck-Condon factors by counting perfect matchings of graphs with loops. The Journal of Chemical Physics, 150(16):164113, 2019.

[24] N. Quesada, L. G. Helt, J. Izaac, J. M. Arrazola, R. Shahrokhshahi, C. R. Myers, and K. K. Sabapathy. Simulating realistic non-Gaussian state preparation. Phys. Rev. A, 100:022341, 2019.

[25] Craig S. Hamilton, Regina Kruse, Linda Sansoni, Sonja Barkhofen, Christine Silberhorn, and Igor Jex. Gaussian boson sampling. Physical Review Letters, 119(17):170501, 2017.

[26] V. V. Dodonov, O. V. Man'ko, and V. I. Man'ko. Multidimensional Hermite polynomials and photon distribution for polymode mixed light. Physical Review A, 50(1):813, 1994.

[27] Nicolás Quesada and Juan Miguel Arrazola. Exact simulation of Gaussian boson sampling in polynomial space and exponential time. Physical Review Research, 2(2):023005, 2020.

[28] Saleh Rahimi-Keshari, Austin P. Lund, and Timothy C. Ralph. What can quantum optics say about computational complexity theory? Physical Review Letters, 114(6):060501, 2015.

[29] Soran Jahangiri, Juan Miguel Arrazola, Nicolás Quesada, and Nathan Killoran. Point processes with Gaussian boson sampling. Physical Review E, 101(2):022134, 2020.

[30] Alexander Barvinok. Combinatorics and Complexity of Partition Functions. Volume 274. Springer, 2016.

[31] Eduardo R. Caianiello. On quantum field theory—I: explicit solution of Dyson's equation in electrodynamics without use of Feynman graphs. Il Nuovo Cimento (1943-1954), 10(12):1634–1652, 1953.

[32] Settimo Termini. Imagination and Rigor: Essays on Eduardo R. Caianiello's Scientific Heritage. Springer, 2006.

[33] Maurice M. Mizrahi. Generalized Hermite polynomials. Journal of Computational and Applied Mathematics, 1(3):137–140, 1975.

[34] Pieter Kok and Samuel L. Braunstein. Multi-dimensional Hermite polynomials in quantum optics. Journal of Physics A: Mathematical and General, 34(31):6185, 2001.

[35] Kurt Bernardo Wolf. Canonical transforms. I. Complex linear transforms. Journal of Mathematical Physics, 15(8):1295–1301, 1974.

[36] P. Kramer, M. Moshinsky, and T. H. Seligman. Complex extensions of canonical transformations and quantum mechanics. In E. M. Loebl, editor, Group Theory and Its Applications, pages 250–300. Academic, New York, 1975.

[37] E. V. Doktorov, I. A. Malkin, and V. I. Man'ko. Dynamical symmetry of vibronic transitions in polyatomic molecules and the Franck-Condon principle. Journal of Molecular Spectroscopy, 64(2):302–326, 1977.

Overview

The Walrus contains a Python interface and a low-level C++ library, libwalrus.

Python interface

  • The thewalrus Python interface provides access to various highly optimized hafnian, permanent, and torontonian algorithms

  • The thewalrus.quantum submodule provides access to various utility functions that act on Gaussian quantum states

  • The thewalrus.samples submodule provides access to algorithms to sample from the hafnian or the torontonian of Gaussian quantum states

  • The thewalrus.symplectic submodule provides access to a convenient set of symplectic matrices and utility functions to manipulate them

  • The thewalrus.random submodule provides access to random unitary, symplectic and covariance matrices

  • The thewalrus.fock_gradients submodule provides access to the Fock representation of certain continuous-variable gates and their gradients

  • The thewalrus.reference submodule provides access to pure-Python reference implementations of the hafnian, loop hafnian, and torontonian

Low-level libraries

The low-level libwalrus C++ library is a header-only library containing various parallelized algorithms for computing the hafnian, loop hafnian, permanent, and torontonian of complex, real, and integer matrices. This library is used under the hood by the Python thewalrus module.

You can also use the libwalrus library directly in your C++ projects - just ensure that the include folder is in your include path, and add

#include <libwalrus.hpp>

at the top of your C++ source file. See the file example.cpp, as well as the corresponding Makefile, for an example of how the libwalrus library can be accessed directly from C++ code.

Alternatively, if you install The Walrus package as a Python wheel using pip, you can link against the static pre-built library provided.

Octave

In addition, two auxiliary Octave functions are provided: octave/hafnian.m and octave/loophafnian.m.

Python library

This is the top-level module of The Walrus Python interface, containing functions for computing the hafnian, loop hafnian, and torontonian of matrices.

Algorithm terminology

Eigenvalue hafnian algorithm

The algorithm described in A faster hafnian formula for complex matrices and its benchmarking on a supercomputer, [6]. This algorithm scales like \(\mathcal{O}(n^3 2^{n/2})\), and supports calculation of the loop hafnian.

Recursive hafnian algorithm

The algorithm described in Counting perfect matchings as fast as Ryser [5]. This algorithm scales like \(\mathcal{O}(n^4 2^{n/2})\). This algorithm does not currently support the loop hafnian.

Repeating hafnian algorithm

The algorithm described in From moments of sum to moments of product, [2]. This method is more efficient for matrices with repeated rows and columns, and supports calculation of the loop hafnian.

Approximate hafnian algorithm

The algorithm described in Polynomial time algorithms to approximate permanents and mixed discriminants within a simply exponential factor, [14]. This algorithm allows us to efficiently approximate the hafnian of matrices with non-negative elements. This is done by sampling determinants; the larger the number of samples taken, the higher the accuracy.
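The idea behind this estimator can be sketched as follows (a simplified illustration of Barvinok's determinant-sampling method, not the library's implementation; approx_hafnian is a hypothetical helper name). For a symmetric matrix \(A\) with non-negative entries, draw a random antisymmetric matrix \(W\) with \(W_{ij} = g_{ij}\sqrt{A_{ij}}\) for \(i<j\), where the \(g_{ij}\) are independent standard normals; then \(\mathbb{E}[\det W] = \text{haf}(A)\), so averaging determinants over many samples estimates the hafnian:

```python
import numpy as np

def approx_hafnian(A, num_samples=10000, seed=None):
    # Estimate haf(A) for a symmetric matrix A with non-negative
    # entries by averaging determinants of random antisymmetric
    # matrices W with W_ij = g_ij * sqrt(A_ij), g_ij ~ N(0, 1).
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    root = np.sqrt(A)
    total = 0.0
    for _ in range(num_samples):
        g = rng.standard_normal((n, n))
        w = np.triu(g, 1) * root  # scaled strict upper triangle
        w = w - w.T               # antisymmetrize
        total += np.linalg.det(w)
    return total / num_samples

# The 4x4 all-ones matrix has hafnian 3 (three perfect matchings),
# so the estimate should concentrate around 3 as num_samples grows.
estimate = approx_hafnian(np.ones((4, 4)), num_samples=20000, seed=42)
```

The statistical error shrinks as the number of samples grows, which is the sampling/accuracy trade-off described above.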

Batched hafnian algorithm

An algorithm that allows the calculation of hafnians of all reductions of a given matrix up to the cutoff (resolution) provided. Internally, this algorithm makes use of the multidimensional Hermite polynomials as per The calculation of multidimensional Hermite polynomials and Gram-Charlier coefficients [18].

Low-rank hafnian algorithm

An algorithm for calculating the hafnian of a rank-\(r\) matrix \(\bm{A}\) of size \(n \times n\) by factorizing it as \(\bm{A} = \bm{G} \bm{G}^T\), where \(\bm{G}\) is of size \(n \times r\). The algorithm is described in Appendix C of A faster hafnian formula for complex matrices and its benchmarking on a supercomputer, [6].

Python wrappers

hafnian(A[, loop, recursive, tol, quad, …])

Returns the hafnian of a matrix.

hafnian_repeated(A, rpt[, mu, loop, tol])

Returns the hafnian of matrix with repeated rows/columns.

hafnian_batched(A, cutoff[, mu, tol, …])

Calculates the hafnian of reduction(A, k) for all possible values of vector k below the specified cutoff.

tor(A[, fsum])

Returns the Torontonian of a matrix.

perm(A[, quad, fsum])

Returns the permanent of a matrix via the Ryser formula.

permanent_repeated(A, rpt)

Calculates the permanent of matrix \(A\), where the ith row/column of \(A\) is repeated \(rpt_i\) times.

hermite_multidimensional(R, cutoff[, y, …])

Returns the multidimensional Hermite polynomials \(H_k^{(R)}(y)\).

Pure Python functions

reduction(A, rpt)

Calculates the reduction of an array by a vector of indices.

version()

Get version number of The Walrus

low_rank_hafnian(G)

Returns the hafnian of the low rank matrix \(\bm{A} = \bm{G} \bm{G}^T\) where \(\bm{G}\) is rectangular of size \(n \times r\) with \(r \leq n\).

hafnian(A, loop=False, recursive=True, tol=1e-12, quad=True, approx=False, num_samples=1000)[source]

Returns the hafnian of a matrix.

For more direct control, you may wish to call haf_real(), haf_complex(), or haf_int() directly.

Parameters
  • A (array) – a square, symmetric array of even dimensions.

  • loop (bool) – If True, the loop hafnian is returned. Default is False.

  • recursive (bool) – If True, the recursive algorithm is used. Note: the recursive algorithm does not currently support the loop hafnian. If loop=True, then this keyword argument is ignored.

  • tol (float) – the tolerance when checking that the matrix is symmetric. Default tolerance is 1e-12.

  • quad (bool) – If True, the hafnian algorithm is performed with quadruple precision.

  • approx (bool) – If True, an approximation algorithm is used to estimate the hafnian. Note that the approximation algorithm can only be applied to matrices A that only have non-negative entries.

  • num_samples (int) – If approx=True, the approximation algorithm performs num_samples iterations for estimation of the hafnian of the non-negative matrix A.

Returns

the hafnian of matrix A.

Return type

np.int64 or np.float64 or np.complex128
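As a point of reference for what hafnian() computes, the hafnian is a sum over all perfect matchings of the matrix indices. A brute-force pure-Python version, in the spirit of the reference implementations in thewalrus.reference (haf here is a hypothetical helper, usable only for small matrices):

```python
def haf(A):
    # Hafnian by explicit recursion over perfect matchings:
    # match index 0 with every possible partner j, multiply by
    # A[0][j], and recurse on the remaining indices.
    n = len(A)
    if n == 0:
        return 1
    if n % 2:
        return 0  # odd-dimensional matrices have hafnian 0
    rest = list(range(1, n))
    total = 0
    for k, j in enumerate(rest):
        others = rest[:k] + rest[k + 1:]
        sub = [[A[r][c] for c in others] for r in others]
        total += A[0][j] * haf(sub)
    return total

# The hafnian of the 4x4 all-ones matrix counts the perfect
# matchings of the complete graph K_4, so count == 3.
count = haf([[1] * 4 for _ in range(4)])
```

This enumeration takes \((n-1)!!\) terms, which is why the optimized algorithms above (scaling like \(2^{n/2}\)) are used in practice.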

hafnian_repeated(A, rpt, mu=None, loop=False, tol=1e-12)[source]

Returns the hafnian of matrix with repeated rows/columns.

The reduction() function may be used to obtain the matrix with repeated rows and columns as per rpt.

As a result, the following are identical:

>>> hafnian_repeated(A, rpt)
>>> hafnian(reduction(A, rpt))

However, using hafnian_repeated in the case where there are a large number of repeated rows and columns (\(\sum_{i}rpt_i \gg N\)) can be significantly faster.

Note

If \(rpt=(1, 1, \dots, 1)\), then

>>> hafnian_repeated(A, rpt) == hafnian(A)

For more direct control, you may wish to call haf_rpt_real() or haf_rpt_complex() directly.

Parameters
  • A (array) – a square, symmetric \(N\times N\) array.

  • rpt (Sequence) – a length-\(N\) positive integer sequence, corresponding to the number of times each row/column of matrix \(A\) is repeated.

  • mu (array) – a vector of length \(N\) representing the vector of means/displacement. If not provided, mu is set to the diagonal of matrix A. Note that this only affects the loop hafnian.

  • loop (bool) – If True, the loop hafnian is returned. Default is False.

  • use_eigen (bool) – if True (default), the Eigen linear algebra library is used for matrix multiplication. If the hafnian library was compiled with BLAS/Lapack support, then BLAS will be used for matrix multiplication.

  • tol (float) – the tolerance when checking that the matrix is symmetric. Default tolerance is 1e-12.

Returns

the hafnian of matrix A.

Return type

np.int64 or np.float64 or np.complex128

hafnian_batched(A, cutoff, mu=None, tol=1e-12, renorm=False, make_tensor=True)[source]

Calculates the hafnian of reduction(A, k) for all possible values of vector k below the specified cutoff.

Here,

  • \(A\) is an \(n\times n\) square matrix

  • \(k\) is a vector of (non-negative) integers with the same dimensions as \(A\), i.e., \(k = (k_0,k_1,\ldots,k_{n-1})\), and where \(0 \leq k_j < \texttt{cutoff}\).

The function hafnian_repeated() can be used to calculate the reduced hafnian for a specific value of \(k\); see the documentation for more information.

Note

If mu is not None, this function instead returns hafnian(np.fill_diagonal(reduction(A, k), reduction(mu, k)), loop=True). This calculation can only be performed if the matrix \(A\) is invertible.

Parameters
  • A (array) – a square, symmetric \(N\times N\) array.

  • cutoff (int) – maximum size of the subindices in the Hermite polynomial

  • mu (array) – a vector of length \(N\) representing the vector of means/displacement

  • renorm (bool) – If True, the returned hafnians are normalized, that is, \(\text{haf}(\text{reduction}(A, k))/\sqrt{\prod_i k_i!}\) (or \(\text{lhaf}(\text{fill\_diagonal}(\text{reduction}(A, k),\text{reduction}(\mu, k)))\) if mu is not None)

  • make_tensor – If False, returns a flattened one dimensional array instead of a tensor with the values of the hafnians.

Returns

the values of the hafnians for each value of \(k\) up to the cutoff

Return type

(array)

tor(A, fsum=False)[source]

Returns the Torontonian of a matrix.

For more direct control, you may wish to call tor_real() or tor_complex() directly.

The input matrix is cast to quadruple precision internally for a quadruple precision torontonian computation.

Parameters
  • A (array) – a np.complex128, square, symmetric array of even dimensions.

  • fsum (bool) – if True, the Shewchuk algorithm for more accurate summation is performed. This can significantly increase the accuracy of the computation, but no casting to quadruple precision takes place, as the Shewchuk algorithm only supports double precision.

Returns

the torontonian of matrix A.

Return type

np.float64 or np.complex128

perm(A, quad=True, fsum=False)[source]

Returns the permanent of a matrix via the Ryser formula.

For more direct control, you may wish to call perm_real() or perm_complex() directly.

Parameters
  • A (array) – a square array.

  • quad (bool) – If True, the input matrix is cast to a long double matrix internally for a quadruple precision permanent computation.

  • fsum (bool) – Whether to use the fsum method for higher accuracy summation. Note that if fsum is true, double precision will be used, and the quad keyword argument will be ignored.

Returns

the permanent of matrix A.

Return type

np.float64 or np.complex128

permanent_repeated(A, rpt)[source]

Calculates the permanent of matrix \(A\), where the ith row/column of \(A\) is repeated \(rpt_i\) times.

This function constructs the matrix

\[\begin{split}B = \begin{bmatrix} 0 & A\\ A^T & 0 \end{bmatrix},\end{split}\]

and then calculates \(perm(A)=haf(B)\), by calling

>>> hafnian_repeated(B, rpt*2, loop=False)

Parameters
  • A (array) – matrix of size [N, N]

  • rpt (Sequence) – sequence of N positive integers indicating the corresponding rows/columns of A to be repeated.

Returns

the permanent of matrix A.

Return type

np.int64 or np.float64 or np.complex128
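The Ryser formula used by perm() can be written as an inclusion-exclusion sum over column subsets, \(\text{perm}(A) = (-1)^n \sum_{S \neq \emptyset} (-1)^{|S|} \prod_{i} \sum_{j \in S} a_{ij}\). A small pure-Python sketch of that formula (perm_ryser is a hypothetical helper, not the library's optimized C++ implementation):

```python
from itertools import combinations

def perm_ryser(A):
    # Permanent via Ryser's inclusion-exclusion formula,
    # summing over all non-empty subsets of columns.
    n = len(A)
    total = 0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** size * prod
    return (-1) ** n * total
```

For example, the permanent of [[1, 2], [3, 4]] is 1*4 + 2*3 = 10.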

reduction(A, rpt)[source]

Calculates the reduction of an array by a vector of indices.

This is equivalent to repeating the ith row/column of \(A\), \(rpt_i\) times.

Parameters
  • A (array) – matrix of size [N, N]

  • rpt (Sequence) – sequence of N positive integers indicating the corresponding rows/columns of A to be repeated.

Returns

the reduction of A by the index vector rpt

Return type

array

hermite_multidimensional(R, cutoff, y=None, renorm=False, make_tensor=True, modified=False)[source]

Returns the multidimensional Hermite polynomials \(H_k^{(R)}(y)\).

Here \(R\) is an \(n \times n\) square matrix, and \(y\) is an \(n\) dimensional vector. The polynomials are parametrized by the multi-index \(k=(k_0,k_1,\ldots,k_{n-1})\), and are calculated for all values \(0 \leq k_j < \text{cutoff}\), thus a tensor of dimensions \(\text{cutoff}^n\) is returned.

This tensor can either be flattened into a vector or returned as an actual tensor with \(n\) indices.

Note

Note that if \(R = (1)\) then \(H_k^{(R)}(y)\) are precisely the well known probabilists’ Hermite polynomials \(He_k(y)\), and if \(R = (2)\) then \(H_k^{(R)}(y)\) are precisely the well known physicists’ Hermite polynomials \(H_k(y)\).

Parameters
  • R (array) – square matrix parametrizing the Hermite polynomial family

  • cutoff (int) – maximum size of the subindices in the Hermite polynomial

  • y (array) – vector argument of the Hermite polynomial

  • renorm (bool) – If True, the returned multidimensional Hermite polynomials are normalized as \(H_k^{(R)}(y)/\prod_i k_i!\)

  • make_tensor (bool) – If False, returns a flattened one dimensional array containing the values of the polynomial

  • modified (bool) – whether to return the modified multidimensional Hermite polynomials or the standard ones

Returns

the multidimensional Hermite polynomials

Return type

(array)
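The one-dimensional special cases mentioned in the note above can be checked against the classical three-term recurrences \(He_{k+1}(y) = y\,He_k(y) - k\,He_{k-1}(y)\) (probabilists') and \(H_{k+1}(y) = 2y\,H_k(y) - 2k\,H_{k-1}(y)\) (physicists'). A standalone sketch (these helpers are illustrative and not part of the library):

```python
def hermite_prob(k, y):
    # Probabilists' Hermite polynomial He_k(y) via the recurrence
    # He_{k+1}(y) = y*He_k(y) - k*He_{k-1}(y).
    prev, cur = 1.0, y
    if k == 0:
        return prev
    for j in range(1, k):
        prev, cur = cur, y * cur - j * prev
    return cur

def hermite_phys(k, y):
    # Physicists' Hermite polynomial H_k(y) via the recurrence
    # H_{k+1}(y) = 2*y*H_k(y) - 2*k*H_{k-1}(y).
    prev, cur = 1.0, 2.0 * y
    if k == 0:
        return prev
    for j in range(1, k):
        prev, cur = cur, 2.0 * y * cur - 2.0 * j * prev
    return cur
```

For instance, \(He_2(y) = y^2 - 1\) while \(H_2(y) = 4y^2 - 2\), matching the \(R=(1)\) and \(R=(2)\) cases named in the note.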

version()[source]

Get version number of The Walrus

Returns

The package version number

Return type

str

Quantum algorithms

This submodule provides access to various utility functions that act on Gaussian quantum states.

For more details on how the hafnian relates to various properties of Gaussian quantum states, see:

Fock states and tensors

pure_state_amplitude(mu, cov, i[, …])

Returns the \(\langle i | \psi\rangle\) element of the state ket of a Gaussian state defined by covariance matrix cov.

state_vector(mu, cov[, post_select, …])

Returns the state vector of a (PNR post-selected) Gaussian state.

density_matrix_element(mu, cov, i, j[, …])

Returns the \(\langle i | \rho | j \rangle\) element of the density matrix of a Gaussian state defined by covariance matrix cov.

density_matrix(mu, cov[, post_select, …])

Returns the density matrix of a (PNR post-selected) Gaussian state.

fock_tensor(S, alpha, cutoff[, choi_r, …])

Calculates the Fock representation of a Gaussian unitary parametrized by the symplectic matrix S and the displacements alpha up to cutoff in Fock space.

probabilities(mu, cov, cutoff[, hbar, rtol, …])

Generate the Fock space probabilities of a Gaussian state up to a Fock space cutoff.

update_probabilities_with_loss(etas, probs)

Given a list of transmissivities and a tensor of probabilities, calculate an updated tensor of probabilities after loss is applied.

update_probabilities_with_noise(probs_noise, …)

Given a list of noise probability distributions for each of the modes and a tensor of probabilities, calculate an updated tensor of probabilities after noise is applied.

fidelity(mu1, cov1, mu2, cov2[, hbar, rtol, …])

Calculates the fidelity between two Gaussian quantum states.

Details
pure_state_amplitude(mu, cov, i, include_prefactor=True, tol=1e-10, hbar=2, check_purity=True)[source]

Returns the \(\langle i | \psi\rangle\) element of the state ket of a Gaussian state defined by covariance matrix cov.

Parameters
  • mu (array) – length-\(2N\) quadrature displacement vector

  • cov (array) – \(2N\times 2N\) covariance matrix

  • i (list) – list of amplitude elements

  • include_prefactor (bool) – if True, the prefactor is automatically calculated and used to scale the result.

  • tol (float) – tolerance for determining if displacement is negligible

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x},\hat{p}]=i\hbar\).

  • check_purity (bool) – if True, the purity of the Gaussian state is checked before calculating the state vector.

Returns

the pure state amplitude

Return type

complex

state_vector(mu, cov, post_select=None, normalize=False, cutoff=5, hbar=2, check_purity=True, **kwargs)[source]

Returns the state vector of a (PNR post-selected) Gaussian state.

The resulting state vector will have shape

\[\underbrace{D\times D \times \cdots \times D}_M\]

where \(D\) is the Fock space cutoff, and \(M\) is the number of non post-selected modes, i.e. M = len(mu)//2 - len(post_select).

If post_select is None then the state vector elements are calculated using the multidimensional Hermite polynomials, which provide a significantly faster evaluation.

Parameters
  • mu (array) – length-\(2N\) means vector in xp-ordering

  • cov (array) – \(2N\times 2N\) covariance matrix in xp-ordering

  • post_select (dict) – dictionary containing the post-selected modes, of the form {mode: value}.

  • normalize (bool) – If True, a post-selected density matrix is re-normalized.

  • cutoff (dim) – the final length (i.e., Hilbert space dimension) of each mode in the density matrix.

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x},\hat{p}]=i\hbar\).

  • check_purity (bool) – if True, the purity of the Gaussian state is checked before calculating the state vector.

Keyword Arguments

choi_r (float or None) – Value of the two-mode squeezing parameter used in the Choi-Jamiolkowski trick in fock_tensor(). This keyword argument should only be used when state_vector is called by fock_tensor().

Returns

the state vector of the Gaussian state

Return type

np.array[complex]

density_matrix_element(mu, cov, i, j, include_prefactor=True, tol=1e-10, hbar=2)[source]

Returns the \(\langle i | \rho | j \rangle\) element of the density matrix of a Gaussian state defined by covariance matrix cov.

Parameters
  • mu (array) – length-\(2N\) quadrature displacement vector

  • cov (array) – \(2N\times 2N\) covariance matrix

  • i (list) – list of density matrix rows

  • j (list) – list of density matrix columns

  • include_prefactor (bool) – if True, the prefactor is automatically calculated and used to scale the result.

  • tol (float) – tolerance for determining if displacement is negligible

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x},\hat{p}]=i\hbar\).

Returns

the density matrix element

Return type

complex

density_matrix(mu, cov, post_select=None, normalize=False, cutoff=5, hbar=2)[source]

Returns the density matrix of a (PNR post-selected) Gaussian state.

The resulting density matrix will have shape

\[\underbrace{D\times D \times \cdots \times D}_{2M}\]

where \(D\) is the Fock space cutoff, and \(M\) is the number of non post-selected modes, i.e. M = len(mu)//2 - len(post_select).

Note that we use the Strawberry Fields convention for indexing the density matrix; the first two dimensions correspond to subsystem 1, the second two dimensions correspond to subsystem 2, etc. If post_select is None then the density matrix elements are calculated using the multidimensional Hermite polynomials which provide a significantly faster evaluation.

Parameters
  • mu (array) – length-\(2N\) means vector in xp-ordering

  • cov (array) – \(2N\times 2N\) covariance matrix in xp-ordering

  • post_select (dict) – dictionary containing the post-selected modes, of the form {mode: value}. If post_select is None the whole non post-selected density matrix is calculated directly using (multidimensional) Hermite polynomials, which is significantly faster than calculating one hafnian at a time.

  • normalize (bool) – If True, a post-selected density matrix is re-normalized.

  • cutoff (dim) – the final length (i.e., Hilbert space dimension) of each mode in the density matrix.

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x},\hat{p}]=i\hbar\).

Returns

the density matrix of the Gaussian state

Return type

np.array[complex]

fock_tensor(S, alpha, cutoff, choi_r=0.881373587019543, check_symplectic=True, sf_order=False, rtol=1e-05, atol=1e-08)[source]

Calculates the Fock representation of a Gaussian unitary parametrized by the symplectic matrix S and the displacements alpha up to cutoff in Fock space.

Parameters
  • S (array) – symplectic matrix

  • alpha (array) – complex vector of displacements

  • cutoff (int) – cutoff in Fock space

  • choi_r (float) – squeezing parameter used for the Choi expansion

  • check_symplectic (boolean) – checks whether the input matrix is symplectic

  • sf_order (boolean) – reshapes the tensor so that it follows the sf ordering of indices

  • rtol (float) – the relative tolerance parameter used in np.allclose

  • atol (float) – the absolute tolerance parameter used in np.allclose

Returns

Tensor containing the Fock representation of the Gaussian unitary

Return type

(array)

probabilities(mu, cov, cutoff, hbar=2.0, rtol=1e-05, atol=1e-08)[source]

Generate the Fock space probabilities of a Gaussian state up to a Fock space cutoff.

Parameters
  • mu (array) – vector of means of length 2*n_modes

  • cov (array) – covariance matrix of shape [2*n_modes, 2*n_modes]

  • cutoff (int) – cutoff in Fock space

  • hbar (float) – value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\)

  • rtol (float) – the relative tolerance parameter used in np.allclose

  • atol (float) – the absolute tolerance parameter used in np.allclose

Returns

Fock space probabilities up to cutoff. The shape of this tensor is [cutoff]*num_modes.

Return type

(array)

update_probabilities_with_loss(etas, probs)[source]

Given a list of transmissivities and a tensor of probabilities, calculate an updated tensor of probabilities after loss is applied.

Parameters
  • etas (list) – List of transmissivities describing the loss in each of the modes

  • probs (array) – Array of probabilities in the different modes

Returns

List of loss-updated probabilities with the same shape as probs.

Return type

array
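For a single mode, the loss update has a simple closed form: if each of \(n\) photons survives independently with probability \(\eta\) (the transmissivity), the updated distribution is \(p'(m) = \sum_{n \geq m} \binom{n}{m} \eta^m (1-\eta)^{n-m}\, p(n)\). A one-mode sketch of this binomial update (apply_loss_single_mode is an illustrative helper; the function above applies the same idea mode by mode on a multimode tensor):

```python
import numpy as np
from math import comb

def apply_loss_single_mode(eta, probs):
    # Push a photon-number distribution through a lossy channel with
    # transmissivity eta: n input photons yield m surviving photons
    # with binomial probability C(n, m) * eta**m * (1 - eta)**(n - m).
    out = np.zeros(len(probs))
    for n, p in enumerate(probs):
        for m in range(n + 1):
            out[m] += p * comb(n, m) * eta ** m * (1 - eta) ** (n - m)
    return out

p = np.array([0.5, 0.3, 0.2])             # example single-mode distribution
lossless = apply_loss_single_mode(1.0, p)  # eta = 1 leaves p unchanged
opaque = apply_loss_single_mode(0.0, p)    # eta = 0 sends all mass to vacuum
```

Note that the update conserves total probability for any \(\eta\), since the binomial weights sum to one for each \(n\).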

update_probabilities_with_noise(probs_noise, probs)[source]

Given a list of noise probability distributions for each of the modes and a tensor of probabilities, calculate an updated tensor of probabilities after noise is applied.

Parameters
  • probs_noise (list) – List of probability distributions describing the noise in each of the modes

  • probs (array) – Array of probabilities in the different modes

Returns

List of noise-updated probabilities with the same shape as probs.

Return type

array

Utility functions

reduced_gaussian(mu, cov, modes)

Returns the vector of means and the covariance matrix of the specified modes.

Xmat(N)

Returns the matrix \(X_n = \begin{bmatrix}0 & I_n\\ I_n & 0\end{bmatrix}\)

Qmat(cov[, hbar])

Returns the \(Q\) Husimi matrix of the Gaussian state.

Covmat(Q[, hbar])

Returns the Wigner covariance matrix in the \(xp\)-ordering of the Gaussian state.

Amat(cov[, hbar, cov_is_qmat])

Returns the \(A\) matrix of the Gaussian state whose hafnian gives the photon number probabilities.

Beta(mu[, hbar])

Returns the vector of complex displacements and conjugate displacements.

Means(beta[, hbar])

Returns the vector of real quadrature displacements.

prefactor(mu, cov[, hbar])

Returns the prefactor.

find_scaling_adjacency_matrix(A, n_mean)

Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean.

mean_number_of_clicks(A)

Given an adjacency matrix this function calculates the mean number of clicks.

find_scaling_adjacency_matrix_torontonian(A, …)

Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a mean number of clicks equal to c_mean when measured with threshold detectors.

gen_Qmat_from_graph(A, n_mean)

Returns the Qmat xp-covariance matrix associated to a graph with adjacency matrix \(A\) and with mean photon number \(n_{mean}\).

photon_number_mean(mu, cov, j[, hbar])

Calculate the mean photon number of mode j of a Gaussian state.

photon_number_mean_vector(mu, cov[, hbar])

Calculate the mean photon number of each of the modes in a Gaussian state

photon_number_covar(mu, cov, j, k[, hbar])

Calculate the variance/covariance of the photon number distribution of a Gaussian state.

photon_number_covmat(mu, cov[, hbar])

Calculate the covariance matrix of the photon number distribution of a Gaussian state.

is_valid_cov(cov[, hbar, rtol, atol])

Checks if the covariance matrix is a valid quantum covariance matrix.

is_pure_cov(cov[, hbar, rtol, atol])

Checks if the covariance matrix is a valid quantum covariance matrix that corresponds to a quantum pure state

is_classical_cov(cov[, hbar, atol])

Checks if the covariance matrix can be efficiently sampled.

total_photon_num_dist_pure_state(cov[, …])

Calculates the total photon number distribution of a pure state with zero mean.

gen_single_mode_dist(s[, cutoff, N])

Generate the photon number distribution of \(N\) identical single mode squeezed states.

gen_multi_mode_dist(s[, cutoff, padding_factor])

Generates the total photon number distribution of single mode squeezed states with different squeezing values.

Details
Amat(cov, hbar=2, cov_is_qmat=False)[source]

Returns the \(A\) matrix of the Gaussian state whose hafnian gives the photon number probabilities.

Parameters
  • cov (array) – \(2N\times 2N\) covariance matrix

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\).

  • cov_is_qmat (bool) – if True, it is assumed that cov is in fact the Q matrix.

Returns

the \(A\) matrix.

Return type

array

Beta(mu, hbar=2)[source]

Returns the vector of complex displacements and conjugate displacements.

Parameters
  • mu (array) – length-\(2N\) means vector

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\).

Returns

the expectation values \([\langle a_1\rangle, \langle a_2\rangle,\dots,\langle a_N\rangle, \langle a^\dagger_1\rangle, \dots, \langle a^\dagger_N\rangle]\)

Return type

array
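Assuming the convention \(\alpha_j = (q_j + i p_j)/\sqrt{2\hbar}\) used throughout this module, the conversion can be sketched with plain NumPy (an illustrative sketch, not the library implementation):

```python
import numpy as np

def beta_sketch(mu, hbar=2):
    """Complex displacements [alpha, alpha*] from an xp-ordered means vector."""
    N = len(mu) // 2
    # alpha_j = (q_j + i p_j) / sqrt(2 hbar)
    alpha = (mu[:N] + 1j * mu[N:]) / np.sqrt(2 * hbar)
    return np.concatenate([alpha, alpha.conj()])

# one mode with q = 2, p = 0 and hbar = 2 gives alpha = 1
beta = beta_sketch(np.array([2.0, 0.0]))
```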

Covmat(Q, hbar=2)[source]

Returns the Wigner covariance matrix in the \(xp\)-ordering of the Gaussian state. This is the inverse function of Qmat.

Parameters
  • Q (array) – \(2N\times 2N\) Husimi Q matrix

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\).

Returns

the \(xp\)-ordered covariance matrix.

Return type

array

Means(beta, hbar=2)[source]

Returns the vector of real quadrature displacements.

Parameters
  • beta (array) – length-\(2N\) vector of complex displacements and conjugate displacements

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\).

Returns

the quadrature expectation values \([\langle q_1\rangle, \langle q_2\rangle,\dots,\langle q_N\rangle, \langle p_1\rangle, \dots, \langle p_N\rangle]\)

Return type

array

Qmat(cov, hbar=2)[source]

Returns the \(Q\) Husimi matrix of the Gaussian state.

Parameters
  • cov (array) – \(2N\times 2N\) \(xp\)-ordered Wigner covariance matrix

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\).

Returns

the \(Q\) matrix.

Return type

array

Xmat(N)[source]

Returns the matrix \(X_n = \begin{bmatrix}0 & I_n\\ I_n & 0\end{bmatrix}\)

Parameters

N (int) – positive integer

Returns

\(2N\times 2N\) array

Return type

array
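The matrix \(X_N\) defined above can be built directly with NumPy; a minimal sketch equivalent to what Xmat returns:

```python
import numpy as np

def xmat(N):
    """Return the 2N x 2N block matrix [[0, I_N], [I_N, 0]]."""
    I = np.identity(N)
    O = np.zeros((N, N))
    return np.block([[O, I], [I, O]])

X = xmat(2)
```

Note that \(X_N\) is an involution: applying it twice gives back the identity.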

fidelity(mu1, cov1, mu2, cov2, hbar=2, rtol=1e-05, atol=1e-08)[source]

Calculates the fidelity between two Gaussian quantum states.

Note that if the covariance matrices correspond to pure states this function reduces to the modulus squared of the overlap of their state vectors. For the derivation see ‘Quantum Fidelity for Arbitrary Gaussian States’ by Banchi et al.

Parameters
  • mu1 (array) – vector of means of the first state

  • cov1 (array) – covariance matrix of the first state

  • mu2 (array) – vector of means of the second state

  • cov2 (array) – covariance matrix of the second state

  • hbar (float) – value of hbar in the uncertainty relation

  • rtol (float) – the relative tolerance parameter used in np.allclose

  • atol (float) – the absolute tolerance parameter used in np.allclose

Returns

value of the fidelity between the two states

Return type

(float)

find_scaling_adjacency_matrix(A, n_mean)[source]

Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean.

Parameters
  • A (array) – Adjacency matrix

  • n_mean (float) – Mean photon number of the Gaussian state

Returns

Scaling parameter

Return type

float

find_scaling_adjacency_matrix_torontonian(A, c_mean)[source]

Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a mean number of clicks equal to c_mean when measured with threshold detectors.

Parameters
  • A (array) – adjacency matrix

  • c_mean (float) – mean number of clicks of the Gaussian state when measured with threshold detectors

Returns

scaling parameter

Return type

float

fock_tensor(S, alpha, cutoff, choi_r=0.881373587019543, check_symplectic=True, sf_order=False, rtol=1e-05, atol=1e-08)[source]

Calculates the Fock representation of a Gaussian unitary parametrized by the symplectic matrix S and the displacements alpha up to cutoff in Fock space.

Parameters
  • S (array) – symplectic matrix

  • alpha (array) – complex vector of displacements

  • cutoff (int) – cutoff in Fock space

  • choi_r (float) – squeezing parameter used for the Choi expansion

  • check_symplectic (boolean) – checks whether the input matrix is symplectic

  • sf_order (boolean) – reshapes the tensor so that it follows the sf ordering of indices

  • rtol (float) – the relative tolerance parameter used in np.allclose

  • atol (float) – the absolute tolerance parameter used in np.allclose

Returns

Tensor containing the Fock representation of the Gaussian unitary

Return type

(array)

gen_Qmat_from_graph(A, n_mean)[source]

Returns the Qmat xp-covariance matrix associated to a graph with adjacency matrix \(A\) and with mean photon number \(n_{mean}\).

Parameters
  • A (array) – a \(N\times N\) np.float64 (symmetric) adjacency matrix

  • n_mean (float) – mean photon number of the Gaussian state

Returns

the \(2N\times 2N\) Q matrix.

Return type

array

gen_multi_mode_dist(s, cutoff=50, padding_factor=2)[source]

Generates the total photon number distribution of single mode squeezed states with different squeezing values.

Parameters
  • s (array) – array of squeezing parameters

  • cutoff (int) – Fock cutoff

Returns

total photon number distribution

Return type

(array[int])

gen_single_mode_dist(s, cutoff=50, N=1)[source]

Generate the photon number distribution of \(N\) identical single mode squeezed states.

Parameters
  • s (float) – squeezing parameter

  • cutoff (int) – Fock cutoff

  • N (float) – number of squeezed states

Returns

Photon number distribution

Return type

(array)
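For a single squeezed vacuum state, the photon number distribution has the closed form \(P(2n) = \frac{(2n)!}{2^{2n}(n!)^2}\frac{\tanh^{2n}r}{\cosh r}\), with all odd terms vanishing. A minimal sketch of this formula (an illustration, not the library implementation):

```python
import numpy as np
from math import factorial, cosh, tanh

def squeezed_vacuum_dist(r, cutoff=50):
    """Photon number distribution of a single-mode squeezed vacuum state."""
    probs = np.zeros(cutoff)
    for n in range(cutoff // 2):
        # only even photon numbers are populated
        probs[2 * n] = (factorial(2 * n) / (2 ** (2 * n) * factorial(n) ** 2)
                        * tanh(r) ** (2 * n) / cosh(r))
    return probs

dist = squeezed_vacuum_dist(0.5)
```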

is_classical_cov(cov, hbar=2, atol=1e-08)[source]

Checks if the covariance matrix can be efficiently sampled.

Parameters
  • cov (array) – a covariance matrix

  • hbar (float) – value of hbar in the uncertainty relation

Returns

whether the given covariance matrix corresponds to a classical state

Return type

(bool)

is_pure_cov(cov, hbar=2, rtol=1e-05, atol=1e-08)[source]

Checks if the covariance matrix is a valid quantum covariance matrix that corresponds to a quantum pure state

Parameters
  • cov (array) – a covariance matrix

  • hbar (float) – value of hbar in the uncertainty relation

  • rtol (float) – the relative tolerance parameter used in np.allclose

  • atol (float) – the absolute tolerance parameter used in np.allclose

Returns

whether the given covariance matrix corresponds to a pure state

Return type

(bool)

is_valid_cov(cov, hbar=2, rtol=1e-05, atol=1e-08)[source]

Checks if the covariance matrix is a valid quantum covariance matrix.

Parameters
  • cov (array) – a covariance matrix

  • hbar (float) – value of hbar in the uncertainty relation

  • rtol (float) – the relative tolerance parameter used in np.allclose

  • atol (float) – the absolute tolerance parameter used in np.allclose

Returns

whether the given covariance matrix is a valid covariance matrix

Return type

(bool)

loss_mat(eta, cutoff)[source]

Constructs a binomial loss matrix with transmission eta for Fock states with up to cutoff photons.

Parameters
  • eta (float) – Transmission coefficient. eta=0.0 corresponds to complete loss and eta=1.0 corresponds to no loss.

  • cutoff (int) – cutoff in Fock space.

Returns

\(n\times n\) matrix representing the loss.

Return type

array
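The entries of such a loss matrix are binomial. Assuming the convention that element \((n, m)\) is the probability that \(n\) input photons yield \(m\) output photons, a sketch is:

```python
import numpy as np
from math import comb

def binomial_loss_mat(eta, cutoff):
    """L[n, m]: probability that n photons become m under transmission eta."""
    L = np.zeros((cutoff, cutoff))
    for n in range(cutoff):
        for m in range(n + 1):
            # binomial probability of keeping m out of n photons
            L[n, m] = comb(n, m) * eta ** m * (1 - eta) ** (n - m)
    return L

L = binomial_loss_mat(0.7, 5)
```

Each row is a probability distribution, so the rows sum to one, and at unit transmission the matrix reduces to the identity.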

mean_number_of_clicks(A)[source]

Given an adjacency matrix this function calculates the mean number of clicks. For this to make sense the user must provide a matrix with singular values less than or equal to one. See Appendix A.3 of arXiv:1902.00462 by Banchi et al.

Parameters

A (array) – rescaled adjacency matrix

Returns

mean number of clicks

Return type

float

photon_number_covar(mu, cov, j, k, hbar=2)[source]

Calculate the variance/covariance of the photon number distribution of a Gaussian state.

Implements the covariance matrix of the photon number distribution of a Gaussian state according to the last two equations of Part II in ‘Multidimensional Hermite polynomials and photon distribution for polymode mixed light’, Dodonov et al.

\[\begin{split}\sigma_{n_j n_j} &= \frac{1}{2}\left(T_j^2 - 2d_j - \frac{1}{2}\right) + \left<\mathbf{Q}_j\right>\mathcal{M}_j\left<\mathbf{Q}_j\right>, \\ \sigma_{n_j n_k} &= \frac{1}{2}\mathrm{Tr}\left(\Lambda_j \mathbf{M} \Lambda_k \mathbf{M}\right) + \frac{1}{2}\left<\mathbf{Q}\right>\Lambda_j \mathbf{M} \Lambda_k\left<\mathbf{Q}\right>,\end{split}\]

where \(T_j\) and \(d_j\) are the trace and the determinant of the \(2 \times 2\) matrix \(\mathcal{M}_j\), whose elements coincide with the nonzero elements of the matrix \(\mathbf{M}_j = \Lambda_j \mathbf{M} \Lambda_k\), while the two-vector \(\mathbf{Q}_j\) has the components \((q_j, p_j)\). The \(2N \times 2N\) projector matrix \(\Lambda_j\) has only two nonzero elements: \(\left(\Lambda_j\right)_{jj} = \left(\Lambda_j\right)_{j+N,j+N} = 1\). Note that the convention for mu used here differs from the one used in Dodonov et al.; both conventions yield the same results in this particular case.

Parameters
  • mu (array) – vector of means of the Gaussian state using the ordering \([q_1, q_2, \dots, q_n, p_1, p_2, \dots, p_n]\)

  • cov (array) – the covariance matrix of the Gaussian state

  • j (int) – the j th mode

  • k (int) – the k th mode

  • hbar (float) – the hbar convention used in the commutation relation \([q, p]=i\hbar\)

Returns

the covariance for the photon numbers at modes \(j\) and \(k\).

Return type

float

photon_number_covmat(mu, cov, hbar=2)[source]

Calculate the covariance matrix of the photon number distribution of a Gaussian state.

Parameters
  • mu (array) – vector of means of the Gaussian state using the ordering \([q_1, q_2, \dots, q_n, p_1, p_2, \dots, p_n]\)

  • cov (array) – the covariance matrix of the Gaussian state

  • hbar (float) – the hbar convention used in the commutation relation \([q, p]=i\hbar\)

Returns

the covariance matrix of the photon number distribution

Return type

array

photon_number_mean(mu, cov, j, hbar=2)[source]

Calculate the mean photon number of mode j of a Gaussian state.

Parameters
  • mu (array) – vector of means of the Gaussian state using the ordering \([q_1, q_2, \dots, q_n, p_1, p_2, \dots, p_n]\)

  • cov (array) – the covariance matrix of the Gaussian state

  • j (int) – the j th mode

  • hbar (float) – the hbar convention used in the commutation relation \([q, p]=i\hbar\)

Returns

the mean photon number in mode \(j\).

Return type

float

photon_number_mean_vector(mu, cov, hbar=2)[source]

Calculate the mean photon number of each of the modes in a Gaussian state

Parameters
  • mu (array) – vector of means of the Gaussian state using the ordering \([q_1, q_2, \dots, q_n, p_1, p_2, \dots, p_n]\)

  • cov (array) – the covariance matrix of the Gaussian state

  • hbar (float) – the hbar convention used in the commutation relation \([q, p]=i\hbar\)

Returns

the vector of means of the photon number distribution

Return type

array
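Conceptually, the mean photon number of mode \(j\) follows from the quadrature moments: \(\langle n_j \rangle = \frac{1}{2\hbar}\left(\sigma_{q_j q_j} + \sigma_{p_j p_j} + \mu_{q_j}^2 + \mu_{p_j}^2\right) - \frac{1}{2}\). A sketch of this relation, assuming \(xp\)-ordered inputs (illustrative only):

```python
import numpy as np

def mean_photon_vector(mu, cov, hbar=2):
    """Mean photon number of each mode from xp-ordered mu and cov."""
    N = len(mu) // 2
    q, p = mu[:N], mu[N:]
    var_q = np.diag(cov)[:N]
    var_p = np.diag(cov)[N:]
    return (var_q + var_p + q ** 2 + p ** 2) / (2 * hbar) - 0.5

# vacuum state: zero means and cov = (hbar/2) * identity gives zero photons
nbar = mean_photon_vector(np.zeros(4), np.identity(4))
```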

prefactor(mu, cov, hbar=2)[source]

Returns the prefactor.

\[prefactor = \frac{e^{-\beta Q^{-1}\beta^*/2}}{n_1!\cdots n_m! \sqrt{|Q|}}\]
Parameters
  • mu (array) – length-\(2N\) vector of mean values \([\alpha,\alpha^*]\)

  • cov (array) – \(2N\times 2N\) xp-covariance matrix

Returns

the prefactor

Return type

float

probabilities(mu, cov, cutoff, hbar=2.0, rtol=1e-05, atol=1e-08)[source]

Generate the Fock space probabilities of a Gaussian state up to a Fock space cutoff.

Parameters
  • mu (array) – vector of means of length 2*n_modes

  • cov (array) – covariance matrix of shape [2*n_modes, 2*n_modes]

  • cutoff (int) – cutoff in Fock space

  • hbar (float) – value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\)

  • rtol (float) – the relative tolerance parameter used in np.allclose

  • atol (float) – the absolute tolerance parameter used in np.allclose

Returns

Fock space probabilities up to cutoff. The shape of this tensor is [cutoff]*num_modes.

Return type

(array)

reduced_gaussian(mu, cov, modes)[source]

Returns the vector of means and the covariance matrix of the specified modes.

Parameters
  • mu (array) – a length-\(2N\) np.float64 vector of means.

  • cov (array) – a \(2N\times 2N\) np.float64 covariance matrix representing an \(N\) mode quantum state.

  • modes (int or Sequence[int]) – indices of the requested modes

Returns

where means is an array containing the vector of means, and cov is a square array containing the covariance matrix.

Return type

tuple (means, cov)
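Extracting a reduced Gaussian state amounts to index selection on the \(xp\)-ordered vector and matrix: pick the \(q\) and \(p\) rows and columns of the requested modes. A sketch of the idea:

```python
import numpy as np

def reduced_gaussian_sketch(mu, cov, modes):
    """Select means and covariance of the given modes (xp-ordering)."""
    N = len(mu) // 2
    modes = np.atleast_1d(modes)
    # q indices come first, the matching p indices are offset by N
    idx = np.concatenate([modes, modes + N])
    return mu[idx], cov[np.ix_(idx, idx)]

mu = np.arange(6.0)                 # 3-mode means vector
cov = np.diag(np.arange(1.0, 7.0))  # diagonal covariance for illustration
mu_r, cov_r = reduced_gaussian_sketch(mu, cov, [0, 2])
```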

total_photon_num_dist_pure_state(cov, cutoff=50, hbar=2, padding_factor=2)[source]

Calculates the total photon number distribution of a pure state with zero mean.

Parameters
  • cov (array) – \(2N\times 2N\) covariance matrix in xp-ordering

  • cutoff (int) – Fock cutoff

  • hbar (float) – the value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\)

  • padding_factor (int) – expanded size of the photon distribution to avoid accumulation of errors

Returns

Total photon number distribution

Return type

(array)

update_probabilities_with_loss(etas, probs)[source]

Given a list of transmissivities and a tensor of probabilities, calculate an updated tensor of probabilities after loss is applied.

Parameters
  • etas (list) – List of transmissivities describing the loss in each of the modes

  • probs (array) – Array of probabilities in the different modes

Returns

List of loss-updated probabilities with the same shape as probs.

Return type

array
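For a single mode, the loss update amounts to contracting the probability vector with a binomial loss matrix. A minimal single-mode sketch (the helper `loss_matrix` below encodes the binomial transmission probabilities and is illustrative, not the library routine):

```python
import numpy as np
from math import comb

def loss_matrix(eta, cutoff):
    # L[n, m]: probability that n photons become m under transmission eta
    L = np.zeros((cutoff, cutoff))
    for n in range(cutoff):
        for m in range(n + 1):
            L[n, m] = comb(n, m) * eta ** m * (1 - eta) ** (n - m)
    return L

def apply_loss_single_mode(eta, probs):
    """Update a single-mode photon number distribution after loss."""
    return loss_matrix(eta, len(probs)).T @ probs

# a single photon through a 50% transmissive channel
probs = np.array([0.0, 1.0, 0.0])
lossy = apply_loss_single_mode(0.5, probs)
```

The updated distribution splits the single photon evenly between loss and survival, and total probability is preserved.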

update_probabilities_with_noise(probs_noise, probs)[source]

Given a list of noise probability distributions for each of the modes and a tensor of probabilities, calculate an updated tensor of probabilities after noise is applied.

Parameters
  • probs_noise (list) – List of probability distributions describing the noise in each of the modes

  • probs (array) – Array of probabilities in the different modes

Returns

List of noise-updated probabilities with the same shape as probs.

Return type

array

Classical Sampling Algorithms

This submodule provides access to classical sampling algorithms for thermal states going through an interferometer specified by a real orthogonal matrix. The quantum state to be sampled is specified by a positive semidefinite real matrix and a mean photon number. The algorithm implemented here was first derived in the reference below, which also gives precise details of the implementation:

  • Soran Jahangiri, Juan Miguel Arrazola, Nicolás Quesada, and Nathan Killoran. “Point processes with Gaussian boson sampling”, Phys. Rev. E 101, 022134 (2020).

Summary

rescale_adjacency_matrix_thermal(A, n_mean)

Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean for thermal sampling.

rescale_adjacency_matrix(A, n_mean, scale[, …])

Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean.

generate_thermal_samples(ls, O[, num_samples])

Generates samples of the Gaussian state in terms of the mean photon number parameter ls and the interferometer O.

Code details

generate_thermal_samples(ls, O, num_samples=1)[source]

Generates samples of the Gaussian state in terms of the mean photon number parameter ls and the interferometer O.

Parameters
  • ls (array) – squashing parameters

  • O (array) – Orthogonal matrix representing the interferometer

  • num_samples (int) – Number of samples to generate

Returns

samples

Return type

list[array]

rescale_adjacency_matrix(A, n_mean, scale, check_positivity=True, check_symmetry=True, rtol=1e-05, atol=1e-08)[source]

Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean.

Parameters
  • A (array) – Adjacency matrix, assumed to be positive semi-definite and real

  • n_mean (float) – Mean photon number of the Gaussian state

  • scale (float) – Determines whether to rescale the matrix for thermal sampling (scale = 1.0) or for squashed sampling (scale = 2.0)

  • check_positivity (bool) – Checks if the matrix A is positive semidefinite

  • check_symmetry (bool) – Checks if the matrix is symmetric

  • rtol – relative tolerance for the checks

  • atol – absolute tolerance for the checks

Returns

rescaled eigenvalues and eigenvectors of the matrix A

Return type

tuple(array,array)

rescale_adjacency_matrix_thermal(A, n_mean, check_positivity=True, check_symmetry=True, rtol=1e-05, atol=1e-08)[source]

Returns the scaling parameter by which the adjacency matrix A should be rescaled so that the Gaussian state that encodes it has a total mean photon number n_mean for thermal sampling.

Parameters
  • A (array) – Adjacency matrix, assumed to be positive semi-definite and real

  • n_mean (float) – Mean photon number of the Gaussian state

  • check_positivity (bool) – Checks if the matrix A is positive semidefinite

  • check_symmetry (bool) – Checks if the matrix is symmetric

  • rtol – relative tolerance for the checks

  • atol – absolute tolerance for the checks

Returns

rescaled eigenvalues and eigenvectors of the matrix A

Return type

tuple(array,array)

Symplectic Operations

Module name: thewalrus.symplectic

Contains some Gaussian operations and auxiliary functions.

Auxiliary functions

expand(S, modes, N)

Expands a symplectic matrix S to act on the entire system.

expand_vector(alpha, mode, N[, hbar])

Returns the phase-space displacement vector associated to a displacement.

reduced_state(mu, cov, modes)

Returns the vector of means and the covariance matrix of the specified modes.

is_symplectic(S[, rtol, atol])

Checks if matrix S is a symplectic matrix

sympmat(N)

Returns the matrix \(\Omega_n = \begin{bmatrix}0 & I_n\\ -I_n & 0\end{bmatrix}\)

Gaussian states

vacuum_state(modes[, hbar])

Returns the vacuum state.

Gates and operations

two_mode_squeezing(r, phi)

Two-mode squeezing.

interferometer(U)

Interferometer.

loss(mu, cov, T, mode[, nbar, hbar])

Loss channel acting on a Gaussian state.

mean_photon_number(mu, cov[, hbar])

Calculates the mean photon number for a given one-mode state.

beam_splitter(theta, phi)

Beam-splitter.

rotation(theta)

Rotation gate.

Code details
autonne(A, rtol=1e-05, atol=1e-08, svd_order=True)[source]

Autonne-Takagi decomposition of a complex symmetric (not Hermitian!) matrix.

Parameters
  • A (array) – square, symmetric matrix

  • rtol (float) – the relative tolerance parameter between A and A.T

  • atol (float) – the absolute tolerance parameter between A and A.T

  • svd_order (boolean) – whether to return the result by ordering the singular values of A in descending (True) or ascending (False) order.

Returns

(r, U), where r are the singular values, and U is the Autonne-Takagi unitary, such that \(A = U \mathrm{diag}(r) U^T\).

Return type

tuple[array, array]

beam_splitter(theta, phi)[source]

Beam-splitter.

Parameters
  • theta (float) – transmissivity parameter

  • phi (float) – phase parameter

Returns

symplectic-orthogonal transformation matrix of an interferometer with angles theta and phi

Return type

array

expand(S, modes, N)[source]

Expands a symplectic matrix S to act on the entire system.

Parameters
  • S (array) – a \(2M\times 2M\) symplectic matrix

  • modes (Sequence[int]) – the list of modes S acts on

  • N (int) – full size of the system

Returns

the resulting \(2N\times 2N\) symplectic matrix

Return type

array
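In \(xp\)-ordering, this embedding places the blocks of S at the \(q\) and \(p\) indices of the chosen modes inside a \(2N\times 2N\) identity. A sketch of the idea (the single-mode squeezer below, and its sign convention, are illustrative assumptions):

```python
import numpy as np

def expand_sketch(S, modes, N):
    """Embed a 2M x 2M symplectic S (xp-ordered) into a 2N-dimensional identity."""
    S_full = np.identity(2 * N)
    # q indices of the modes, then the matching p indices offset by N
    idx = np.concatenate([np.asarray(modes), np.asarray(modes) + N])
    S_full[np.ix_(idx, idx)] = S
    return S_full

# single-mode squeezer acting on mode 1 of a 3-mode system
r = 0.4
S1 = np.diag([np.exp(-r), np.exp(r)])  # xp-ordered single-mode squeezing
S = expand_sketch(S1, [1], 3)
```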

expand_vector(alpha, mode, N, hbar=2.0)[source]

Returns the phase-space displacement vector associated to a displacement.

Parameters
  • alpha (complex) – complex displacement

  • mode (int) – mode index

  • N (int) – number of modes

Returns

phase-space displacement vector of size 2*N

Return type

array

interferometer(U)[source]

Interferometer.

Parameters

U (array) – unitary matrix

Returns

symplectic transformation matrix

Return type

array

is_symplectic(S, rtol=1e-05, atol=1e-08)[source]

Checks if matrix S is a symplectic matrix

Parameters

S (array) – a square matrix

Returns

whether the given matrix is symplectic

Return type

(bool)

loss(mu, cov, T, mode, nbar=0, hbar=2)[source]

Loss channel acting on a Gaussian state.

Parameters
  • mu (array) – means vector

  • cov (array) – covariance matrix

  • T (float) – transmission; 1 corresponds to no loss, 0 to full loss.

  • mode (int) – mode to act on

  • nbar (float) – thermal mean population (default 0)

  • hbar (float) – (default 2) the value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\)

Returns

the means vector and covariance matrix of the resulting state

Return type

tuple[array]

mean_photon_number(mu, cov, hbar=2)[source]

Calculates the mean photon number for a given one-mode state.

Parameters
  • mu (array) – length-2 vector of means

  • cov (array) – \(2\times 2\) covariance matrix

  • hbar (float) – (default 2) the value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\)

Returns

the photon number expectation and variance

Return type

tuple

reduced_state(mu, cov, modes)[source]

Returns the vector of means and the covariance matrix of the specified modes.

Parameters

  • mu (array) – a length-\(2N\) vector of means

  • cov (array) – a \(2N\times 2N\) covariance matrix

  • modes (int or Sequence[int]) – indices of the requested modes

Returns

means is an array containing the vector of means, and cov is a square array containing the covariance matrix. Both use the \(xp\)-ordering.

Return type

tuple (means, cov)

rotation(theta)[source]

Rotation gate.

Parameters

theta (float) – rotation angle

Returns

rotation matrix by angle theta

Return type

array

squeezing(r, phi)[source]

Squeezing. In Fock space this corresponds to \(\exp\left(\tfrac{1}{2} r e^{i \phi} \left(a^2 - a^{\dagger 2}\right)\right)\).

Parameters
  • r (float) – squeezing magnitude

  • phi (float) – rotation parameter

Returns

symplectic transformation matrix

Return type

array

sympmat(N)[source]

Returns the matrix \(\Omega_n = \begin{bmatrix}0 & I_n\\ -I_n & 0\end{bmatrix}\)

Parameters

N (int) – positive integer

Returns

\(2N\times 2N\) array

Return type

array

two_mode_squeezing(r, phi)[source]

Two-mode squeezing.

Parameters
  • r (float) – squeezing magnitude

  • phi (float) – rotation parameter

Returns

symplectic transformation matrix

Return type

array
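A symplectic matrix S satisfies \(S \Omega S^T = \Omega\), with \(\Omega\) the matrix returned by sympmat. As a check, here is a sketch of the two-mode squeezer in \(xp\)-ordering for the special case \(\phi = 0\) (an assumption made for brevity):

```python
import numpy as np

def omega(N):
    """The symplectic form [[0, I_N], [-I_N, 0]]."""
    I = np.identity(N)
    O = np.zeros((N, N))
    return np.block([[O, I], [-I, O]])

def tms_sketch(r):
    """Two-mode squeezing symplectic for phi = 0, xp-ordered."""
    c, s = np.cosh(r), np.sinh(r)
    A = np.array([[c, s], [s, c]])    # acts on (q1, q2)
    B = np.array([[c, -s], [-s, c]])  # acts on (p1, p2)
    Z = np.zeros((2, 2))
    return np.block([[A, Z], [Z, B]])

S = tms_sketch(0.3)
Om = omega(2)
```

The symplectic condition \(S \Omega S^T = \Omega\) holds for any r, and at r = 0 the gate reduces to the identity.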

vacuum_state(modes, hbar=2.0)[source]

Returns the vacuum state.

Parameters
  • modes (int) – number of modes

  • hbar (float) – (default 2) the value of \(\hbar\) in the commutation relation \([\hat{x}, \hat{p}]=i\hbar\)

Returns

the means vector and covariance matrix of the vacuum state

Return type

list[array]

Random matrices

This submodule provides access to utility functions to generate random unitary, symplectic and covariance matrices.

randnc(*arg)[source]

Normally distributed array of random complex numbers.

random_covariance(N, hbar=2, pure=False, block_diag=False)[source]

Random covariance matrix.

Parameters
  • N (int) – number of modes

  • hbar (float) – the value of \(\hbar\) to use in the definition of the quadrature operators \(x\) and \(p\)

  • pure (bool) – If True, a random covariance matrix corresponding to a pure state is returned.

  • block_diag (bool) – If True, uses passive Gaussian transformations that are orthogonal instead of unitary. This implies that the positions \(x\) do not mix with the momenta \(p\) and thus the covariance matrix is block diagonal.

Returns

random \(2N\times 2N\) covariance matrix

Return type

array

random_interferometer(N, real=False)[source]

Random unitary matrix representing an interferometer. For more details, see [mezzadri2006].

Parameters
  • N (int) – number of modes

  • real (bool) – return a random real orthogonal matrix

Returns

random \(N\times N\) unitary distributed with the Haar measure

Return type

array
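The standard construction (following Mezzadri) draws a matrix of complex Gaussians, takes its QR decomposition, and absorbs the phases of the diagonal of R so that the resulting unitary is Haar distributed. A sketch:

```python
import numpy as np

def haar_unitary(N, seed=None):
    """Haar-random N x N unitary via QR decomposition with phase correction."""
    rng = np.random.default_rng(seed)
    Z = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    # multiply each column by the phase of the matching diagonal entry of R
    d = np.diag(R)
    return Q * (d / np.abs(d))

U = haar_unitary(4, seed=42)
```

Without the phase correction the raw Q from np.linalg.qr is unitary but not Haar distributed, which is the subtlety Mezzadri's paper addresses.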

random_symplectic(N, passive=False, block_diag=False, scale=1.0)[source]

Random symplectic matrix representing a Gaussian transformation.

The squeezing parameters \(r\) for active transformations are randomly sampled from the standard normal distribution, while passive transformations are randomly sampled from the Haar measure. Note that for the symplectic group there is no notion of Haar measure since this group is not compact.

Parameters
  • N (int) – number of modes

  • passive (bool) – If True, returns a passive Gaussian transformation (i.e., one that preserves photon number). If False, returns an active transformation.

  • block_diag (bool) – If True, uses passive Gaussian transformations that are orthogonal instead of unitary. This implies that the positions \(q\) do not mix with the momenta \(p\) and thus the symplectic operator is block diagonal

  • scale (float) – Sets the scale of the random values used as squeezing parameters. They will range from 0 to \(\sqrt{2}\texttt{scale}\)

Returns

random \(2N\times 2N\) symplectic matrix

Return type

array

Fock gradients of Gaussian gates

This module contains the Fock representation of the standard Gaussian gates as well as their gradients.

displacement(r, phi, cutoff[, dtype])

Calculates the matrix elements of the displacement gate using a recurrence relation.

squeezing(r, theta, cutoff[, dtype])

Calculates the matrix elements of the squeezing gate using a recurrence relation.

beamsplitter(theta, phi, cutoff[, dtype])

Calculates the Fock representation of the beamsplitter.

two_mode_squeezing(r, theta, cutoff[, dtype])

Calculates the matrix elements of the two-mode squeezing gate recursively.

grad_displacement(T, r, phi)

Calculates the gradients of the displacement gate with respect to the displacement magnitude and angle.

grad_squeezing(T, r, phi)

Calculates the gradients of the squeezing gate with respect to the squeezing magnitude and angle

grad_beamsplitter(T, theta, phi)

Calculates the gradients of the beamsplitter gate with respect to the transmissivity angle and reflection phase

grad_two_mode_squeezing(T, r, theta)

Calculates the gradients of the two-mode squeezing gate with respect to the squeezing magnitude and angle

Reference implementations

This submodule provides access to pure-Python reference implementations of the hafnian and loop hafnian by summing over the set of perfect matching permutations or the set of single pair matchings.

For more details on these definitions see:

  • Andreas Björklund, Brajesh Gupt, and Nicolás Quesada. “A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer” arxiv:1805.12498 (2018)

Reference functions

hafnian(M[, loop])

Returns the (loop) hafnian of the matrix \(M\).

Details
hafnian(M, loop=False)[source]

Returns the (loop) hafnian of the matrix \(M\).

\(M\) can be any two-dimensional object of square shape \(m\times m\) for which the elements (i, j) can be accessed via Python indexing M[i, j], and for which said elements have well defined multiplication __mul__ and addition __add__ special methods.

For example, this includes nested lists and NumPy arrays.

In particular, one can use this function to calculate symbolic hafnians implemented as SymPy matrices.

Parameters
  • M (array) – a square matrix

  • loop (boolean) – if set to True, the loop hafnian is returned

Returns

The (loop) hafnian of M

Return type

scalar
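The defining recursion over perfect matchings, \(\mathrm{haf}(M) = \sum_j M_{0j}\,\mathrm{haf}(M_{\setminus\{0,j\}})\), gives a compact (if exponentially slow) reference for the loop-free case; a sketch:

```python
import numpy as np

def haf_ref(M):
    """Hafnian by recursion over perfect matchings (no loops)."""
    n = len(M)
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0  # odd-sized matrices have no perfect matchings
    total = 0
    # match index 0 with each j, then recurse on the remaining submatrix
    for j in range(1, n):
        keep = [k for k in range(1, n) if k != j]
        total += M[0][j] * haf_ref(M[np.ix_(keep, keep)])
    return total

h4 = haf_ref(np.ones((4, 4)))
```

For the all-ones \(2n\times 2n\) matrix the hafnian counts the perfect matchings of the complete graph, i.e. the double factorial \((2n-1)!!\).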

Auxiliary functions

memoized(f[, maxsize])

Decorator used to memoize a generator.

partitions(s[, singles, pairs])

Returns the partitions of a tuple in terms of pairs and singles.

spm(s)

Generator for the set of single pair matchings.

pmp(s)

Generator for the set of perfect matching permutations.

T(n)

Returns the \(n\)th telephone number.

Details
memoized(f, maxsize=1000)[source]

Decorator used to memoize a generator.

The standard approach of using functools.lru_cache cannot be used, as it only memoizes the generator object, not the results of the generator.

See https://stackoverflow.com/a/10726355/10958457 for the original implementation.

Parameters
  • f (function or generator) – function or generator to memoize

  • maxsize (int) – positive integer that defines the maximum size of the cache

Returns

the memoized function or generator

Return type

function or generator

partitions(s, singles=True, pairs=True)[source]

Returns the partitions of a tuple in terms of pairs and singles.

Parameters
  • s (tuple) – a tuple representing the (multi-)set that will be partitioned. Note that it must hold that len(s) >= 3.

  • singles (boolean) – allow singles in the partitions

  • pairs (boolean) – allow pairs in the partitions

Returns

a generator that goes through all the single-double partitions of the tuple

Return type

generator

spm(s)[source]

Generator for the set of single pair matchings.

Parameters

s (tuple) – an input tuple

Returns

the set of single pair matching of the tuple s

Return type

generator

pmp(s)[source]

Generator for the set of perfect matching permutations.

Parameters

s (tuple) – an input tuple

Returns

the set of perfect matching permutations of the tuple s

Return type

generator

T(n)[source]

Returns the \(n\)th telephone number.

They satisfy the recursion relation \(T(n) = T(n-1)+(n-1)T(n-2)\) and \(T(0)=T(1)=1\).

See https://oeis.org/A000085 for more details.

Parameters

n (integer) – index

Returns

the nth telephone number

Return type

int
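The recursion stated above translates directly into a memoized function; a sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def telephone(n):
    """n-th telephone number: T(n) = T(n-1) + (n-1) * T(n-2), T(0) = T(1) = 1."""
    if n < 2:
        return 1
    return telephone(n - 1) + (n - 1) * telephone(n - 2)

vals = [telephone(n) for n in range(7)]
```

The first few values match OEIS A000085: 1, 1, 2, 4, 10, 26, 76.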

The libwalrus C++ library

The libwalrus C++ library is provided as a header-only library, libwalrus.hpp, which can be included at the top of your source file:

#include <libwalrus.hpp>

The following templated functions are then available for use within the libwalrus namespace.

Example

For instance, consider the following program example.cpp, which calculates the loop hafnian of several all-ones matrices:

#include <iostream>
#include <complex>
#include <vector>
#include <libwalrus.hpp>


int main() {
    int nmax = 10;

    for (int m = 1; m <= nmax; m++) {
        // create a 2m*2m all ones matrix
        int n = 2 * m;
        std::vector<std::complex<double>> mat(n * n, 1.0);

        // calculate the hafnian
        std::complex<double> hafval = libwalrus::loop_hafnian(mat);
        // print out the result
        std::cout << hafval << std::endl;
    }

    return 0;
}

This can be compiled with the GNU g++ compiler as follows,

$ g++ example.cpp -o example -std=c++11 -O3 -Wall -I/path/to/libwalrus.hpp -I/path/to/Eigen -fopenmp

where /path/to/libwalrus.hpp is the path to the directory containing libwalrus.hpp, /path/to/Eigen is the path to the Eigen C++ linear algebra header library, and the -fopenmp flag instructs the compiler to parallelize the compiled program using OpenMP.
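
As a rough cross-check of the example's output, the loop hafnian can also be computed by direct enumeration in Python (a brute-force sketch, not the library's algorithm); for the n×n all-ones matrix it reproduces the telephone numbers T(n):

```python
def loop_hafnian_bruteforce(A):
    """Loop hafnian by direct enumeration: sum over all matchings of the
    index set, where an unmatched index i contributes the diagonal A[i][i]."""
    def rec(idx):
        if not idx:
            return 1
        i, rest = idx[0], idx[1:]
        total = A[i][i] * rec(rest)           # index i left as a loop
        for k, j in enumerate(rest):          # index i paired with index j
            total += A[i][j] * rec(rest[:k] + rest[k + 1:])
        return total
    return rec(tuple(range(len(A))))

# The loop hafnian of the n x n all-ones matrix counts all matchings
# with fixed points allowed, i.e. the telephone numbers T(n).
ones = [[1.0] * 4 for _ in range(4)]
print(loop_hafnian_bruteforce(ones))  # → 10.0 == T(4)
```

This exponential-time check is only practical for small matrices, which is precisely why the library implements the faster algorithms referenced below.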

Additionally, you may instruct Eigen to simply act as a ‘frontend’ to an installed LAPACKE library. To do so, you must pass additional flags:

$ g++ example.cpp -o example -std=c++11 -O3 -Wall -I/path/to/libwalrus.hpp -I/path/to/Eigen \
-fopenmp -DLAPACKE -llapacke -lblas

Below, the main interface (available as templated functions) as well as all auxiliary functions are summarized and listed.

Note

If compiling with the clang compiler provided by Xcode on macOS, OpenMP is natively supported; however, the libomp library must be installed and linked separately. One approach is to use the Homebrew package manager:

$ brew install eigen libomp
$ clang example.cpp -o example -O3 -Wall -fPIC -shared -Xpreprocessor -fopenmp -lomp \
-I/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/

Main interface

The following functions are intended as the main interface to the C++ libwalrus library. Most support parallelization via OpenMP.

libwalrus::hafnian_recursive()

Returns the hafnian of a matrix using the recursive algorithm described in Counting perfect matchings as fast as Ryser [5].

libwalrus::hafnian()

Returns the hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.

libwalrus::loop_hafnian()

Returns the loop hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.

libwalrus::hafnian_rpt()

Returns the hafnian of a matrix with repeated rows and columns using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.

libwalrus::hafnian_approx()

Returns the approximate hafnian of a matrix with non-negative entries by sampling over determinants. The higher the number of samples, the better the accuracy.

libwalrus::torontonian()

Returns the Torontonian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.

libwalrus::torontonian_fsum()

Returns the torontonian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498, with increased accuracy via the fsum summation algorithm.

libwalrus::permanent()

Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering.

libwalrus::perm_fsum()

Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering, with increased accuracy via the fsum summation algorithm.

libwalrus::hermite_multidimensional_cpp()

Returns photon number statistics of a Gaussian state for a given covariance matrix as described in Multidimensional Hermite polynomials and photon distribution for polymode mixed light arxiv:9308033.

API

See here for full details on the C++ API and the libwalrus namespace.

C++ Library API
Class Hierarchy
Full API
Namespaces
Namespace libwalrus
Detailed Description

Contains all algorithms for computing the hafnian, torontonian, and permanent of matrices.

Functions
Functions
Function dec2bin(llint&, int&)
Function Documentation
std::vector<int> dec2bin(llint &k, int &n)

Decimal to binary conversion

Parameters

Function dec2bin(char *, unsigned long long int, Byte)
Function Documentation
void dec2bin(char *dst, unsigned long long int x, Byte len)

Convert decimal number x to character vector dst of length len representing a binary number

Parameters
  • dst: character array to store the binary number

  • x: decimal integer

  • len: length of the binary character array
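
In Python, the same conversion can be sketched as follows (hypothetical helper; the C++ version writes into a preallocated char array rather than returning a list):

```python
def dec2bin(x, length):
    """Return the binary digits of x, most-significant digit first,
    zero-padded to the requested length."""
    return [int(b) for b in format(x, "0{}b".format(length))]
```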

Function find2
Function Documentation
Byte find2(char *dst, Byte len, Byte *pos)

Given a string of length len, finds the positions at which it contains a 1 and, for each such position i, stores 2*i and 2*i+1 in consecutive slots of the array pos.

It also returns twice the number of ones in the array dst.

Return

returns twice the number of ones in array dst.

Parameters
  • dst: character array representing binary digits.

  • len: length of the array dst.

  • pos: resulting character array of length 2*len storing the indices at which dst contains the values 1.

Function igray
Function Documentation
llint igray(llint n)

Gray code generator.

Return

the corresponding Gray code

Parameters
  • n:
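
The binary-reflected Gray code has the standard closed form n XOR (n >> 1), which can be sketched as:

```python
def gray(n):
    """Binary-reflected Gray code of n: n XOR (n >> 1).
    Consecutive codes differ in exactly one bit."""
    return n ^ (n >> 1)
```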

Function left_most_set_bit
Function Documentation
int left_most_set_bit(llint n)

Returns the left-most set bit.

Return

the left-most set bit

Parameters
  • n:

Template Function libwalrus::do_chunk
Function Documentation
template<typename T>
T libwalrus::do_chunk(std::vector<T> &mat, int n, unsigned long long int X, unsigned long long int chunksize)

Calculates the partial sum \(X,X+1,\dots,X+\text{chunksize}\) using the Cygan and Pilipczuk formula for the hafnian of matrix mat.

Note that if X=0 and chunksize=pow(2.0, n/2), then the full hafnian is calculated.

This function uses OpenMP (if available) to parallelize the reduction.

Return

the partial sum for hafnian

Parameters
  • mat: vector representing the flattened matrix

  • n: size of the matrix

  • X: initial index of the partial sum

  • chunksize: length of the partial sum

Template Function libwalrus::do_chunk_loops
Function Documentation
template<typename T>
T libwalrus::do_chunk_loops(std::vector<T> &mat, std::vector<T> &C, std::vector<T> &D, int n, unsigned long long int X, unsigned long long int chunksize)

Calculates the partial sum \(X,X+1,\dots,X+\text{chunksize}\) using the Cygan and Pilipczuk formula for the loop hafnian of matrix mat.

Note that if X=0 and chunksize=pow(2.0, n/2), then the full loop hafnian is calculated.

This function uses OpenMP (if available) to parallelize the reduction.

Return

the partial sum for the loop hafnian

Parameters
  • mat: vector representing the flattened matrix

  • C: the diagonal elements of the matrix mat

  • D: the diagonal elements of the matrix mat, with every consecutive pair swapped (i.e., C[0]==D[1], C[1]==D[0], C[2]==D[3], C[3]==D[2], etc.).

  • n: size of the matrix

  • X: initial index of the partial sum

  • chunksize: length of the partial sum

Function libwalrus::find2T
Function Documentation
void libwalrus::find2T(char *dst, Byte len, Byte *pos, char offset)

Given a string of length len, finds the positions at which it contains a 1 and, for each such position i, stores 2*i and 2*i+1 in consecutive slots of the array pos.

It also returns twice the number of ones in the array dst.

Return

returns twice the number of ones in array dst.

Parameters
  • dst: character array representing binary digits.

  • len: length of the array dst.

  • pos: resulting character array of length 2*len storing the indices at which dst contains the values 1.

Template Function libwalrus::get_binom_coeff
Function Documentation
template<typename T>
T libwalrus::get_binom_coeff(T N, T K)

Returns the binomial coefficient \(N!/(K!(N-K)!)\). Adapted from http://blog.plover.com/math/choose.html

Return

\(N!/(K!(N-K)!)\)

Parameters
  • N:

  • K:
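
A sketch of the multiplicative approach referenced above (hypothetical `binom` helper), which alternates multiplication and exact integer division so that intermediate values stay small and integral instead of computing full factorials:

```python
def binom(n, k):
    """Binomial coefficient C(n, k) via the multiplicative formula.
    After step i the running value equals C(n - k + i, i), so the
    integer division is always exact."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)  # exploit symmetry C(n, k) == C(n, n - k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i
    return result
```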

Template Function libwalrus::hafnian
Function Documentation
template<typename T>
T libwalrus::hafnian(std::vector<T> &mat)

Returns the hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.

Return

hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Function libwalrus::hafnian_approx
Function Documentation
double libwalrus::hafnian_approx(std::vector<double> &mat, int &nsamples)

Returns an approximation to the hafnian of a matrix with non-negative entries.

The approximation follows a stochastic algorithm in which the hafnian is estimated as a sum of determinants of matrices. The accuracy of the approximation increases with the number of samples.

This is a wrapper around the templated function libwalrus::hafnian_nonneg for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.

Return

the approximate hafnian

Parameters
  • mat: vector representing the flattened matrix

  • nsamples: positive integer representing the number of samples to perform

Function libwalrus::hafnian_eigen(std::vector<std::complex<double>>&)
Function Documentation
std::complex<double> libwalrus::hafnian_eigen(std::vector<std::complex<double>> &mat)

Returns the hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.

This is a wrapper around the templated function hafnian() for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices.

Return

hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Function libwalrus::hafnian_eigen(std::vector<double>&)
Function Documentation
double libwalrus::hafnian_eigen(std::vector<double> &mat)

Returns the hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.

This is a wrapper around the templated function hafnian() for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.

Return

hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Template Function libwalrus::hafnian_nonneg
Function Documentation
template<typename T>
long double libwalrus::hafnian_nonneg(std::vector<T> &mat, int &nsamples)

Returns an approximation to the hafnian of a matrix with non-negative entries.

The approximation follows a stochastic algorithm in which the hafnian is estimated as a sum of determinants of matrices. The accuracy of the approximation increases with the number of samples.

Return

the approximate hafnian

Parameters
  • mat: vector representing the flattened matrix

  • nsamples: positive integer representing the number of samples to perform

Template Function libwalrus::hafnian_recursive
Function Documentation
template<typename T>
T libwalrus::hafnian_recursive(std::vector<T> &mat)

Returns the hafnian of a matrix.

Returns the hafnian of a matrix using the recursive algorithm described in Counting perfect matchings as fast as Ryser [5], where it is labelled as ‘Algorithm 2’.

Modified with permission from https://github.com/eklotek/Hafnian.

Return

hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Function libwalrus::hafnian_recursive_quad(std::vector<std::complex<double>>&)
Function Documentation
std::complex<double> libwalrus::hafnian_recursive_quad(std::vector<std::complex<double>> &mat)

Returns the hafnian of a matrix using the recursive algorithm described in Counting perfect matchings as fast as Ryser [5], where it is labelled as ‘Algorithm 2’.

Modified with permission from https://github.com/eklotek/Hafnian.

This is a wrapper around the templated function libwalrus::hafnian_recursive for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.

Return

the hafnian

Parameters
  • mat: vector representing the flattened matrix

Function libwalrus::hafnian_recursive_quad(std::vector<double>&)
Function Documentation
double libwalrus::hafnian_recursive_quad(std::vector<double> &mat)

Returns the hafnian of a matrix using the recursive algorithm described in Counting perfect matchings as fast as Ryser [5], where it is labelled as ‘Algorithm 2’.

Modified with permission from https://github.com/eklotek/Hafnian.

This is a wrapper around the templated function libwalrus::hafnian_recursive for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.

Return

the hafnian

Parameters
  • mat: vector representing the flattened matrix

Template Function libwalrus::hafnian_rpt
Function Documentation
template<typename T>
T libwalrus::hafnian_rpt(std::vector<T> &mat, std::vector<int> &rpt)

Returns the hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.

Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.

Return

hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

  • rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.
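
The semantics of the rpt argument can be checked with a small brute-force sketch (hypothetical helpers, not the library's algorithm): expand mat according to rpt, then compute the hafnian by enumerating perfect matchings.

```python
def expand_by_rpt(mat, rpt):
    """Repeat row/column i of mat rpt[i] times to build the full matrix.
    (Here mat is a list of rows rather than a flattened vector.)"""
    idx = [i for i, r in enumerate(rpt) for _ in range(r)]
    return [[mat[i][j] for j in idx] for i in idx]

def hafnian_bruteforce(A):
    """Hafnian by summing over all perfect matchings of the index set."""
    if len(A) % 2:
        return 0
    def rec(idx):
        if not idx:
            return 1
        i, rest = idx[0], idx[1:]
        return sum(A[i][j] * rec(rest[:k] + rest[k + 1:])
                   for k, j in enumerate(rest))
    return rec(tuple(range(len(A))))

# The example above (mat = [1], rpt = [6]) corresponds to the 6x6
# all-ones matrix, whose hafnian is (6 - 1)!! = 15.
print(hafnian_bruteforce(expand_by_rpt([[1]], [6])))  # → 15
```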

Function libwalrus::hafnian_rpt_quad(std::vector<std::complex<double>>&, std::vector<int>&)
Function Documentation
std::complex<double> libwalrus::hafnian_rpt_quad(std::vector<std::complex<double>> &mat, std::vector<int> &rpt)

Returns the hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.

Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.

This is a wrapper around the templated function libwalrus::hafnian_rpt for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.

Return

hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

  • rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.

Function libwalrus::hafnian_rpt_quad(std::vector<double>&, std::vector<int>&)
Function Documentation
double libwalrus::hafnian_rpt_quad(std::vector<double> &mat, std::vector<int> &rpt)

Returns the hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.

Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.

This is a wrapper around the templated function libwalrus::hafnian_rpt for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.

Return

hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

  • rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.

Template Function libwalrus::hermite_multidimensional_cpp
Function Documentation
template<typename T>
T *libwalrus::hermite_multidimensional_cpp(const std::vector<T> &R, const std::vector<T> &y, const int &cutoff)

Returns the multidimensional Hermite polynomials \(H_k^{(R)}(y)\).

This implementation is based on the MATLAB code available at https://github.com/clementsw/gaussian-optics

Parameters
  • R: a flattened vector of size \(n^2\), representing a \(n\times n\) symmetric matrix.

  • y: a flattened vector of size \(n\).

  • cutoff: highest number of photons to be resolved.

Template Function libwalrus::interferometer_cpp
Function Documentation
template<typename T>
T *libwalrus::interferometer_cpp(const std::vector<T> &R, const int &cutoff)

Returns the matrix elements of an interferometer parametrized in terms of its R matrix

Parameters
  • R: a flattened vector of size \(n^2\), representing a \(n\times n\) symmetric matrix.

  • cutoff: highest number of photons to be resolved.

Function libwalrus::lin_to_multi
Function Documentation
std::vector<int> libwalrus::lin_to_multi(unsigned long long int linear_index, const std::vector<int> &maxes)

Converts a linear index to a multi-index. For example, to obtain the multi-index (i,j) of an element of a 2x2 matrix stored at linear index 3, the maxes vector would be {1,1}, and this function would return (1,0).

Return

multi-index corresponding to the linear index

Parameters
  • linear_index: the “flattened” index

  • maxes: a vector of integers representing the maximum value of each index in the multi-index object.

Template Function libwalrus::loop_hafnian
Function Documentation
template<typename T>
T libwalrus::loop_hafnian(std::vector<T> &mat)

Returns the loop hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.

Return

hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Function libwalrus::loop_hafnian_eigen(std::vector<std::complex<double>>&)
Function Documentation
std::complex<double> libwalrus::loop_hafnian_eigen(std::vector<std::complex<double>> &mat)

Returns the loop hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.

This is a wrapper around the templated function libwalrus::loop_hafnian() for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices.

Return

loop hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Function libwalrus::loop_hafnian_eigen(std::vector<double>&)
Function Documentation
double libwalrus::loop_hafnian_eigen(std::vector<double> &mat)

Returns the loop hafnian of a matrix using the algorithm described in A faster hafnian formula for complex matrices and its benchmarking on the Titan supercomputer, arxiv:1805.12498.

This is a wrapper around the templated function loop_hafnian() for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.

Return

loop hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Template Function libwalrus::loop_hafnian_rpt
Function Documentation
template<typename T>
T libwalrus::loop_hafnian_rpt(std::vector<T> &mat, std::vector<T> &mu, std::vector<int> &rpt)

Returns the loop hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.

Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.

Return

loop hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

  • mu: a vector of length \(n\) representing the vector of means/displacement.

  • rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.

Function libwalrus::loop_hafnian_rpt_quad(std::vector<std::complex<double>>&, std::vector<std::complex<double>>&, std::vector<int>&)
Function Documentation
std::complex<double> libwalrus::loop_hafnian_rpt_quad(std::vector<std::complex<double>> &mat, std::vector<std::complex<double>> &mu, std::vector<int> &rpt)

Returns the loop hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.

Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.

This is a wrapper around the templated function libwalrus::loop_hafnian_rpt for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.

Return

loop hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

  • mu: a vector of length \(n\) representing the vector of means/displacement.

  • rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.

Function libwalrus::loop_hafnian_rpt_quad(std::vector<double>&, std::vector<double>&, std::vector<int>&)
Function Documentation
double libwalrus::loop_hafnian_rpt_quad(std::vector<double> &mat, std::vector<double> &mu, std::vector<int> &rpt)

Returns the loop hafnian of a matrix using the algorithm described in From moments of sum to moments of product, doi:10.1016/j.jmva.2007.01.013.

Note that this algorithm, while generally slower than others, can be significantly more efficient in the cases where the matrix has repeated rows and columns.

This is a wrapper around the templated function libwalrus::loop_hafnian_rpt for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.

Return

loop hafnian of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

  • mu: a vector of length \(n\) representing the vector of means/displacement.

  • rpt: a vector of integers, representing the number of times each row/column in mat is repeated. For example, mat = [1] and rpt = [6] represents a \(6\times 6\) matrix of all ones.

Template Function libwalrus::perm_fsum
Function Documentation
template<typename T>
double libwalrus::perm_fsum(std::vector<T> &mat)

Returns the permanent of a matrix using fsum.

Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering.

Return

permanent of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Template Function libwalrus::permanent
Function Documentation
template<typename T>
T libwalrus::permanent(std::vector<T> &mat)

Returns the permanent of a matrix.

Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering.

Return

permanent of the input matrix

Parameters
  • mat: a flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.
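
Ryser's formula itself can be sketched in a few lines (hypothetical helper; this plain version enumerates column subsets directly and omits the Gray code ordering the library uses to update the row sums incrementally):

```python
def permanent_ryser(A):
    """Permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2):
    perm(A) = sum over nonempty column subsets S of
    (-1)^(n - |S|) * prod_i (sum_{j in S} A[i][j])."""
    n = len(A)
    total = 0
    for subset in range(1, 1 << n):  # bitmask over column subsets
        prod = 1
        for i in range(n):
            prod *= sum(A[i][j] for j in range(n) if (subset >> j) & 1)
        total += (-1) ** (n - bin(subset).count("1")) * prod
    return total
```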

Function libwalrus::permanent_fsum
Function Documentation
double libwalrus::permanent_fsum(std::vector<double> &mat)

Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering, with increased accuracy via the fsum summation algorithm

This is a wrapper around the templated function libwalrus::perm_fsum for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.

Return

the permanent

Parameters
  • mat: vector representing the flattened matrix

Function libwalrus::permanent_quad(std::vector<std::complex<double>>&)
Function Documentation
std::complex<double> libwalrus::permanent_quad(std::vector<std::complex<double>> &mat)

Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering

This is a wrapper around the templated function libwalrus::permanent for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.

Return

the permanent

Parameters
  • mat: vector representing the flattened matrix

Function libwalrus::permanent_quad(std::vector<double>&)
Function Documentation
double libwalrus::permanent_quad(std::vector<double> &mat)

Returns the permanent of a matrix using Ryser’s algorithm with Gray code ordering

This is a wrapper around the templated function libwalrus::permanent for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.

Return

the permanent

Parameters
  • mat: vector representing the flattened matrix

Function libwalrus::powtrace(vec_complex&, int, int)
Function Documentation
vec_complex libwalrus::powtrace(vec_complex &z, int n, int l)

Given a complex matrix \(z\) of dimensions \(n\times n\), it calculates \(Tr(z^j)~\forall~1\leq j\leq l\).

Return

a vector containing the power traces of matrix z to power \(1\leq j \leq l\).

Parameters
  • z: a flattened complex vector of size \(n^2\), representing an \(n\times n\) row-ordered matrix.

  • n: size of the matrix z.

  • l: maximum matrix power when calculating the power trace.
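
The quantity being computed can be sketched via repeated matrix multiplication (a hypothetical naive helper, not the library's implementation):

```python
def powtraces(z, l):
    """Return [Tr(z), Tr(z^2), ..., Tr(z^l)] for a square matrix z
    given as a list of rows."""
    n = len(z)

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]

    traces, power = [], z
    for _ in range(l):
        traces.append(sum(power[i][i] for i in range(n)))
        power = matmul(power, z)
    return traces
```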

Function libwalrus::powtrace(vec_double&, int, int)
Function Documentation
vec_double libwalrus::powtrace(vec_double &z, int n, int l)

Given a real matrix \(z\) of dimensions \(n\times n\), it calculates \(Tr(z^j)~\forall~1\leq j\leq l\).

Return

a vector containing the power traces of matrix z to power \(1\leq j \leq l\).

Parameters
  • z: a flattened real vector of size \(n^2\), representing an \(n\times n\) row-ordered matrix.

  • n: size of the matrix z.

  • l: maximum matrix power when calculating the power trace.

Template Function libwalrus::recursive_chunk
Function Documentation
template<typename T>
T libwalrus::recursive_chunk(std::vector<T> b, int s, int w, std::vector<T> g, int n)

Recursive hafnian solver.

Modified with permission from https://github.com/eklotek/Hafnian.

This function uses OpenMP (if available) to parallelize the reduction.

Return

the hafnian

Parameters
  • b:

  • s:

  • w:

  • g:

  • n:

Template Function libwalrus::renorm_hermite_multidimensional_cpp
Function Documentation
template<typename T>
T *libwalrus::renorm_hermite_multidimensional_cpp(const std::vector<T> &R, const std::vector<T> &y, const int &cutoff)

Returns the normalized multidimensional Hermite polynomials \(\tilde{H}_k^{(R)}(y)\).

This implementation is based on the MATLAB code available at https://github.com/clementsw/gaussian-optics

Parameters
  • R: a flattened vector of size \(n^2\), representing a \(n\times n\) symmetric matrix.

  • y: a flattened vector of size \(n\).

  • cutoff: highest number of photons to be resolved.

Function libwalrus::sum
Function Documentation
char libwalrus::sum(char *dst, Byte m)

Partial sum of a character array

Return

the partial sum

Parameters
  • dst: character array

  • m: sum the first m characters

Template Function libwalrus::torontonian
Function Documentation
template<typename T>
T libwalrus::torontonian(std::vector<T> &mat)

Computes the Torontonian of an input matrix.

If the output is NaN, that means that the input matrix does not have a Torontonian with physical meaning.

This function uses OpenMP (if available) to parallelize the reduction.

Return

Torontonian of the input matrix

Parameters
  • mat: flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Template Function libwalrus::torontonian_fsum
Function Documentation
template<typename T>
double libwalrus::torontonian_fsum(std::vector<T> &mat)

Computes the Torontonian of an input matrix using the Shewchuk algorithm, a significantly more accurate summation algorithm.

Note that the fsum implementation currently only allows for double precision, and precludes use of OpenMP parallelization.

Note: if the output is NaN, that means that the input matrix does not have a Torontonian with physical meaning.

Return

Torontonian of the input matrix

Parameters
  • mat: flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Function libwalrus::torontonian_quad(std::vector<std::complex<double>>&)
Function Documentation
std::complex<double> libwalrus::torontonian_quad(std::vector<std::complex<double>> &mat)

Computes the Torontonian of an input matrix.

If the output is NaN, that means that the input matrix does not have a Torontonian with physical meaning.

This is a wrapper around the templated function libwalrus::torontonian for Python integration. It accepts and returns complex double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type complex<long double>, allowing for greater precision than supported by Python and NumPy.

Return

Torontonian of the input matrix

Parameters
  • mat: flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Function libwalrus::torontonian_quad(std::vector<double>&)
Function Documentation
double libwalrus::torontonian_quad(std::vector<double> &mat)

Computes the Torontonian of an input matrix.

If the output is NaN, that means that the input matrix does not have a Torontonian with physical meaning.

This is a wrapper around the templated function libwalrus::torontonian for Python integration. It accepts and returns double numeric types, and returns sensible values for empty and non-even matrices.

In addition, this wrapper function automatically casts all matrices to type long double, allowing for greater precision than supported by Python and NumPy.

Return

Torontonian of the input matrix

Parameters
  • mat: flattened vector of size \(n^2\), representing an \(n\times n\) row-ordered symmetric matrix.

Function update_iterator
Function Documentation
int update_iterator(std::vector<int> &nextPos, std::vector<int> &jumpFrom, int &jump, const int &cutoff, const int &dim)

Updates the iterators needed for the calculation of the multidimensional Hermite polynomials.

These iterators track the indices that determine which elements of the input vector y and matrix R are needed at each step.

Parameters
  • nextPos: a vector of integers

  • jumpFrom: a vector of integers

  • jump: integer specifying whether to jump to the next index

  • cutoff: integer specifying the cutoff dimension of the R matrix

  • dim: integer specifying the dimension of the multidimensional tensor

Function vec2index
Function Documentation
ullint vec2index(std::vector<int> &pos, int cutoff)

Returns the index in the one-dimensional flattened vector corresponding to a multi-index of the multidimensional tensor

Return

index on flattened vector

Parameters
  • pos: vector of integers specifying the multi-index of the tensor element

  • cutoff: integer specifying the cutoff dimension along each tensor index

Defines
Define HAFNIAN_VERSION_CODE
Define Documentation
HAFNIAN_VERSION_CODE

The complete version number.

Define HAFNIAN_VERSION_MAJOR
Define Documentation
HAFNIAN_VERSION_MAJOR

The major version number.

Define HAFNIAN_VERSION_MINOR
Define Documentation
HAFNIAN_VERSION_MINOR

The minor version number.

Define HAFNIAN_VERSION_PATCH
Define Documentation
HAFNIAN_VERSION_PATCH

The patch number.

Define HAFNIAN_VERSION_STRING
Define Documentation
HAFNIAN_VERSION_STRING

Version number as string.

Define SC_STACK
Define Documentation
SC_STACK
Typedefs
Typedef Byte
Typedef Documentation
typedef unsigned char Byte
Typedef double_complex
Typedef Documentation
typedef std::complex<double> double_complex
Typedef llint
Typedef Documentation
typedef long long int llint
Typedef qp
Typedef Documentation
typedef long double qp
Typedef ullint
Typedef Documentation
typedef unsigned long long int ullint
Typedef vec_complex
Typedef Documentation
typedef std::vector<double_complex> vec_complex
Typedef vec_double
Typedef Documentation
typedef std::vector<double> vec_double

Contents

Getting started

Background

The Walrus API