Beyond classical (but is it quantum?)

The accepted version of “Interpreting weak value amplification with a toy realist model” is now available online. The work was an interesting exercise involving two topics in the foundations of physics: Weak values and stochastic electrodynamics. Basically we showed that a recent quantum optics experiment from our lab could be re-interpreted with a slightly modified version of classical electrodynamics.

The aim of the work was to develop some intuition on what weak values could mean in a realist model (i.e. one where the mathematical objects represent real “stuff”, like an electromagnetic field). To do this we added a fluctuating vacuum to classical electrodynamics. This framework (at least at the level we developed it) does not capture all quantum phenomena, and so it is not to be taken seriously beyond the regime of the paper, but it did allow us to examine some neat features of weak values and the experiment.

One interesting result was identifying the regime where the model reproduces the theoretical predictions (all the experimental results fall within this regime), which provided insight into weak values. Specifically, the model works only when the fluctuations are relatively small. Going back to the real world, this provides new intuition about the relation between weak values and the weak measurement process.

The experiment we looked at was done in Aephraim Steinberg's light-matter interaction lab by Matin Hallaji et al. a few months before I joined the group. It was a 'typical' weak value amplification experiment with a twist. In an idealized version, a single photon should be sent into an almost balanced Mach-Zehnder interferometer with a weak photon-counting (or intensity) measurement apparatus on one arm (see figure below). The measurement is weak in the sense that it is almost non-disturbing, but also very imprecise, so the experiment must be repeated many times to get a good result. So far nothing surprising can happen, and we expect to get a result of 1/2 photon (on average) going through the top arm of the interferometer. To get the weak value amplification effect, we post-select only on (those rare) events where a photon is detected at the dark port of the interferometer. In such a case, the mean count on the detector can correspond to an arbitrarily large (or small, or even negative) number of photons.
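In the idealized single-photon picture, the amplification is captured by the standard weak value formula A_w = \langle\phi|A|\psi\rangle / \langle\phi|\psi\rangle, with the pre-selection fixed by the first beamsplitter and the post-selection given by the almost-dark port. Here is a minimal numpy sketch of that formula; the states and the offset delta are illustrative stand-ins, not the parameters of the actual experiment.

```python
import numpy as np

# Mode basis: [top arm, bottom arm] after the first 50/50 beamsplitter.
psi = np.array([1.0, 1.0]) / np.sqrt(2)          # pre-selected state

# Post-selection on an almost-dark port: delta = 0 would be perfectly dark
# (zero overlap); a small nonzero delta makes the post-selection rare but possible.
delta = 0.01
phi = np.array([np.cos(np.pi / 4 + delta), -np.sin(np.pi / 4 + delta)])

N_top = np.diag([1.0, 0.0])                      # photon number (projector) on the top arm

weak_value = (phi @ N_top @ psi) / (phi @ psi)
print(weak_value)   # ~ -1/(2*delta) = -50: far outside the eigenvalue range [0, 1]
```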

The twist in the experiment was to use a second light beam for the photon-counting measurement. This was the first time this type of weak measurement was done this way, and it is of particular importance since many of the previous experiments could be explained using standard electrodynamics. In this case, the photon-photon interaction is a purely quantum effect.

In reality the experiment described above was not feasible due to various imperfections that could not be avoided, so a compromise was made. Instead of sending a single photon, they used a coherent beam with around 10-100 photons. To get the amplification effect on a single photon, they used a trick that would increase the number of photons by 1 and only looked for the change in the signal due to the extra photon. The results showed the same amplification as in the ideal case. However, there remained (and still remains) a question of what that amplification means. Can we really talk about additional photons appearing in the experiment?

As we showed, the results of the experiment can be explained (or at least reproduced) in a model which is more intuitive than quantum theory. Within that model it is clear that the amplification is real, i.e. the post-selected events correspond to cases where more light traveled through the top arm of the interferometer.

The model was based on classical electrodynamics with a slight modification: we assumed that the fields fluctuate like the quantum vacuum. This turned out to be sufficient to get the same predictions as quantum theory in the regime of the experiment. However, we showed that this model would not work if the intensity of the incoming light was sufficiently small, and in particular it would not work for something like a single photon.
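To give a flavour of what such a model looks like in practice, here is a toy Monte-Carlo sketch in the spirit of the model (it is not the calculation from the paper, and all parameters are made up): a classical field plus vacuum-scale Gaussian fluctuations is propagated through an almost-balanced interferometer, and we condition on the rare trials where the nominally dark port is bright. Comparing the post-selected top-arm intensity with the unconditional one (and with the weak value prediction) is exactly the kind of question the model lets us ask.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200_000

alpha = np.sqrt(50)              # mean input amplitude (~50 photons' worth of intensity)
sigma = np.sqrt(0.5)             # scale of the added "vacuum" fluctuations (convention dependent)

def fluct(n):
    # complex Gaussian fluctuations added to every input mode
    return sigma * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)

a_in = alpha + fluct(n_trials)   # signal port of the first beamsplitter
v_in = fluct(n_trials)           # empty ("vacuum") port of the first beamsplitter

# 50/50 beamsplitter, then a small phase imbalance in the bottom arm
a_top = (a_in + v_in) / np.sqrt(2)
a_bot = (a_in - v_in) / np.sqrt(2) * np.exp(1j * 0.02)

# recombining beamsplitter: this output would be perfectly dark for a balanced device
dark = (a_top - a_bot) / np.sqrt(2)

# post-select the rare trials where the dark port is unusually bright
keep = np.abs(dark) ** 2 > np.quantile(np.abs(dark) ** 2, 0.99)

print("unconditional top-arm intensity: ", np.mean(np.abs(a_top) ** 2))
print("post-selected top-arm intensity:", np.mean(np.abs(a_top[keep]) ** 2))
```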

Our model has a clear 'reality', i.e. the real field is a fluctuating classical EM field, and so it provides a nice start for a more general theory that has weak values as its underlying real quantities. One important feature of the model is the regime where it makes accurate predictions. It turns out that there are two bounds on the regime of validity. The first is a requirement that the light is coherent and not too weak; this roughly corresponds to being in a semi-classical regime. The second is that the probability of detecting photons at the dark port is not significantly affected by the fluctuations, which is similar to the standard weak measurement requirement, i.e. that the measurement back-action is small.

My main takeaway from the work was that the weak values ended up being the most accurate experimentally accessible quantity for measuring the underlying field. Making a leap into quantum theory, we might say that weak measurements give a more accurate description of reality than the usual strong measurement.

The result also set a new challenge for us: Can we repeat the experiment in the regime where the model breaks down (e.g. with single photons)?


Bell tests

The loophole-free Bell experiments are among the top achievements in quantum information science over the last few years. However, as with other recent experimental validations of a well-accepted theory, the results did not change our view of reality. The few skeptics remained unconvinced, while the majority received further confirmation of a theory we already accepted. It turns out that this was not the case with the first Bell tests in the 1970s and 1980s (Clauser, Aspect, etc.).

Jaynes, a prominent 20th century physicist who did some important work on light-matter interaction, did not believe that the electromagnetic field needs to be quantized (until Clauser's experiment) and did extensive work on explaining optical phenomena without photons. As part of our recent work on modeling a quantum optics experiment using a modified version of classical electrodynamics (and no `photons'), we had a look at Jaynes's last review of his neo-classical theory (1973). This work was incredibly impressive and fairly successful, but it was clear (to him at least) that it could not survive a violation of Bell's inequalities. Jaynes's review was written at the same time as the first Bell test experiments were reported by Clauser. In a show of extraordinary scientific honesty he wrote:

If it [Clauser’s experiment] survives that scrutiny, and if the experimental result is confirmed by others, then this will surely go down as one of the most incredible intellectual achievements in the history of science, and my own work will lie in ruins.

Updates

Some updates from the past 10 months…

  1. Papers published
  2. Preprint: Weak values and neoclassical realism
    My first paper with the Steinberg group is hardcore foundations. Not only do we use the word ontology throughout the manuscript, we also analyse an experiment using a theory which we know is not physical. Still, we get some nice insights about weak values.

Going integrated

As a quantum information theorist, the cleanest types of results I can get are proofs that something is possible, impossible or optimal. Much of my work focused on these types of results in the context of measurements and non-locality. As a physicist, it is always nice to bring these conceptual ideas closer to the lab, so I try to collaborate with experimentalists. The types of problems an experimental group can work on are constrained by their technical capabilities. Amr Helmy's group specializes in integrated photonic sources, so I now know something about integrated optics.

When I started learning about the possibilities and constraints in the group, I realized that the types of devices they can fabricate are much better suited for work in continuous variables as opposed to single photons. I also realized that no one had explored the limitations of these types of devices. In other words, we did not know the subset of states that we could generate in principle (in an ideal device).

In trying to answer this question we figured out that, with our capabilities, it is in principle (i.e. in the absence of loss) possible to fabricate a device that can generate any Gaussian state (up to some limitations on squeezing and displacement). What turns out to be even nicer is that we could have a single device that can be programmed to generate any N-mode Gaussian state. The basic design for this device was recently posted on arXiv.
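The mathematical backbone of such a device is the standard Bloch-Messiah decomposition: any pure N-mode Gaussian state can be prepared by squeezing the vacuum mode-by-mode, mixing the modes in a passive interferometer, and then displacing. Here is a short numpy sketch of that structure; the conventions and the specific architecture in the paper may differ, and the two-mode example at the end is purely illustrative.

```python
import numpy as np

def interferometer_symplectic(U):
    """Symplectic (orthogonal) matrix of a passive interferometer with mode unitary U,
    in xxpp quadrature ordering."""
    return np.block([[U.real, -U.imag], [U.imag, U.real]])

def squeezer_symplectic(r):
    """Independent single-mode squeezers with parameters r (xxpp ordering)."""
    r = np.asarray(r, dtype=float)
    return np.diag(np.concatenate([np.exp(-r), np.exp(r)]))

def gaussian_state(U, r, d, hbar=2.0):
    """Covariance matrix and mean of the pure N-mode Gaussian state obtained by
    squeezing the vacuum, mixing in an interferometer U, then displacing by d."""
    S = interferometer_symplectic(U) @ squeezer_symplectic(r)
    V = (hbar / 2) * S @ S.T          # vacuum covariance is (hbar/2) * identity
    return V, np.asarray(d, dtype=float)

# toy example: two modes, r = 1 squeezing on each, mixed on a 50/50 coupler, one displaced
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
V, mean = gaussian_state(U, r=[1.0, 1.0], d=[0.5, 0.0, 0.0, 0.0])
print(V)
```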

We left the results fairly generic so that they could be applied to a variety of integrated devices, using various semiconductors. The next step would be to apply them to something more specific and start accounting for loss and other imperfections. Once we figure that out, we (i.e. the fab guys) can go on to building an actual device that could be tested in the lab.

Dynamically Reconfigurable Sources for Arbitrary Gaussian States in Integrated Photonics Circuits, A. Brodutch, Ryan Marchildon, and Amr Helmy, arXiv:1712.04105

Tomaytos, Tomahtos and Non-local Measurements

In the interest of keeping this blog active, I'm recycling one of my old IQC blog posts.

One of my discoveries as a physicist was that, despite all attempts at clarity, we still have different meanings for the same words and use different words to refer to the same thing. When Alice says measurement, Bob hears a `quantum to classical channel', but Alice, a hard-core Everettian, does not even believe such channels exist. When Charlie says non-local, he means Bell non-local, but string theorist Dan starts lecturing him about non-local Lagrangian terms and violations of causality. And when I say non-local measurements, you hear #$%^ ?e#&*?. Let me give you a hint: I do not mean `Bell non-local quantum to classical channels'; to be honest, I am not even sure what that would mean.

So what do I mean when I say measurement? A measurement is a quantum operation that takes a quantum state as its input and spits out a quantum state and a classical result as an output (no, I am not an Everettian). For simplicity I will concentrate on a special case of this operation, a projective measurement of an observable A. The classical result of a projective measurement is an eigenvalue of A, but what is the outgoing state?


A textbook (projective) measurement. A quantum state |\psi\rangle goes in and a classical outcome “r” comes out together with a corresponding quantum state |\psi_r\rangle.

The Lüders measurement

Even the term projective measurement can lead to confusion, and indeed in the early days of quantum mechanics it did. When von Neumann wrote down the mathematical formalism for quantum measurements, he missed an important detail about degenerate observables (i.e. Hermitian operators with a degenerate eigenvalue spectrum). In the usual projective measurement, the state of the system after the measurement is uniquely determined by the classical result (an eigenvalue of the observable). Consequently, if we don't look at the classical result, the quantum channel is a standard dephasing channel. In the case of a degenerate observable, the same eigenvalue corresponds to two or more orthogonal eigenstates. Seemingly, the state of the system should collapse to one of those eigenstates, and the channel would again be a standard dephasing channel. But a degenerate spectrum means that the set of orthogonal eigenvectors is not unique; instead, each eigenvalue has a corresponding subspace of eigenvectors. What Lüders suggested is that the dephasing channel does nothing within these subspaces.

 

Example

Consider the two qubit observable A=|00\rangle\langle 00 |. It has eigenvalues 1, 0, 0, 0. A 1 result in this measurement corresponds to “The system is in the state |00\rangle ”. Following a measurement with outcome 1, the outgoing state will be |00\rangle . Similarly, a 0 result corresponds to “The system is not in the state |00\rangle ”. But here is where the Lüders rule kicks in. Given a generic input state \alpha|00\rangle+\beta|01\rangle+\gamma|10\rangle+\delta|11\rangle and a Lüders measurement of A with outcome 0, the outgoing state will be \frac{1}{\sqrt{|\beta|^2+|\gamma|^2+|\delta|^2}}\left[\beta|{01}\rangle+\gamma|{10}\rangle+\delta|{11}\rangle\right].
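In code the Lüders rule is literally "apply the projector onto the outcome's subspace and renormalize". A quick numpy check of the example above (the amplitudes are arbitrary placeholders):

```python
import numpy as np

# computational basis order: |00>, |01>, |10>, |11>
alpha, beta, gamma, delta = 0.5, 0.5, 0.5, 0.5     # any normalized amplitudes will do
psi = np.array([alpha, beta, gamma, delta], dtype=complex)

P1 = np.diag([1.0, 0.0, 0.0, 0.0])   # projector onto |00>           (outcome 1)
P0 = np.eye(4) - P1                  # projector onto its complement (outcome 0)

prob0 = np.vdot(psi, P0 @ psi).real  # Born rule probability of outcome 0
post = P0 @ psi / np.sqrt(prob0)     # Lüders rule: project and renormalize
print(prob0, post)                   # post = (0, beta, gamma, delta) / sqrt(|beta|^2+|gamma|^2+|delta|^2)
```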

 

Non-local measurements

The relation to non-locality may already be apparent from the example, but let me start with some definitions. A system can be called non-local if it has parts in different locations, e.g. one part on Earth and the other on the Moon. A measurement is non-local if it reveals something about a non-local system as a whole. In principle these definitions apply to classical and quantum systems. Classically a non-local measurement is trivial: there is no conceptual reason why we can't just measure at each location. For a quantum system the situation is different. Let us use the example above, but now consider the situation where the two qubits are in separate locations. Local measurements of \sigma_z will produce the desired measurement statistics (after coarse graining) but reveal too much information and dephase the state completely, while a Lüders measurement should not. What is quite neat about this example is that the Lüders measurement of |00\rangle\langle 00| cannot be implemented without entanglement (or quantum communication) resources and two-way classical communication. To prove that entanglement is necessary, it is enough to give an example where entanglement is created during the measurement. To show that communication is necessary, it is enough to show that the measurement (even if the outcome is unknown) can be used to transmit information. The detailed proof is left as an exercise to the reader. The lazy reader can find it here (see appendix A).
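To see the "entanglement is created" point concretely, here is a small numpy check (in the spirit of the argument, not the proof in the appendix): feed a product state into the Lüders measurement of A=|00\rangle\langle 00 | and look at the outcome-0 state. For a pure two-qubit state, a mixed reduced state certifies entanglement between the two (possibly distant) qubits.

```python
import numpy as np

# product input state: (|0> + |1>)/sqrt(2) on each qubit, basis order |00>, |01>, |10>, |11>
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi = np.kron(plus, plus)

P0 = np.eye(4) - np.diag([1.0, 0.0, 0.0, 0.0])   # "not |00>" outcome of the Lüders measurement
post = P0 @ psi
post = post / np.linalg.norm(post)

rho = np.outer(post, post.conj())
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # reduced state of the first qubit
print(np.trace(rho_A @ rho_A).real)   # purity 7/9 < 1: the outcome-0 state is entangled
```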

This is a slightly modified version of a Feb 2016 IQC blog post.

Three papers published

When it rains it pours. I had three papers published in the last week. One experimental paper and two papers about entanglement.

  1. Experimental violation of the Leggett–Garg inequality in a three-level system. A cool experimental project with IQC's liquid state NMR group. Check out the outreach article about this experiment.
  2. Extrapolated quantum states, void states and a huge novel class of distillable entangled states. My first collaboration with Tal Mor and Michel Boyer and my first paper to appear in a bona fide CS journal (although the content is really mathematical physics). It took about 18 months to get the first referee reports.
  3. Entanglement and deterministic quantum computing with one qubit. This is a follow-up to the paper above, although it appeared on arXiv a few months earlier.

Towards quantum supremacy

 Quantum phenomena do not occur in a Hilbert space. They occur in a laboratory.

Asher Peres

Being a theorist, it is easy to forget that physics is an empirical science. This is especially true for those of us working on quantum information. Quantum theory has been so thoroughly tested that we have gotten into the habit of assuming our theoretical predictions must correspond to physical reality. If an experiment deviates from the theory, we look for technical flaws (and usually find them) before seeking an explanation outside the standard theory. Luckily, we have experimentalists who insist on testing our predictions.

Quantum computers are an extreme prediction of quantum theory. Those of us who expect to see working quantum computers at some point in the future expect the theory to hold for fairly large systems undergoing complex dynamics. This is a reasonable expectation but it is not trivial. Our only way to convince ourselves that quantum theory holds at fairly large scales is through experiment. Conversely, the most reasonable way to convince ourselves that the theory breaks down at some scale is through experiment. Either way, the consequences are immense: either we build quantum computers or we make the most significant scientific discovery in decades.

Unfortunately, building quantum computers is very difficult.

There are many different routes towards quantum computers. The long and difficult roads are those geared towards universal quantum computers, i.e. those that are at least as powerful as any other quantum computer. The (hopefully) shorter and less difficult roads are those aimed at specialized (or semi- or sub-universal) quantum computers. These should outperform classical computers for some specialized tasks and allow a demonstration of quantum supremacy: empirical evidence that quantum mechanics does not break down at a fairly high level of complexity.

One of the difficulties in building quantum computers is optimizing the control sequences. In many cases we end up dealing with a catch-22: in order to optimize the sequence we need to simulate the system; in order to simulate the system we need a quantum computer; in order to build a quantum computer we need to optimize the control sequences…

Recently Jun Li and collaborators found a loophole. The optimization algorithm requires a simulation of the quantum system under the imperfect pulses. This type of simulation can be done efficiently on the same quantum processor: we can generate the imperfect pulse `perfectly' on our processor, and the processor can obviously simulate itself. In fact, the task of optimizing pulses seems like a perfect candidate for demonstrating quantum supremacy.
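To make the catch-22 concrete, here is a schematic sketch of the kind of closed-loop pulse optimization involved; it is a generic finite-difference optimizer on a toy single-qubit Hamiltonian, not the algorithm from the paper. The expensive step is evaluating the fidelity of a candidate pulse, which classically means simulating the system; the bootstrapping idea is to obtain that number from the processor itself instead.

```python
import numpy as np

def simulated_fidelity(pulses, H_drift, H_ctrl, U_target, dt=0.1):
    """Classically simulated gate fidelity of a piecewise-constant pulse sequence.
    This is the step whose cost blows up with the number of qubits; in the
    bootstrapped scheme this number would instead be measured on the processor."""
    U = np.eye(H_drift.shape[0], dtype=complex)
    for amp in pulses:
        H = H_drift + amp * H_ctrl
        w, V = np.linalg.eigh(H)                      # H is Hermitian
        U = (V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T) @ U
    d = U.shape[0]
    return abs(np.trace(U_target.conj().T @ U) / d) ** 2

def optimize(pulses, fidelity, lr=0.5, eps=1e-3, steps=200):
    """Naive finite-difference gradient ascent on the pulse amplitudes."""
    pulses = np.array(pulses, dtype=float)
    for _ in range(steps):
        base = fidelity(pulses)
        grad = np.zeros_like(pulses)
        for k in range(len(pulses)):
            shifted = pulses.copy()
            shifted[k] += eps
            grad[k] = (fidelity(shifted) - base) / eps
        pulses += lr * grad
    return pulses

# toy example: shape a resonant drive into a NOT gate in the presence of a small detuning
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
fid = lambda p: simulated_fidelity(p, H_drift=0.1 * sz, H_ctrl=sx, U_target=sx)
best = optimize(np.full(10, 0.5), fid)
print(fid(best))   # should climb close to 1
```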

I was lucky to be in the right place at the right time and be part of the group that implemented this idea on a 12-qubit processor. We showed that at the 12-qubit level, this method can outperform a fairly standard computer. It is not a demonstration of quantum supremacy yet, but it seems like a promising road towards this task. It is also a promising way to optimize control pulses.

As a theorist, I cannot see a good reason why quantum computers will not be a reality, but it is always nice to know that physical reality matches my expectations at least at the 12-qubit level.

P.S – A similar paper appeared on arXiv a few days after ours.

  1. Towards quantum supremacy: enhancing quantum control by bootstrapping a quantum processor – arXiv:1701.01198
  2. In situ upgrade of quantum simulators to universal computers – arXiv:1701.01723
  3. Realization of a Quantum Simulator Based Oracle Machine for Solving Quantum Optimal Control Problem – arXiv:1608.00677