When I started learning about the possibilities and constraints in the group, I realized that the types of devices they can fabricate are much better suited for work in continuous variables as opposed to single photons. I also realized that no one had explored the limitations of these types of devices. In other words, we did not know the subset of states that we could generate, in principle, with an ideal device.

In trying to answer this question, we found that, with our capabilities, it is in principle (i.e. in the absence of loss) possible to fabricate a device that can generate any Gaussian state (up to some limitations on squeezing and displacement). What turns out to be even nicer is that a single device can be programmed to generate any N-mode Gaussian state. The basic design for this device was recently posted on arXiv.

We left the results fairly generic so that they could be applied to a variety of integrated devices, using various semiconductors. The next step would be to apply them to something more specific and start accounting for loss and other imperfections. Once we figure that out, we (i.e. the fab guys) can go on to building an actual device that could be tested in the lab.

One of my discoveries as a physicist was that, despite all attempts at clarity, we still have different meanings for the same words and use different words to refer to the same thing. When Alice says measurement, Bob hears a ‘quantum to classical channel’, but Alice, a hard-core Everettian, does not even believe such channels exist. When Charlie says non-local, he means Bell non-local, but string theorist Dan starts lecturing him about non-local Lagrangian terms and violations of causality. And when I say non-local measurements, you hear #$%^ ?e#&*?. Let me give you a hint: I do not mean ‘Bell non-local quantum to classical channels’; to be honest, I am not even sure what that would mean.

So what do I mean when I say measurement? A measurement is a quantum operation that takes a quantum state as its input and spits out a quantum state **and** a classical result as an output (no, I am not an Everettian). For simplicity I will concentrate on a special case of this operation, a projective measurement of an observable *A*. The classical result of a projective measurement is an eigenvalue of *A*, but what is the outgoing state?

Even the term projective measurement can lead to confusion, and indeed in the early days of quantum mechanics it did. When von Neumann wrote down the mathematical formalism for quantum measurements, he missed an important detail about degenerate observables (i.e. Hermitian operators with a degenerate eigenvalue spectrum). In the usual projective measurement, the state of the system after the measurement is uniquely determined by the classical result (an eigenvalue of the observable); consequently, if we don’t look at the classical result, the quantum channel is a standard dephasing channel. In the case of a degenerate observable, the same eigenvalue corresponds to two or more orthogonal eigenstates. Seemingly, the post-measurement state should still be one of those eigenstates, making the channel a standard dephasing channel. But a degenerate spectrum means that the set of orthogonal eigenvectors is not unique; instead, each eigenvalue has a corresponding subspace of eigenvectors. What Lüders suggested is that the measurement should do nothing within these subspaces.

Consider the two-qubit observable A = |11⟩⟨11|. It has the eigenvalues 1 and 0, with 0 three-fold degenerate. A result 1 in this measurement corresponds to “The system is in the state |11⟩“. Following a measurement with outcome 1, the outgoing state will be |11⟩. Similarly, a result 0 corresponds to “The system is not in the state |11⟩“. But here is where the Lüders rule kicks in. Given a generic input state α|00⟩ + β|01⟩ + γ|10⟩ + δ|11⟩ and a Lüders measurement of A with outcome 0, the outgoing state will be (α|00⟩ + β|01⟩ + γ|10⟩)/√(1 − |δ|²).
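As a minimal numerical sketch (my own, not from the original post), here is the Lüders update for the degenerate two-qubit projector |11⟩⟨11|: the outcome-0 projection leaves the relative amplitudes inside the degenerate subspace untouched and only renormalizes.

```python
import numpy as np

# Computational basis for two qubits: |00>, |01>, |10>, |11>
# The observable |11><11| has eigenvalue 1 (non-degenerate)
# and eigenvalue 0 (three-fold degenerate).
P1 = np.zeros((4, 4)); P1[3, 3] = 1.0   # projector onto |11>
P0 = np.eye(4) - P1                     # projector onto the eigenvalue-0 subspace

def luders_update(psi, projector):
    """Post-measurement state under the Lueders rule: project and renormalize."""
    out = projector @ psi
    return out / np.linalg.norm(out)

# A generic input state a|00> + b|01> + c|10> + d|11>
psi = np.array([0.5, 0.5, 0.5, 0.5])

# Outcome 0 ("the system is not in |11>"): the measurement does nothing
# within the degenerate subspace spanned by |00>, |01>, |10>.
post = luders_update(psi, P0)
print(np.round(post, 4))  # -> [0.5774 0.5774 0.5774 0.    ]
```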

The relation to non-locality may already be apparent from the example, but let me start with some definitions. A system can be called non-local if it has parts in different locations, e.g. one part on Earth and the other on the Moon. A measurement is non-local if it reveals something about a non-local system as a whole. In principle these definitions apply to both classical and quantum systems. Classically, a non-local measurement is trivial; there is no conceptual reason why we can’t just measure at each location. For a quantum system the situation is different. Let us use the example above, but now consider the situation where the two qubits are in separate locations. Local measurements of the individual qubits will produce the desired measurement statistics (after coarse graining), but they reveal too much information and dephase the state completely, while a Lüders measurement should not. What is quite neat about this example is that the Lüders measurement of A cannot be implemented without entanglement (or quantum communication) resources and two-way classical communication. To prove that entanglement is necessary, it is enough to give an example where entanglement is created during the measurement. To show that communication is necessary, it is enough to show that the measurement (even if the outcome is unknown) can be used to transmit information. The detailed proof is left as an exercise to the reader. The lazy reader can find it here (see appendix A).
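The entanglement-creation part can be checked directly; a quick sketch (mine, with the projector |11⟩⟨11| as the measured observable): starting from an unentangled product state, the outcome-0 Lüders update produces a state with two nonzero Schmidt coefficients, i.e. an entangled state.

```python
import numpy as np

P1 = np.zeros((4, 4)); P1[3, 3] = 1.0   # projector onto |11>
P0 = np.eye(4) - P1

# Product (unentangled) input: |+>|+> = (|00> + |01> + |10> + |11>)/2
psi = np.full(4, 0.5)

# Lueders measurement of |11><11| with outcome 0:
# the post-measurement state is (|00> + |01> + |10>)/sqrt(3)
post = P0 @ psi
post /= np.linalg.norm(post)

# Schmidt coefficients = singular values of the 2x2 coefficient matrix.
# More than one nonzero coefficient means the state is entangled.
s = np.linalg.svd(post.reshape(2, 2), compute_uv=False)
print(np.round(s, 4))
```

So the measurement itself generates entanglement between the two distant qubits, which is why it cannot be done with local operations alone.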

*This is a slightly modified version of a Feb 2016 IQC blog post.*

- Experimental violation of the Leggett–Garg inequality in a three-level system. A cool experimental project with IQC’s liquid state NMR group. Check out the outreach article about this experiment.
- Extrapolated quantum states, void states and a huge novel class of distillable entangled states. My first collaboration with Tal Mor and Michel Boyer and my first paper to appear in a bona fide CS journal (although the content is really mathematical physics). It took about 18 months to get the first referee reports.
- Entanglement and deterministic quantum computing with one qubit. This is a follow-up to the paper above, although it appeared on arXiv a few months earlier.

Quantum phenomena do not occur in a Hilbert space. They occur in a laboratory.

Asher Peres

Being a theorist, it is easy to forget that physics is an empirical science. This is especially true for those of us working on quantum information. Quantum theory has been so thoroughly tested that we have gotten into the habit of assuming our theoretical predictions must correspond to physical reality. If an experiment deviates from the theory, we look for technical flaws (and usually find them) before seeking an explanation outside the standard theory. Luckily, we have experimentalists who insist on testing our predictions.

Quantum computers are an extreme prediction of quantum theory. Those of us who expect to see working quantum computers at some point in the future expect the theory to hold for fairly large systems undergoing complex dynamics. This is a reasonable expectation, but it is not trivial. Our only way to convince ourselves that quantum theory holds at fairly large scales is through experiment. Conversely, the most reasonable way to convince ourselves that the theory breaks down at some scale is through experiment. Either way, the consequences are immense: either we build quantum computers, or we make the most significant scientific discovery in decades.

Unfortunately, building quantum computers is very difficult.

There are many different routes towards quantum computers. The long and difficult roads are those leading towards universal quantum computers, i.e. those that are at least as powerful as any other quantum computer. The (hopefully) shorter and less difficult roads are those aimed at specialized (or semi- or sub-universal) quantum computers. These should outperform classical computers at some specialized tasks and allow a demonstration of quantum supremacy: empirical evidence that quantum mechanics does not break down at a fairly high level of complexity.

One of the difficulties in building quantum computers is optimizing the control sequences. In many cases we end up dealing with a catch-22: in order to optimize the sequence we need to simulate the system; in order to simulate the system we need a quantum computer; in order to build a quantum computer we need to optimize the control sequences…

Recently, Jun Li and collaborators found a loophole. The optimization algorithm requires a simulation of the quantum system under the imperfect pulses. This type of simulation can be done efficiently on the same quantum processor: we can generate the imperfect pulse ‘perfectly’ on our processor, and the processor can obviously simulate itself. In fact, the task of optimizing pulses seems like a perfect candidate for demonstrating quantum supremacy.

I was lucky to be in the right place at the right time and be part of the group that implemented this idea on a 12-qubit processor. We showed that at the 12-qubit level, this method can outperform a fairly standard computer. It is not a demonstration of quantum supremacy yet, but it seems like a promising road towards this task. It is also a promising way to optimize control pulses.

As a theorist, I cannot see a good reason why quantum computers will not be a reality, but it is always nice to know that physical reality matches my expectations at least at the 12-qubit level.

P.S – A similar paper appeared on arXiv a few days after ours.

- Towards quantum supremacy: enhancing quantum control by bootstrapping a quantum processor – arXiv:1701.01198
- In situ upgrade of quantum simulators to universal computers – arXiv:1701.01723
- Realization of a Quantum Simulator Based Oracle Machine for Solving Quantum Optimal Control Problem – arXiv:1608.00677

Schrödinger coined the term entanglement in the context of pure quantum states. A pure quantum state describing two subsystems is entangled if (and only if) the state of each subsystem is mixed, i.e. (within the context of the relevant operators) there is no (rank-1) measurement that yields a definite outcome^{1}. But in reality the states we encounter are mixed, and Schrödinger’s definition cannot be applied in a straightforward way.

A mixed quantum state is similar to a composite color such as pink, brown or white, which has no specific wavelength. Any composite color can be made by mixing elements from a set of primary colors such as red, green and blue (RGB), but one can choose different conventions to produce the same color^{2}. Similarly, a mixed quantum state does not have a unique decomposition in terms of pure quantum states. The cat in the box is in a mixture of being dead and alive, yet it is also in a mixture of being in various superpositions of dead and alive. It turns out that this creates a serious problem when we try to define mixed state entanglement.
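The non-uniqueness is easy to see numerically; a small sketch of my own: the maximally mixed qubit state can be written either as a mixture of "dead" and "alive" or as a mixture of two superpositions of them, and the density matrices are identical.

```python
import numpy as np

# Two different ensembles that give the same maximally mixed qubit state.
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus  = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

proj = lambda v: np.outer(v, v.conj())   # pure-state density matrix |v><v|

rho_zo = 0.5 * proj(zero) + 0.5 * proj(one)    # mix "dead" and "alive"
rho_pm = 0.5 * proj(plus) + 0.5 * proj(minus)  # mix two superpositions of them

print(np.allclose(rho_zo, rho_pm))  # -> True: the decomposition is not unique
```

No measurement can distinguish the two ensembles, which is exactly why a definition of mixed state entanglement cannot rely on any one decomposition.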

The standard way to define entanglement is to look for a decomposition into pure non-entangled states. So, if we can find some way to describe the mixed quantum state as a mixture of non-entangled (i.e. separable) pure states, then the state is separable; otherwise it is entangled. This is a convenient mathematical definition^{3}, but it is not consistent with the physical manifestation of entanglement.
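One practical handle on this definition is the Peres-Horodecki (PPT) criterion: a separable state must remain positive under partial transposition, and for two qubits this is also sufficient. A sketch (mine, not from the post):

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose on the second qubit of a 4x4 two-qubit state."""
    r = rho.reshape(2, 2, 2, 2)          # indices: (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)  # swap b <-> b'

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho_ent = np.outer(bell, bell)           # maximally entangled Bell state
rho_sep = np.eye(4) / 4                  # maximally mixed (separable)

# A negative eigenvalue after partial transposition certifies entanglement.
print(np.linalg.eigvalsh(partial_transpose(rho_ent)).min())  # -> -0.5
print(np.linalg.eigvalsh(partial_transpose(rho_sep)).min())  # -> 0.25
```

In higher dimensions the test is only necessary, not sufficient, which is one face of the computational difficulty mentioned in footnote 3.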

What is the physical manifestation of entanglement?

One way to think of entanglement is as a resource for some physical tasks such as teleportation or quantum communication. Ideally one would want to make a claim such as “If you gave me enough copies of an entangled state I could perform perfect teleportation”. Indeed this would be the case if the states were pure, but in the case of mixed states there are counterexamples to this statement for practically any physical task (except channel discrimination).

Another way to think about entanglement is as a way to quantify complexity. The intuition comes from the fact that a good enough description^{4} of an entangled state usually requires a very large memory. If a system is in a pure quantum state and it is not entangled, we can fully describe it by specifying each part. If it is entangled, we must also specify some global properties. Roughly speaking, these global properties describe the relations between the subsystems, and the number of parameters we need to keep track of grows exponentially with the number of subsystems. However, as it turns out, some highly entangled states can be described in a very concise way. When it comes to mixed states, the situation is different, and it is unclear whether we can always give a concise description of a separable system.

The bottom line is, the physical manifestation of entanglement is not trivial, especially when we consider mixed states. As a result, there is no obvious one-size-fits-all way to extend various ideas about entanglement to mixed states.

Quantum correlations and discord

So, while there is no unique way to generalize entanglement to mixed states, one particular definition (entangled = not separable) has become canonical. Other ways of generalizing entanglement from pure states must be given a different name. Many of these fall into the broad category of quantum correlations (or discord). These quantities are equivalent to entanglement for pure states, but do not correspond to non-separability in the case of mixed states.

Ok, but why should we care?

Entanglement is one of the central features of quantum theory, and there is good reason to suspect that it plays a crucial role in many physical scenarios, from many body physics to black holes and of course quantum information processing. Unfortunately, it is not trivial to extend our mathematical treatment of entanglement beyond the two party, pure state case. There are many examples where separable mixed states or ensembles of separable pure states, behave in a way that resembles pure entangled states. Apart from the obvious joy of playing around with the mathematical structure of quantum states, there are many things we can learn by trying to understand this rich structure beyond the usual separable vs non-separable states. Discord is one, and there are others, most notably Bell non-locality.

And if you want to know more, check out my paper with Danny Terno, arXiv:1608.01920

**Footnotes**

- The caveats here are simply to ensure that the measurement is not trivial in some sense. For example, if the states are entangled in spin, asking about their position is not relevant; similarly, making a trivial measurement (one that has outcome 1 if the spin is up and the same outcome 1 if the spin is down) is not interesting.
- Actually, the situation with colors is far more complex than I described, but as far as the human eye is concerned the statement is more or less correct. Spectroscopy would reveal a unique decomposition of any color. Quantum states, on the other hand, have no unique decomposition; in fact, if they did, we would be in big trouble with relativistic causality (i.e. we would be able to send information faster than the speed of light). As a side note: Schrödinger was interested in our perception of colors and made some interesting contributions to the field.
- Given the complete description of a (mixed) quantum state, it can be very difficult (computationally) to decide if it is entangled or separable.
- Think of trying to keep a description of the state in the memory of a computer for the purpose of simulating the evolution and finally reproducing some measurement statistics.

In August I organized a workshop on Semi-quantum computing and recently wrote about it on the IQC blog. I also attended a workshop on Entanglement and quantumness in Montréal.

Earlier this month I got sucked into a discussion about publishing.


One of the first things we learn about quantum mechanics is that the measurement process causes an unavoidable back-action on the measured system. As a consequence, some measurements are incompatible, i.e. the result of a measurement of an observable **A** can change significantly if a different observable, **B**, is measured before **A**. A well-known example is the measurement of position and momentum, where the back-action leads to the Heisenberg uncertainty relation.
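To make the back-action concrete, here is a minimal sketch of my own, using a qubit instead of position and momentum: an intervening measurement of sigma_x scrambles the statistics of a subsequent sigma_z measurement.

```python
import numpy as np

zero = np.array([1.0, 0.0])                  # prepare the +1 eigenstate of sigma_z

sz_up = np.array([1.0, 0.0])                 # sigma_z "up" eigenstate
sx_up = np.array([1.0, 1.0]) / np.sqrt(2)    # sigma_x eigenstates
sx_dn = np.array([1.0, -1.0]) / np.sqrt(2)

# Born rule probability of projecting one (real) state onto another
prob = lambda state, eig: abs(eig @ state) ** 2

# Measure sigma_z directly on |0>: outcome +1 with certainty.
print(prob(zero, sz_up))  # -> 1.0

# Measure sigma_x first (back-action collapses the state), then sigma_z:
p_z_up = sum(prob(zero, e) * prob(e, sz_up) for e in (sx_up, sx_dn))
print(p_z_up)  # -> 0.5: the intervening measurement changed the statistics
```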

The measurement back-action can create some seemingly paradoxical situations when we make counterfactual arguments such as

We measured *A* and got the result *a*, but had we measured *B* we would have got the result *b*, which is incompatible with *a*.

These situations appear very often when we consider systems with both past and future boundary conditions. In these cases they are known as pre- and post-selection (PPS) paradoxes. In PPS paradoxes the measurement back-action is important even when **A** and **B** commute. An example is the *three box paradox*, which I will explain without mathematical detail:

A single particle is placed in one of three boxes **A, B, C** (actually in a superposition) at time t_{0} and is later found to be in some other superposition state at t_{1}. At a time t_{m}, with t_{0} < t_{m} < t_{1}, one box is opened. The initial (t_{0}) and final (t_{1}) states of the particle are chosen in such a way that the following happens:

If box **A** is opened, the particle will be discovered with certainty. If box **B** is opened, the particle will also be found with certainty. If box **C** is opened the particle will be found with some probability. The situation seems paradoxical:

If the particle is found with certainty in box A, then it must have been in box A to begin with. But it is also found with certainty in box B, so it must have been there …

One way to resolve this apparent paradox is to note that the measurements are incompatible, i.e. opening box **A** and not **B**, **C** is incompatible with opening box **B** and not **A**, **C**, etc.
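The certainties above can be reproduced numerically with the Aharonov-Bergmann-Lebowitz (ABL) rule. A sketch under my own choice of the standard pre- and post-selected states, (|A⟩ + |B⟩ + |C⟩)/√3 and (|A⟩ + |B⟩ − |C⟩)/√3 (the blog post itself does not specify them):

```python
import numpy as np

# Basis ordering: particle in box A, B or C
psi = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)    # pre-selected state at t0
phi = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)   # post-selected state at t1

def p_found(box):
    """ABL probability of finding the particle when a single box is opened."""
    pi = np.zeros((3, 3)); pi[box, box] = 1.0        # projector "in this box"
    found = abs(phi @ pi @ psi) ** 2                  # amplitude if found
    not_found = abs(phi @ (np.eye(3) - pi) @ psi) ** 2  # amplitude if not found
    return found / (found + not_found)

print([round(p_found(b), 3) for b in range(3)])  # -> [1.0, 1.0, 0.2]
```

Opening box A or box B each finds the particle with certainty, while box C does not, matching the description above.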

These are the types of questions that Aharonov, Albert and Vaidman were investigating in the 1980s^{1}. Weak measurements were studied as a way to minimize the measurement back-action. These measurements then provided a picture that arguably gives a solid (if somewhat strange) foundation to statements like the one above.

The motivation for weak measurements is therefore an attempt to derive a consistent picture where all observables are mutually compatible, in a way similar to classical physics. In quantum mechanics this comes at a cost: the classical information gained by reading out the result of a single weak measurement is usually indistinguishable from noise. In other words, **weak measurements are noisy measurements**.

Part of the confusion around weak measurements lies in the fact that the statement above is not a sufficient condition for a weak measurement. One may argue that noise is not even a necessary requirement; it is, rather, a consequence of quantum mechanics. Weak measurements may be noisy, but noisy measurements are, in most cases, not weak. To understand this fact it is good to examine both a classical and a quantum scenario.

Walking on the beach, you see a person drowning. Being a good swimmer, you go in and try to save him. As you get back to the beach you see that he is not responsive and decide to find out if he is alive. You are now faced with the choice of how to perform the measurement.

**A weak measurement** – You check for a pulse. The measurement is somewhat noisy, since the pulse may be too weak to notice. It is also a weak measurement, since it is unlikely to change this person’s state.

**A noisy measurement** – You start screaming for help. There is some small chance that the guy will wake up and tell you to shut up.

**A noisy, strong measurement** – You start kicking the guy in the head, hoping that he regains consciousness. This is a strong measurement, but it is also noisy. The person might be alive and you still won’t notice after kicking his head; moreover, the kick to the head might kill him.

You want to find a given spin component of a spin-1/2 particle.

**A weak measurement** – Perform the usual von Neumann measurement with weak coupling. There is still some back-action, but if the coupling is sufficiently weak you can ignore it. The downside is that you will get very little information.

**A noisy measurement** – Perform the weak measurement as above, but follow it with a unitary rotation and some dephasing.

**A noisy, strong measurement** – Perform a standard projective measurement, but then add extra noise at the readout stage. This could, for example, be the result of a defective amplifier.

While all of the measurements above are noisy, only the weak measurements follow the original motivation of making a measurement with a weak back-action.

One neat example of a measurement which is noisy but not weak involves a wave function with a probability distribution that has no tails.

Take the measurement of a Pauli observable that has the results +1 and -1, and imagine that after the readout we get the following probability distributions: if the system was initially in the state corresponding to +1, we get a flat distribution between -9 and 11; if it was in the state corresponding to -1, we get a flat distribution between -11 and 9. The measurement is noisy; in fact, any result between -9 and +9 will give us no information about the system. However, it is not weak, since any result outside this range will cause the state of the system to collapse into an eigenstate.

It is not surprising that this type of measurement will not produce a weak value as the expectation value of a given set of measurements on a pre- and post-selected system. While this is obviously an extreme case, any situation where the probability density function for the readout has no tails will not be weak, for the same reason. The same is usually true in cases where the derivatives of the probability density function are very large. In less technical terms: **noise is not a sufficient condition for a weak measurement.**
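The flat-distribution example can be captured in a few lines; a sketch of my own, treating the readout as a Bayesian update on the two eigenvalues:

```python
import numpy as np

WIDTH = 10.0  # flat pointer distribution: reading r is uniform in [m-10, m+10]

def likelihood(r, m):
    """Probability density of reading r given the eigenvalue m = +1 or -1."""
    return 1.0 / (2 * WIDTH) if abs(r - m) <= WIDTH else 0.0

def posterior_plus(r, prior=0.5):
    """Bayesian weight of the +1 eigenstate after reading r."""
    lp = prior * likelihood(r, +1)
    lm = (1 - prior) * likelihood(r, -1)
    return lp / (lp + lm)

# Inside the overlap [-9, 9] the readout carries no information at all...
print(posterior_plus(3.0))   # -> 0.5
# ...but a reading outside it identifies the eigenvalue completely,
# collapsing the state: this measurement is noisy but not weak.
print(posterior_plus(10.5))  # -> 1.0
```

A genuinely weak measurement would instead have long overlapping tails, so that every reading updates the state only slightly.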

^{1. To get a partial historical account of what AAV were thinking, see David Albert’s remarks in Howard Wiseman’s QTWOIII talk on weak measurements (around minutes 25-29)}


You can also check out my papers with Daniel Terno and other collaborators on the subject.

Polarization rotation, reference frames, and Mach’s principle

Photon polarization and geometric phase in general relativity

Post-Newtonian gravitational effects in optical interferometry
