title: Resource reduction for distributed quantum information processing using quantum multiplexed photons
authors: Piparo, Nicolo Lo; Hanks, Michael; Gravel, Claude; Nemoto, Kae; Munro, William J.
date: 2019-07-04

Distributed quantum information processing is based on the transmission of quantum data over lossy channels between quantum processing nodes. These nodes may be separated by a few microns or by planetary scale distances, but transmission losses due to absorption/scattering in the channel are the major source of error for most distributed quantum information tasks. Of course, quantum error detection (QED)/correction (QEC) techniques can be used to mitigate such effects, but error detection approaches have severe performance issues due to the signaling constraints between nodes, and so error correction approaches are preferable, assuming one has sufficiently high quality local operations. Typical loss based QEC utilizes a one qubit per photon encoding. However, single photons can carry more than one qubit of information, and so our focus in this work is to explore whether loss-based quantum error correction utilizing quantum multiplexed photons is viable and advantageous, especially as photon loss results in more than one qubit of information being lost.

There are many active approaches being pursued in the development of quantum technologies, including those associated with imaging and sensing [1-3], communication [4-9] and computation [10-15]. What has become clear is that many of these will have a distributed nature [5] and, as such, it will be essential for quantum information to be shared between remote nodes, regardless of whether those nodes are separated on the atomic or planetary scale [16-18]. This distributed nature means we are going to require both a quantum interface between matter and photonic qubits and a photonic bus to transfer such information between those nodes [19]. However, real implementations will suffer from losses, which will dramatically affect the performance of the quantum protocols in which such devices are used. Mechanisms must therefore be developed to mitigate such detrimental effects.
There are of course quite a number of routes available to offset these loss effects, ranging from the development of lower loss fibers to more efficient quantum information coding. The latter route is quite appealing as it can be used with current technology and is likely to be more compatible with our existing infrastructure. There is a well known set of loss-based quantum error detection and correction codes useful in this situation. Some insight is given in Ref. [20], which discussed a simple quantum network scenario in which the quantum multiplexing (QMu) of photonic degrees of freedom allows one to design a single-step combined entanglement distribution and error detection protocol with improved entanglement generation rates using fewer physical (both photons and quantum memories) and temporal resources. However, its performance is still limited by the probabilistic nature of the various quantum operations and the resulting heralding signals necessary between remote nodes. Quantum error correction codes (ECCs) naturally avoid this heralding bottleneck due to their deterministic nature, with example loss-based codes including the quantum parity [21], cat [22], binomial [23], Reed-Solomon [24] and surface [25] codes. They allow the deterministic transmission of quantum information over a lossy channel, as long as the total losses do not exceed a certain threshold (50% at most). Typically such codes use either polarization or time-bin qubits, but they are not particularly resource efficient as they require a large number of single photons. However, single photons have the potential to carry much more information using either higher dimensional encodings or different degrees of freedom. Hence the natural question is whether this is advantageous, especially as photon loss results in more than one qubit of information being lost.

Let us begin by exploring the simplest loss-based error correction code (the redundant quantum parity code [21]) to determine whether there are any successful photon transmission regimes in which both the number of qubits (memories) present within the node and the number of photons (transmitted through the channel) can be reduced using our quantum multiplexing approach while maintaining the near deterministic transmission of information (entangled states) between the two nodes. In the redundant quantum parity code the computational basis states are defined as $|0\rangle = |H\rangle$ and $|1\rangle = |V\rangle$, respectively. The first logical qubit layer (block) entangles $m$ of these physical photonic qubits in the state $|\pm\rangle^{(m)} = |+\rangle^{\otimes m} \pm |-\rangle^{\otimes m}$ with $|\pm\rangle = |H\rangle \pm |V\rangle$. The second logical layer is then constructed by simple repetition of $n$ of these blocks according to $|\pm\rangle^{(n,m)} = \left(|\pm\rangle^{(m)}\right)^{\otimes n}$, so that an encoded state $|\psi\rangle^{(n,m)} = \alpha|+\rangle^{(n,m)} + \beta|-\rangle^{(n,m)}$ is an entangled state of $n$ blocks, with $m$ qubits per block. The information $\alpha, \beta$ in our encoded state is successfully transmitted over the channel when at least one block of $m$ qubits arrives intact (no losses) and all other blocks each have at least one photon in them (see the inset of Figure 1a).

Figure 1. (a) Plot of the overall information transfer success probability versus the photon transmission probability $p_t$ for the redundant quantum parity code with (blue curve) and without (red and yellow curves) quantum multiplexed photons. The blue curve corresponds to 3 photons each carrying two qubits of information, while the yellow curve represents the non-multiplexed case comprising 6 photons (two photons per block in a balanced configuration). The 3 photon multiplexed approach clearly outperforms the 6 photon non-multiplexed situation. The red curve represents a non-multiplexed case comprising 7 photons in an unbalanced configuration (3 photons in the first block, 2 photons in each of the second and third blocks). The inset of (a) depicts a schematic illustration of a particular instance of the six-qubit redundant quantum parity code in which each photon carries one qubit (i) and three photons carry two qubits of information each (ii). Similarly in (b), we show the information transfer success probability $P_S^{QMu}$ versus $p_t$ for three different configurations of a quantum multiplexed system of 6 photons carrying 3 qubits each, distributed over 6 blocks (with each block containing 3 qubits).
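As a concrete illustration of the encoding, the state $|\psi\rangle^{(n,m)}$ can be built directly from Kronecker products. The following is a minimal numerical sketch (ours, not from the paper; the normalization conventions are our own assumption, as the text writes the states unnormalized):

```python
# A minimal sketch (not from the paper) that builds the redundant quantum
# parity code state |psi>^(n,m) = alpha|+>^(n,m) + beta|->^(n,m) numerically,
# taking |+/-> = (|H> +/- |V>)/sqrt(2) and normalizing at each layer.
import numpy as np

H = np.array([1.0, 0.0])          # |H> = |0>
V = np.array([0.0, 1.0])          # |V> = |1>
plus = (H + V) / np.sqrt(2)       # |+>
minus = (H - V) / np.sqrt(2)      # |->

def kron_power(state, k):
    """k-fold tensor power of a state vector."""
    out = np.array([1.0])
    for _ in range(k):
        out = np.kron(out, state)
    return out

def block_state(m, sign):
    """|+/->^(m) = |+>^{(x)m} +/- |->^{(x)m}, normalized."""
    psi = kron_power(plus, m) + sign * kron_power(minus, m)
    return psi / np.linalg.norm(psi)

def encoded_state(alpha, beta, n, m):
    """|psi>^(n,m) = alpha |+>^(n,m) + beta |->^(n,m), normalized."""
    psi = alpha * kron_power(block_state(m, +1), n) \
        + beta * kron_power(block_state(m, -1), n)
    return psi / np.linalg.norm(psi)

# Example: the six-qubit (n=3, m=2) code from Fig. 1a(i).
psi = encoded_state(1 / np.sqrt(2), 1 / np.sqrt(2), n=3, m=2)
print(psi.shape)  # (64,) -- a 6-qubit state vector
```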
The overall information transfer success probability is given by [26]

$$P_S = \left[1-(1-p_t)^m\right]^n - \left[1-(1-p_t)^m - p_t^m\right]^n, \qquad (1)$$

where $p_t$ is the probability of successfully transmitting a single photon through the channel ($p_t \propto e^{-L/L_0}$, with $L$ being the channel length and $L_0$ its attenuation length). Our first observation is that this concatenated code is not particularly resource efficient, as the number of qubits at the first logical layer, $m$, grows inversely with the transmission probability $p_t$. Further, $n$ grows inversely with $p_t^m$, and so $(m, n)$ grow exponentially with increasing $L$. A potential way to overcome this would be to use a quantum multiplexer [20] to encode multiple qubits onto a single photon, meaning fewer photons in total would need to be transmitted over the channel. This has the potential to offer multiple benefits, especially when one's single photon sources are probabilistic in nature.

Now in the inset of Fig. 1a(i) we illustrate a successful realization of a six photon redundant quantum parity code without the use of quantum multiplexing. Here 3 blocks of 2 photons each are used. After the photons are transmitted over a lossy channel, the code is successful if at least one block contains two photons and the other two blocks each contain one or more photons. However, one can think of substituting that 6 photon realization with three quantum multiplexed photons each carrying two qubits of information, as shown in Fig. 1a(ii). This is represented by the colored lines connected to the dots contained in the blocks. In this case, the ECC can tolerate the loss of only one photon. Therefore it would seem logical that we can reduce the number of photons by using the multiplexing approach, provided that the success probability, $P_S$, remains above a threshold value. This raises the question as to what the information transfer success probability $P_S$ is in this quantum multiplexed approach. While Eq. (1) provides a simple way to calculate $m, n$ in the non-multiplexed case, it is not so straightforward in the multiplexed case. One can show that the information transfer success probability $P_S^{QMu}$ in this situation for $n_{tot}$ transmitted photons is

$$P_S^{QMu} = \sum_{i=0}^{n_{tot}} \left[\binom{n_{tot}}{i} - U_i - E_i\right] p_t^{n_{tot}-i} (1-p_t)^i, \qquad (2)$$

where $U_i, E_i$ are the number of events in which losing $i$ photons will leave, respectively, none of the blocks with their initial number of qubits, or at least one empty block (naturally this expression reduces to Eq. (1) when the number of qubits per photon is one). Next, we can substitute the sum's upper limit with $n^*$, the number of lost photons the ECC can tolerate. Therefore, to calculate $P_S^{QMu}$, we need to determine both $U_i$ and $E_i$, which are highly dependent on how the quantum multiplexed photons are connected to the blocks (see Fig. 1b).
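Equation (2) can be checked by brute force for small codes. The sketch below is our own illustration (the photon-to-block assignment for the multiplexed case is our assumption about the Fig. 1a(ii) layout): it enumerates every subset of lost photons and applies the success criterion directly, namely that at least one block keeps all of its qubits and no block loses all of them.

```python
# A minimal brute-force sketch (ours, not from the paper) of Eq. (2):
# enumerate all subsets of lost photons and keep those loss patterns for
# which at least one block is intact and no block is empty.
from itertools import combinations

def success_probability(blocks, n_tot, p_t):
    """blocks: list of sets; blocks[b] = indices of photons contributing
    a qubit to block b. n_tot: total number of photons transmitted."""
    p_success = 0.0
    for i in range(n_tot + 1):                 # i = number of lost photons
        for lost in combinations(range(n_tot), i):
            lost = set(lost)
            intact = any(not (b & lost) for b in blocks)  # a block keeps all qubits
            empty = any(b <= lost for b in blocks)        # a block loses all qubits
            if intact and not empty:
                p_success += p_t ** (n_tot - i) * (1 - p_t) ** i
    return p_success

# Non-multiplexed (n=3, m=2): 6 photons, one qubit each -- reduces to Eq. (1).
no_mux = [{0, 1}, {2, 3}, {4, 5}]
# Multiplexed (our assumed Fig. 1a(ii) layout): 3 photons, two qubits each.
mux = [{0, 1}, {0, 2}, {1, 2}]
for p in (0.95, 0.97, 0.99):
    print(p, success_probability(no_mux, 6, p), success_probability(mux, 3, p))
```

Note that the assumed multiplexed assignment indeed tolerates exactly one photon loss, consistent with the discussion above.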
Different configurations lead to different success probabilities [27]. Further, we can relax the constraint that all blocks have the same number of qubits, a freedom typically not exploited in error correction schemes. As a consequence, we can reduce the number of qubits (and photons) by unbalancing the number of qubits in each block of the error correction code (this applies even to the non-multiplexed case [28]; see also the sketch after this paragraph). In Figure 1a we plot the overall information transfer success probability $P_S$ versus $p_t$ for two non-multiplexed situations (equal and unbalanced numbers of qubits per block) and one quantum-multiplexed situation, with a minimum threshold success probability requirement of $P_S = 0.995$ (such a threshold is typical for many quantum computation based tasks). It is clear that our 3 photon quantum multiplexed case (2 qubits per photon with 3 blocks) dramatically outperforms the traditional 6 photon non-multiplexed case (3 blocks of 2 single-qubit photons each). Further, there is a region $0.958 \lesssim p_t \lesssim 0.976$ where the 6 photon case does not reach our threshold target, while the 3 photon multiplexed approach does. The 7 photon (7 qubit) unbalanced non-multiplexed approach (with the first block containing 3 photons while the second and third blocks contain 2 photons each) has a slightly better performance than the multiplexed case. However, both are above the threshold, and the multiplexed situation uses fewer photons and qubits. This is a critical resource saving. Further, in the region $0.976 \lesssim p_t \leq 0.995$ the number of qubits utilized is the same between the multiplexed and non-multiplexed cases; however, the number of photons required is approximately halved. For $p_t \geq 0.995$ we are already at our success probability threshold, and so only one photon and one qubit need to be used.

These observations, as expected, indicate that there are two important resources to minimize: the number of qubits $N_{min}$ and photons $n_{min}$ required to reach the target information transfer success probability threshold. Of course, the lower the photon transmission probability $p_t$, the more qubits and photons we need in our error correcting code to reach $P_S$. We can now explore specific unbalanced quantum multiplexing configurations for a range of values of $p_t$ (where the number of qubits per block can change between blocks). In Figure 2a we plot, versus $p_t$, the minimal number of qubits, $N_{min}$, and photons, $n_{min}$, of the optimal configuration required to reach the threshold success probability for a multiplexed approach with two, three and four qubits per photon, as well as the non-multiplexed (one qubit per photon) case. We immediately observe that the higher the degree of quantum multiplexing, the more qubits are necessary in order to reach $P_S$; on the other hand, fewer photons are required. Compared to the situation where quantum multiplexing is not used, quantum multiplexing systems utilize fewer photons; however, the number of qubits is either the same or higher for any $p_t$, except for a small region near $p_t \sim 0.97$ illustrated in Fig. 1a. In fact, we can almost halve the number of photons being transmitted over the channel with only a modest increase in the total number of qubits.
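A small search illustrates how such optimal unbalanced configurations can be found in the non-multiplexed case. The sketch below is our own illustration and assumes the unbalanced generalization of Eq. (1) given in note [28]; it scans block-size tuples $(m_1, \ldots, m_n)$ and reports the configuration that reaches the threshold with the fewest total photons:

```python
# A sketch (our own; assumes the unbalanced form of Eq. (1) from note [28])
# searching for the non-multiplexed configuration with the fewest photons
# that reaches the target success probability.
from itertools import combinations_with_replacement
from math import prod

def p_success_unbalanced(block_sizes, p_t):
    """P_S = prod_i [1-(1-p)^{m_i}] - prod_i [1-(1-p)^{m_i} - p^{m_i}]."""
    nonempty = prod(1 - (1 - p_t) ** m for m in block_sizes)
    no_intact = prod(1 - (1 - p_t) ** m - p_t ** m for m in block_sizes)
    return nonempty - no_intact

def minimal_configuration(p_t, threshold=0.995, max_blocks=6, max_m=6):
    best = None
    for n in range(1, max_blocks + 1):
        for sizes in combinations_with_replacement(range(1, max_m + 1), n):
            if p_success_unbalanced(sizes, p_t) >= threshold:
                if best is None or sum(sizes) < sum(best):
                    best = sizes
    return best

# Smallest-total block-size tuple meeting the threshold at p_t = 0.97.
print(minimal_configuration(0.97))
```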
Utilizing fewer photons while maintaining (or modestly increasing) the total number of qubits would seem quite an advantage, especially as single photon sources are currently not as efficient as quantum gates or measurements. This leads to an interesting question: what is the minimum number of photons that can be used in this code? To address this question, it is useful to rephrase the problem as: how many photons are required to tolerate $N$ loss events? We will assume that each photon carries the same number of quantum multiplexed qubits and each block contains the same number of qubits. In such a situation, the code must satisfy three conditions: there are at least $N+1$ blocks; each block involves qubits from at least $N+1$ photons; and for any block there are at least $N$ photons that do not contribute any qubits to it. This means that there must be at least $2N+1$ photons, regardless of the degree of quantum multiplexing. Now the number of photons required to achieve our threshold information transfer success probability can be determined by equating $P_S$ with the probability that $N$ or fewer of the $2N+1$ photons are lost. Known bounds on binomial cumulative distribution functions then indicate that the probability of failure decreases exponentially with the number of photons (a sketch of this counting argument follows below). However, in this minimal construction each photon must contribute at least one qubit to each of $\binom{2N}{N}$ blocks, so that the degree of multiplexing increases sub-exponentially.

Table I. Minimum number of photons and qubits required to reach our overall information transfer success probability threshold of $P_S = 0.995$ when $p_t = 0.916$. Similar results are seen for most values of $p_t$. The star corresponds to the optimal case, in which, by using the mixing strategy, for a given $N_{min}$ we reach the lowest $n_{min}$ for a specific value of $p_t$. Although not explicitly shown, there are optimal cases for other values of $p_t$ as well.

So far, using quantum multiplexing, we have focused on situations where all photons carry the same number of qubits. What happens if we relax this constraint and use a mixed strategy? For the mixing strategy, we assume that each photon carries an arbitrary number of qubits (from 1 to 4), and we find that, by combining them, we can further reduce resources while reaching $P_S$ for a specific value of $p_t$. Table I shows the total number of photons and the total number of qubits needed to reach $P_S$ at $p_t = 0.916$ using the pure and the mixed strategies. We observe that we can reach the required $P_S$ with a lower number of photons (12) given the same number of qubits (15) when we apply the mixing strategy. This further highlights the potential advantages quantum multiplexing gives.

An important issue is whether the improvements we have seen with the redundant quantum parity code [21] generalize to other loss based quantum error correction codes. Another well known loss code is the quantum Reed-Solomon $[[d, 2k-d, d-k+1]]_d$ code, where the information is encoded in $d$ qudits, with the code failing when some $d-k+1$ out of the $d$ qudits are lost. For comparative purposes, we will express the degree of multiplexing as $q$ qubits of information per photon. When we encode the qudits in these $q$ degrees of freedom of quantum multiplexed photons, any qudit of information depends upon the successful transmission of $\lceil \log_2(d)/q \rceil$ photons [24].
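To make the $2N+1$ photon counting argument above concrete, the following sketch (our own illustration, not from the paper) finds the smallest $N$ for which the probability that $N$ or fewer of $2N+1$ photons are lost meets the threshold:

```python
# A sketch (our own) of the minimal-photon argument: find the smallest N
# such that P(at most N of 2N+1 photons are lost) >= the target threshold.
from math import comb

def p_at_most_n_lost(N, p_t):
    """Binomial CDF: probability that <= N of the 2N+1 photons are lost."""
    n_phot = 2 * N + 1
    p_loss = 1 - p_t
    return sum(comb(n_phot, i) * p_loss ** i * p_t ** (n_phot - i)
               for i in range(N + 1))

def minimal_N(p_t, threshold=0.995, N_max=50):
    for N in range(N_max + 1):
        if p_at_most_n_lost(N, p_t) >= threshold:
            return N          # 2N+1 photons then suffice
    return None

for p in (0.99, 0.95, 0.90):
    N = minimal_N(p)
    print(p, N, 2 * N + 1)    # p_t, tolerated losses N, photon count 2N+1
```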
The probability of failure is therefore

$$P_{fail} = \sum_{i=d-k+1}^{d} \binom{d}{i} \left(1 - p_t^{\lceil \log_2(d)/q \rceil}\right)^i \left(p_t^{\lceil \log_2(d)/q \rceil}\right)^{d-i}. \qquad (3)$$

In this code a block is given by the total number of photons encoding a single qudit, and if a block is incomplete, the qudit is not successfully transmitted. Therefore, the performance can be improved by maintaining independence between these blocks, and by reducing the chances for loss events within any single block. Adding additional quantum multiplexing will help so long as it preserves independence between qudit loss events.

For the quantum Reed-Solomon code we can also determine the lowest number of qubits and photons required to reach $P_S$, as shown in Fig. 2b. Here, the advantage of using quantum multiplexed photons is evident in terms of a reduction in both the number of qubits and the number of photons compared to the non-multiplexed case. In particular, the higher the quantum multiplexing degree, the fewer qubits and photons we require. For instance, at $p_t = 0.85$, for $q = 4$ we have $N_{min} \simeq 40$ and $n_{min} \simeq 10$, whereas when no quantum multiplexed photons are in use, both $N_{min}$ and $n_{min}$ are over 1000. As $p_t$ gets lower, the number of photons and qubits increases considerably; hence, we need to use higher degrees of quantum multiplexing. Furthermore, by comparing Fig. 2a with Fig. 2b, we infer that there is always a specific value of $q$ for which the Reed-Solomon code requires fewer resources than the parity code (for $q = 4$, at $p_t = 0.85$, $N_{min}$ ($n_{min}$) is 72% (75%) lower for the Reed-Solomon code than for the parity code). There are several other error correction codes based on the transmission of qudits, and we expect the same reduction in both the number of qubits and photons when quantum multiplexing is in use.

On the other hand, there are other loss-based codes which do not show the same advantages as the ones analyzed so far. For instance, the bosonic codes [23] encode information in superpositions of the number of excitations of distinct transmitted photon modes. Quantum multiplexing in this case would correspond to the assignment of information about multiple excitations to the various degrees of freedom of a single mode. However, any quantum multiplexed photon mode is equivalent, in this case, to a non-multiplexed mode. There is therefore always a non-multiplexed code, using a smaller number of excitations and a higher number of modes than the original, that will perform as well as the quantum multiplexed case.

Now to summarize, we have shown how the quantum multiplexing of quantum information onto photons in loss based quantum error correction codes has the potential to lead to a significant decrease in the resources required to deterministically transfer quantum information between two adjacent nodes. Two primary error correction codes were considered: the redundant quantum parity code and the quantum Reed-Solomon code. In the first case, we found that the total number of single photons that need to be transmitted through the channel can be dramatically reduced (by nearly 50 percent) without significantly increasing the number of qubits within the redundant quantum parity code. Further, we found it advantageous for individual photons to have different degrees of quantum multiplexing associated with them, as well as for blocks within the code to have different numbers of qubits. The quantum Reed-Solomon code using quantum multiplexed qudits goes further and has the potential to reduce both the number of photons and qubits used.
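The failure probability in Eq. (3) is easy to evaluate numerically. The sketch below is our own illustration of the reconstructed expression (the parameter choices $d = 16$, $k = 12$ are hypothetical, picked only to show the trend with the multiplexing degree $q$):

```python
# A sketch (our own) evaluating the Reed-Solomon failure probability of
# Eq. (3): the code fails when at least d-k+1 of the d qudits are lost,
# and each qudit needs ceil(log2(d)/q) photons to arrive intact.
from math import comb, ceil, log2

def rs_failure_probability(d, k, q, p_t):
    photons_per_qudit = ceil(log2(d) / q)
    p_qudit = p_t ** photons_per_qudit          # qudit transmitted intact
    p_lost = 1 - p_qudit
    return sum(comb(d, i) * p_lost ** i * p_qudit ** (d - i)
               for i in range(d - k + 1, d + 1))

# Illustrative (hypothetical) parameters: failure probability versus q.
d, k, p_t = 16, 12, 0.85
for q in (1, 2, 4):
    print(q, rs_failure_probability(d, k, q, p_t))
```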
As expected, it significantly outperforms the redundant quantum parity code. These improvements should be possible in many (but not all) of the other loss based error correction codes when quantum multiplexing is used. Quantum multiplexing has the potential to be a new resource saving tool. Our findings can be applied to any communication system that needs error correction to improve its efficiency, such as quantum repeaters, quantum computation and quantum sensing.

We thank Koji Azuma for useful discussions during the development of this project. This project was made possible through the support of the MEXT KAKENHI Grant-in-Aid for Scientific Research on Innovative Areas "Science of Hybrid Quantum Systems" Grant No. 15H05870 and a grant from the John Templeton Foundation (JTF # 60478). The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.

[27] For each quantum multiplexed system, we did not explore all the possible configurations; rather, we assumed a best configuration based on reasonable deductions. For instance, connecting a quantum multiplexed photon to the same block multiple times will increase the empty block events in case that photon is lost.

[28] In the non-multiplexed situation it is also quite natural to relax the constraint that all blocks have the same number of qubits associated with them. This we call the unbalanced case, and it is straightforward to show that

$$P_S = \prod_{i=1}^{n} \left[1-(1-p_t)^{m_i}\right] - \prod_{i=1}^{n} \left[1-(1-p_t)^{m_i} - p_t^{m_i}\right],$$

where $m_i$ is the number of photons in the $i$-th block. We can show that seven photons distributed over three blocks give a performance similar to 9 photons distributed over those three blocks, and significantly better than 6 photons distributed over those three blocks.
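The comparison in note [28] can be checked directly with the unbalanced expression above. A minimal self-contained sketch (our own; it assumes the reconstructed unbalanced formula):

```python
# A sketch (our own) comparing the unbalanced configurations of note [28]
# via P_S = prod_i [1-(1-p)^{m_i}] - prod_i [1-(1-p)^{m_i} - p^{m_i}].
from math import prod

def p_success(block_sizes, p_t):
    nonempty = prod(1 - (1 - p_t) ** m for m in block_sizes)
    no_intact = prod(1 - (1 - p_t) ** m - p_t ** m for m in block_sizes)
    return nonempty - no_intact

# 6 photons (2,2,2), 7 photons (3,2,2) and 9 photons (3,3,3) over three blocks.
for sizes in [(2, 2, 2), (3, 2, 2), (3, 3, 3)]:
    print(sizes, p_success(sizes, p_t=0.97))
```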