An Architecture for Meeting Quality-of-Service Requirements in Multi-User Quantum Networks
Quantum communication can enhance internet technology by enabling novel applications that are provably impossible classically. The successful execution of such applications relies on the generation of quantum entanglement between different users of the network which meets stringent performance requirements. Alongside traditional metrics such as throughput and jitter, one must ensure the generated entanglement is of sufficiently high quality. Meeting such performance requirements demands a careful orchestration of many devices in the network, giving rise to a fundamentally new scheduling problem. Furthermore, technological limitations of near-term quantum devices impose significant constraints on scheduling methods hoping to meet performance requirements. In this work, we propose the first end-to-end design of a centralized quantum network with multiple users that orchestrates the delivery of entanglement which meets quality-of-service (QoS) requirements of applications. We achieve this by using a centrally constructed schedule that manages usage of devices and ensures the coordinated execution of different quantum operations throughout the network. We use periodic task scheduling and resource-constrained project scheduling techniques, including a novel heuristic, to construct the schedules. Our simulations of four small networks using hardware-validated network parameters, and of a real-world fiber topology using futuristic parameters, illustrate trade-offs between traditional and quantum performance metrics.
Quantum Time: a novel resource for quantum information
Time in relativity theory has a status different from that adopted in standard quantum mechanics, where time is considered a parameter measured with reference to an external, absolute Newtonian frame. This status strongly restricts its role in the dynamics of systems and hinders any attempt to merge quantum mechanics with general relativity, specifically when considering quantum gravity. To overcome these limitations, several authors have tried to construct an operator conjugate to the Hamiltonian of quantum systems that implements some essential features of relativistic time. These formulations use the concept of internal or intrinsic time instead of the universal coordinate time used in textbooks. Furthermore, it has recently been remarked that treating time with relativistic features could enhance analysis techniques in quantum information processing and affect its status in causal orders and causal structures of quantum information. The role of clocks, and their accuracy and stability, has become an important issue in quantum information processing. This article presents a substantive review of recent works which reflect the possibility of utilizing quantum time, measured by a quantum clock devised according to the Page-Wootters scheme, as a resource for quantum information processing.
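For readers unfamiliar with the construction, the Page-Wootters mechanism referenced here can be summarized by a single constraint equation; the following is a standard schematic statement of that mechanism (the notation is generic, not taken from the review itself).

```latex
% Page-Wootters mechanism, stated schematically: a clock C and a system S share a
% stationary joint state satisfying a Wheeler-DeWitt-type constraint
\[
  \left(\hat{H}_C \otimes \mathbb{1}_S + \mathbb{1}_C \otimes \hat{H}_S\right)
  |\Psi\rangle\!\rangle = 0 ,
\]
% and conditioning on the clock reading t recovers ordinary quantum dynamics:
\[
  |\psi_S(t)\rangle = \left(\langle t|_C \otimes \mathbb{1}_S\right)|\Psi\rangle\!\rangle ,
  \qquad
  i\hbar \frac{d}{dt}|\psi_S(t)\rangle = \hat{H}_S\,|\psi_S(t)\rangle .
\]
```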
Fusion-based quantum computation
We introduce fusion-based quantum computing (FBQC) - a model of universal quantum computation in which entangling measurements, called fusions, are performed on the qubits of small constant-sized entangled resource states. We introduce a stabilizer formalism for analyzing fault tolerance and computation in these schemes. This framework naturally captures the error structure that arises in certain physical systems for quantum computing, such as photonics. FBQC can offer significant architectural simplifications, enabling hardware made up of many identical modules, requiring an extremely low depth of operations on each physical qubit and reducing classical processing requirements. We present two pedagogical examples of fault-tolerant schemes constructed in this framework and numerically evaluate their threshold under a hardware-agnostic fusion error model including both erasure and Pauli errors. We also study an error model of linear optical quantum computing with probabilistic fusion and photon loss. In FBQC, the non-determinism of fusion is directly dealt with by the quantum error correction protocol, along with other errors. We find that tailoring the fault-tolerance framework to the physical system allows the scheme to have a higher threshold than schemes reported in the literature. We present a ballistic scheme which can tolerate a 10.4% probability of suffering photon loss in each fusion.
Experimental demonstration of memory-enhanced quantum communication
The ability to communicate quantum information over long distances is of central importance in quantum science and engineering. For example, it enables secure quantum key distribution (QKD) relying on fundamental principles that prohibit the "cloning" of unknown quantum states. While QKD is being successfully deployed, its range is currently limited by photon losses and cannot be extended using straightforward measure-and-repeat strategies without compromising its unconditional security. Alternatively, quantum repeaters, which utilize intermediate quantum memory nodes and error correction techniques, can extend the range of quantum channels. However, their implementation remains an outstanding challenge, requiring a combination of efficient and high-fidelity quantum memories, gate operations, and measurements. Here we report the experimental realization of memory-enhanced quantum communication. We use a single solid-state spin memory integrated in a nanophotonic diamond resonator to implement asynchronous Bell-state measurements. This enables a four-fold increase in the secret key rate of measurement-device-independent (MDI) QKD over the loss-equivalent direct-transmission method while operating at megahertz clock rates. Our results represent a significant step towards practical quantum repeaters and large-scale quantum networks.
Entanglement Purification in Quantum Networks: Guaranteed Improvement and Optimal Time
While the concept of entanglement purification protocols (EPPs) is straightforward, the integration of EPPs in network architectures requires careful performance evaluations and optimizations that take into account realistic conditions and imperfections, especially probabilistic entanglement generation and quantum memory decoherence. It is important to understand what a successful EPP is guaranteed to improve for arbitrary, non-identical input states, as this determines whether we want to perform the EPP at all. When a successful EPP can offer an improvement, the time at which the EPP is performed should also be optimized to maximize the improvement. In this work, we study the guaranteed improvement and optimal time for the CNOT-based recurrence EPP, previously shown to be optimal in various scenarios. We first prove guaranteed improvement for multiple figures of merit, including fidelity and several entanglement measures, when compared to practical baselines as functions of input states. However, it is noteworthy that the guaranteed improvement we prove does not imply the universality of the EPP as introduced in arXiv:2407.21760. Then we prove robust, parameter-independent optimal times for typical error models and figures of merit. We further explore memory decoherence described by continuous-time Pauli channels, and demonstrate the phenomenon of an optimal-time transition when the memory decoherence error pattern changes. Our work deepens the understanding of EPP performance in realistic scenarios and offers insights into optimizing quantum networks that integrate EPPs.
Optimizing quantum phase estimation for the simulation of Hamiltonian eigenstates
We revisit quantum phase estimation algorithms for the purpose of obtaining the energy levels of many-body Hamiltonians and pay particular attention to the statistical analysis of their outputs. We introduce the mean phase direction of the parent distribution associated with eigenstate inputs as a new post-processing tool. By connecting it with the unknown phase, we find that if used as its direct estimator, it exceeds the accuracy of the standard majority rule using one less bit of resolution, making evident that it can also be inverted to provide unbiased estimation. Moreover, we show how to directly use this quantity to accurately find the energy levels when the initialized state is an eigenstate of the simulated propagator during the whole time evolution, which allows for shallower algorithms. We then use IBM Q hardware to carry out the digital quantum simulation of three toy models: a two-level system, a two-spin Ising model and a two-site Hubbard model at half-filling. Methodologies are provided to implement Trotterization and reduce the variability of results in noisy intermediate scale quantum computers.
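As context for the "mean phase direction" used above: in circular statistics this is the argument of the mean resultant vector of the measured phases. A sketch of that definition, applied to phase-estimation outcomes, is given below; the mapping from register outcomes to phases is an illustrative assumption, not a detail taken from the paper.

```latex
% Mean phase direction of M phase samples \varphi_1, ..., \varphi_M, e.g. obtained as
% \varphi_k = 2\pi m_k / 2^t from outcomes m_k of a t-bit phase-estimation register
% (illustrative mapping):
\[
  \bar{\varphi} \;=\; \arg\!\left( \frac{1}{M} \sum_{k=1}^{M} e^{\,i \varphi_k} \right),
\]
% which, suitably calibrated or inverted, serves as a direct estimator of the unknown phase.
```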
Minimal evolution times for fast, pulse-based state preparation in silicon spin qubits
Standing as one of the most significant barriers to reaching quantum advantage, state-preparation fidelities on noisy intermediate-scale quantum processors suffer from quantum-gate errors, which accumulate over time. A potential remedy is pulse-based state preparation. We numerically investigate the minimal evolution times (METs) attainable by optimizing (microwave and exchange) pulses on silicon hardware. We investigate two state-preparation tasks. First, we consider the preparation of molecular ground states and find the METs for H_2, HeH^+, and LiH to be 2.4 ns, 4.4 ns, and 27.2 ns, respectively. Second, we consider transitions between arbitrary states and find the METs for transitions between arbitrary four-qubit states to be below 50 ns. For comparison, connecting arbitrary two-qubit states via one- and two-qubit gates on the same silicon processor requires approximately 200 ns. This comparison indicates that pulse-based state preparation is likely to utilize the coherence times of silicon hardware more efficiently than gate-based state preparation. Finally, we quantify the effect of silicon device parameters on the MET. We show that increasing the maximal exchange amplitude from 10 MHz to 1 GHz shortens the METs, e.g., for H_2 from 84.3 ns to 2.4 ns. This demonstrates the importance of fast exchange. We also show that increasing the maximal amplitude of the microwave drive from 884 kHz to 56.6 MHz shortens state transitions, e.g., for two-qubit states from 1000 ns to 25 ns. Our results bound both the state-preparation times for general quantum algorithms and the execution times of variational quantum algorithms with silicon spin qubits.
Quantum control of a cat-qubit with bit-flip times exceeding ten seconds
Binary classical information is routinely encoded in the two metastable states of a dynamical system. Since these states may exhibit macroscopic lifetimes, the encoded information inherits a strong protection against bit-flips. A recent qubit - the cat-qubit - is encoded in the manifold of metastable states of a quantum dynamical system, thereby acquiring bit-flip protection. An outstanding challenge is to gain quantum control over such a system without breaking its protection. If this challenge is met, significant shortcuts in hardware overhead are forecast for quantum computing. In this experiment, we implement a cat-qubit with bit-flip times exceeding ten seconds. This is a four-order-of-magnitude improvement over previous cat-qubit implementations, and a six-order-of-magnitude enhancement over the lifetime of the single photons that compose this dynamical qubit. This was achieved by introducing a quantum tomography protocol that does not break bit-flip protection. We prepare and image quantum superposition states, and measure phase-flip times above 490 nanoseconds. Most importantly, we control the phase of these superpositions while maintaining the bit-flip time above ten seconds. This work demonstrates quantum operations that preserve macroscopic bit-flip times, a necessary step to scale these dynamical qubits into fully protected hardware-efficient architectures.
Robust Angular Synchronization via Directed Graph Neural Networks
The angular synchronization problem aims to accurately estimate (up to a constant additive phase) a set of unknown angles $\theta_1, \dots, \theta_n \in [0, 2\pi)$ from $m$ noisy measurements of their offsets $\theta_i - \theta_j \bmod 2\pi$. Applications include, for example, sensor network localization, phase retrieval, and distributed clock synchronization. An extension of the problem to the heterogeneous setting (dubbed $k$-synchronization) is to estimate $k$ groups of angles simultaneously, given noisy observations (with unknown group assignment) from each group. Existing methods for angular synchronization usually perform poorly in high-noise regimes, which are common in applications. In this paper, we leverage neural networks for the angular synchronization problem, and its heterogeneous extension, by proposing GNNSync, a theoretically-grounded end-to-end trainable framework using directed graph neural networks. In addition, new loss functions are devised to encode synchronization objectives. Experimental results on extensive data sets demonstrate that GNNSync attains competitive, and often superior, performance against a comprehensive set of baselines for the angular synchronization problem and its extension, validating the robustness of GNNSync even at high noise levels.
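For context, the classical formulation that such learning-based methods build on (Singer-style angular synchronization) can be written as an optimization over unit-modulus variables with a well-known spectral relaxation; the sketch below describes that baseline setup, not the GNNSync architecture itself.

```latex
% Angular synchronization over unit-modulus variables z_i = e^{i\theta_i}, given noisy
% offsets \delta_{ij} \approx \theta_i - \theta_j (mod 2\pi) on the edges E of a measurement graph:
\[
  \max_{\theta_1,\dots,\theta_n} \sum_{(i,j)\in E} \cos\!\left(\theta_i - \theta_j - \delta_{ij}\right)
  \;=\;
  \max_{|z_i| = 1} \ \mathrm{Re}\!\left( \sum_{(i,j)\in E} \bar{z}_i\, z_j\, e^{\,i\delta_{ij}} \right).
\]
% A standard spectral relaxation takes the top eigenvector v of the Hermitian matrix
% H_{ij} = e^{i\delta_{ij}} (with H_{ji} = \overline{H_{ij}}) and reads off \hat{\theta}_i = \arg(v_i).
```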
Teleportation of entanglement over 143 km
As a direct consequence of the no-cloning theorem, the deterministic amplification as in classical communication is impossible for quantum states. This calls for more advanced techniques in a future global quantum network, e.g. for cloud quantum computing. A unique solution is the teleportation of an entangled state, i.e. entanglement swapping, representing the central resource to relay entanglement between distant nodes. Together with entanglement purification and a quantum memory it constitutes a so-called quantum repeater. Since the aforementioned building blocks have been individually demonstrated in laboratory setups only, the applicability of the required technology in real-world scenarios remained to be proven. Here we present a free-space entanglement-swapping experiment between the Canary Islands of La Palma and Tenerife, verifying the presence of quantum entanglement between two previously independent photons separated by 143 km. We obtained an expectation value for the entanglement-witness operator, more than 6 standard deviations beyond the classical limit. By consecutive generation of the two required photon pairs and space-like separation of the relevant measurement events, we also showed the feasibility of the swapping protocol in a long-distance scenario, where the independence of the nodes is highly demanded. Since our results already allow for efficient implementation of entanglement purification, we anticipate our assay to lay the ground for a fully-fledged quantum repeater over a realistic high-loss and even turbulent quantum channel.
Protocols for creating and distilling multipartite GHZ states with Bell pairs
The distribution of high-quality Greenberger-Horne-Zeilinger (GHZ) states is at the heart of many quantum communication tasks, ranging from extending the baseline of telescopes to secret sharing. They also play an important role in error-correction architectures for distributed quantum computation, where Bell pairs can be leveraged to create an entangled network of quantum computers. We investigate the creation and distillation of GHZ states out of non-perfect Bell pairs over quantum networks. In particular, we introduce a heuristic dynamic programming algorithm to optimize over a large class of protocols that create and purify GHZ states. All protocols considered use a common framework based on measurements of non-local stabilizer operators of the target state (i.e., the GHZ state), where each non-local measurement consumes another (non-perfect) entangled state as a resource. The new protocols outperform previous proposals for scenarios without decoherence and local gate noise. Furthermore, the algorithms can be applied for finding protocols for any number of parties and any number of entangled pairs involved.
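As a concrete reminder of the stabilizer structure these protocols target, the n-party GHZ state is the joint +1 eigenstate of one X-type and n-1 Z-type operators; measuring such stabilizers non-locally across network nodes is what consumes additional Bell pairs.

```latex
% Stabilizer generators of the n-qubit GHZ state
% |GHZ_n\rangle = (|0\cdots0\rangle + |1\cdots1\rangle)/\sqrt{2}:
\[
  S_X = X_1 X_2 \cdots X_n ,
  \qquad
  S_Z^{(k)} = Z_k Z_{k+1} , \quad k = 1, \dots, n-1 .
\]
```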
Review of Distributed Quantum Computing. From single QPU to High Performance Quantum Computing
The emerging field of quantum computing has shown that it might change how we process information by using the unique principles of quantum mechanics. As researchers continue to push the boundaries of quantum technologies to unprecedented levels, distributed quantum computing emerges as an obvious path to explore with the aim of boosting the computational power of current quantum systems. This paper presents a comprehensive survey of the current state of the art in the distributed quantum computing field, exploring its foundational principles, landscape of achievements, challenges, and promising directions for further research. From quantum communication protocols to entanglement-based distributed algorithms, each aspect contributes to the mosaic of distributed quantum computing, making it an attractive approach to address the limitations of classical computing. Our objective is to provide an exhaustive overview for experienced researchers and field newcomers.
Error Correction of Quantum Algorithms: Arbitrarily Accurate Recovery Of Noisy Quantum Signal Processing
The intrinsic probabilistic nature of quantum systems makes error correction or mitigation indispensable for quantum computation. While current error-correcting strategies focus on correcting errors in quantum states or quantum gates, these fine-grained error-correction methods can incur significant overhead for quantum algorithms of increasing complexity. We present a first step in achieving error correction at the level of quantum algorithms by adopting a unified perspective on modern quantum algorithms via quantum signal processing (QSP). An error model of under- or over-rotation of the signal processing operator, parameterized by $\epsilon < 1$, is introduced. It is shown that while Pauli $Z$-errors are not recoverable without additional resources, Pauli $X$ and $Y$ errors can be arbitrarily suppressed by coherently appending a noisy "recovery QSP." Furthermore, it is found that a recovery QSP of length $O(2^k c^{k^2} d)$ is sufficient to correct any length-$d$ QSP with $c$ unique phases to $k$th order in error $\epsilon$. Allowing an additional assumption, a lower bound of $\Omega(cd)$ on the length of the recovery sequence is shown, which is tight for $k = 1$. Our algorithmic-level error correction method is applied to Grover's fixed-point search algorithm as a demonstration.
Programmable Heisenberg interactions between Floquet qubits
The fundamental trade-off between robustness and tunability is a central challenge in the pursuit of quantum simulation and fault-tolerant quantum computation. In particular, many emerging quantum architectures are designed to achieve high coherence at the expense of having fixed spectra and consequently limited types of controllable interactions. Here, by adiabatically transforming fixed-frequency superconducting circuits into modifiable Floquet qubits, we demonstrate an XXZ Heisenberg interaction with fully adjustable anisotropy. This interaction model is on one hand the basis for many-body quantum simulation of spin systems, and on the other hand the primitive for an expressive quantum gate set. To illustrate the robustness and versatility of our Floquet protocol, we tailor the Heisenberg Hamiltonian and implement two-qubit iSWAP, CZ, and SWAP gates with estimated fidelities of 99.32(3)%, 99.72(2)%, and 98.93(5)%, respectively. In addition, we implement a Heisenberg interaction between higher energy levels and employ it to construct a three-qubit CCZ gate with a fidelity of 96.18(5)%. Importantly, the protocol is applicable to various fixed-frequency high-coherence platforms, thereby unlocking a suite of essential interactions for high-performance quantum information processing. From a broader perspective, our work provides compelling avenues for future exploration of quantum electrodynamics and optimal control using the Floquet framework.
Learning to Program Variational Quantum Circuits with Fast Weights
Quantum Machine Learning (QML) has surfaced as a pioneering framework addressing sequential control tasks and time-series modeling. It has demonstrated empirical quantum advantages notably within domains such as Reinforcement Learning (RL) and time-series prediction. A significant advancement lies in Quantum Recurrent Neural Networks (QRNNs), specifically tailored for memory-intensive tasks encompassing partially observable environments and non-linear time-series prediction. Nevertheless, QRNN-based models encounter challenges, notably prolonged training durations stemming from the necessity to compute quantum gradients using backpropagation-through-time (BPTT). This predicament is exacerbated when executing the complete model on quantum devices, primarily due to the substantial demand for circuit evaluations arising from the parameter-shift rule. This paper introduces the Quantum Fast Weight Programmers (QFWP) as a solution to the temporal or sequential learning challenge. The QFWP leverages a classical neural network (referred to as the 'slow programmer') functioning as a quantum programmer to swiftly modify the parameters of a variational quantum circuit (termed the 'fast programmer'). Instead of completely overwriting the fast programmer at each time-step, the slow programmer generates parameter changes or updates for the quantum circuit parameters. This approach enables the fast programmer to incorporate past observations or information. Notably, the proposed QFWP model achieves learning of temporal dependencies without necessitating the use of quantum recurrent neural networks. Numerical simulations conducted in this study showcase the efficacy of the proposed QFWP model in both time-series prediction and RL tasks. The model exhibits performance levels either comparable to or surpassing those achieved by QLSTM-based models.
Less Quantum, More Advantage: An End-to-End Quantum Algorithm for the Jones Polynomial
We present an end-to-end reconfigurable algorithmic pipeline for solving a famous problem in knot theory using a noisy digital quantum computer, namely computing the value of the Jones polynomial at the fifth root of unity within additive error for any input link, i.e. a closed braid. This problem is DQC1-complete for Markov-closed braids and BQP-complete for Plat-closed braids, and we accommodate both versions of the problem. Even though it is widely believed that DQC1 is strictly contained in BQP, and so is 'less quantum', the resource requirements of classical algorithms for the DQC1 version are at least as high as for the BQP version, and so we potentially gain 'more advantage' by focusing on Markov-closed braids in our exposition. We demonstrate our quantum algorithm on Quantinuum's H2-2 quantum computer and show the effect of problem-tailored error-mitigation techniques. Further, leveraging that the Jones polynomial is a link invariant, we construct an efficiently verifiable benchmark to characterise the effect of noise present in a given quantum processor. In parallel, we implement and benchmark the state-of-the-art tensor-network-based classical algorithms for computing the Jones polynomial. The practical tools provided in this work allow for precise resource estimation to identify near-term quantum advantage for a meaningful quantum-native problem in knot theory.
Comparing coherent and incoherent models for quantum homogenization
Here we investigate the role of quantum interference in the quantum homogenizer, whose convergence properties model a thermalization process. In the original quantum homogenizer protocol, a system qubit converges to the state of identical reservoir qubits through partial-swap interactions that allow interference between reservoir qubits. We design an alternative, incoherent quantum homogenizer, where each system-reservoir interaction is moderated by a control qubit using a controlled-swap interaction. We show that our incoherent homogenizer satisfies the essential conditions for homogenization, being able to transform a qubit from any state to any other state to arbitrary accuracy, with negligible impact on the reservoir qubits' states. Our results show that the convergence properties of homogenization machines that are important for modelling thermalization are not dependent on coherence between qubits in the homogenization protocol. We then derive bounds on the resources required to re-use the homogenizers for performing state transformations. This demonstrates that both homogenizers are universal for any number of homogenizations, for an increased resource cost.
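To make the two interaction types concrete, a minimal sketch of the unitaries usually meant by "partial swap" and "controlled swap" is shown below; the interaction strength eta is a generic parameter, not a value from the paper.

```latex
% Partial swap between the system qubit s and a reservoir qubit r (S_{sr} = SWAP):
\[
  U_{\mathrm{pswap}}(\eta) \;=\; \cos\eta \; \mathbb{1}_{sr} \;+\; i \sin\eta \; S_{sr} ,
\]
% versus a controlled swap moderated by an ancillary control qubit c:
\[
  U_{\mathrm{cswap}} \;=\; |0\rangle\langle 0|_c \otimes \mathbb{1}_{sr}
  \;+\; |1\rangle\langle 1|_c \otimes S_{sr} .
\]
```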
SeQUeNCe: A Customizable Discrete-Event Simulator of Quantum Networks
Recent advances in quantum information science enabled the development of quantum communication network prototypes and created an opportunity to study full-stack quantum network architectures. This work develops SeQUeNCe, a comprehensive, customizable quantum network simulator. Our simulator consists of five modules: Hardware models, Entanglement Management protocols, Resource Management, Network Management, and Application. This framework is suitable for simulation of quantum network prototypes that capture the breadth of current and future hardware technologies and protocols. We implement a comprehensive suite of network protocols and demonstrate the use of SeQUeNCe by simulating a photonic quantum network with nine routers equipped with quantum memories. The simulation capabilities are illustrated in three use cases. We show the dependence of quantum network throughput on several key hardware parameters and study the impact of classical control message latency. We also investigate quantum memory usage efficiency in routers and demonstrate that redistributing memory according to anticipated load increases network capacity by 69.1% and throughput by 6.8%. We design SeQUeNCe to enable comparisons of alternative quantum network technologies, experiment planning, and validation and to aid with new protocol design. We are releasing SeQUeNCe as an open source tool and aim to generate community interest in extending it.
A Grand Unification of Quantum Algorithms
Quantum algorithms offer significant speedups over their classical counterparts for a variety of problems. The strongest arguments for this advantage are borne by algorithms for quantum search, quantum phase estimation, and Hamiltonian simulation, which appear as subroutines for large families of composite quantum algorithms. A number of these quantum algorithms were recently tied together by a novel technique known as the quantum singular value transformation (QSVT), which enables one to perform a polynomial transformation of the singular values of a linear operator embedded in a unitary matrix. In the seminal GSLW'19 paper on QSVT [Gilyén, Su, Low, and Wiebe, ACM STOC 2019], many algorithms are encompassed, including amplitude amplification, methods for the quantum linear systems problem, and quantum simulation. Here, we provide a pedagogical tutorial through these developments, first illustrating how quantum signal processing may be generalized to the quantum eigenvalue transform, from which QSVT naturally emerges. Paralleling GSLW'19, we then employ QSVT to construct intuitive quantum algorithms for search, phase estimation, and Hamiltonian simulation, and also showcase algorithms for the eigenvalue threshold problem and matrix inversion. This overview illustrates how QSVT is a single framework comprising the three major quantum algorithms, thus suggesting a grand unification of quantum algorithms.
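For readers who want the basic object being generalized here, the single-qubit quantum signal processing sequence at the root of QSVT can be written in one line; the convention below (the Wx signal-operator convention) is one of several equivalent choices.

```latex
% Quantum signal processing: a fixed signal rotation W(x) interleaved with tunable
% Z-phases \phi_0, ..., \phi_d implements a degree-d polynomial P(x) in the top-left entry.
\[
  W(x) = \begin{pmatrix} x & i\sqrt{1 - x^2} \\ i\sqrt{1 - x^2} & x \end{pmatrix} ,
  \qquad
  e^{\,i\phi_0 Z} \prod_{k=1}^{d} W(x)\, e^{\,i\phi_k Z}
  = \begin{pmatrix} P(x) & * \\ * & * \end{pmatrix} .
\]
% QSVT lifts this construction from a scalar x to the singular values of a linear
% operator block-encoded in a unitary.
```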
Federated learning with distributed fixed design quantum chips and quantum channels
The privacy in classical federated learning can be breached through the use of local gradient results along with engineered queries to the clients. However, quantum communication channels are considered more secure because a measurement on the channel causes a loss of information, which can be detected by the sender. Therefore, the quantum version of federated learning can be used to provide more privacy. Additionally, sending an $N$-dimensional data vector through a quantum channel requires sending $\log N$ entangled qubits, which can potentially provide exponential efficiency if the data vector is utilized as quantum states. In this paper, we propose a quantum federated learning model where fixed-design quantum chips are operated based on the quantum states sent by a centralized server. Based on the incoming superposition states, the clients compute and then send their local gradients as quantum states to the server, where they are aggregated to update parameters. Since the server does not send model parameters, but instead sends the operator as a quantum state, the clients are not required to share the model. This allows for the creation of asynchronous learning models. In addition, the model as a quantum state is fed directly into the client-side chips; therefore, measurements on the incoming quantum state are not required to obtain the model parameters in order to compute gradients. This can provide an efficiency advantage over models where the parameter vector is sent via classical or quantum channels and local gradients are computed from the measured values of these parameters.
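The logarithmic qubit count quoted above corresponds to amplitude encoding; a brief statement of that standard encoding (not specific to this proposal) reads:

```latex
% Amplitude encoding of an N-dimensional data vector x into n = \lceil \log_2 N \rceil qubits:
\[
  |x\rangle \;=\; \frac{1}{\lVert x \rVert_2} \sum_{i=0}^{N-1} x_i\, |i\rangle ,
\]
% so transmitting the state requires a register of only \lceil \log_2 N \rceil qubits
% rather than N classical values.
```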
Proposal for room-temperature quantum repeaters with nitrogen-vacancy centers and optomechanics
We propose a quantum repeater architecture that can operate under ambient conditions. Our proposal builds on recent progress towards non-cryogenic spin-photon interfaces based on nitrogen-vacancy centers, which have excellent spin coherence times even at room temperature, and on optomechanics, which makes it possible to avoid phonon-related decoherence and also allows the emitted photons to be in the telecom band. We apply the photon number decomposition method to quantify the fidelity and the efficiency of entanglement established between two remote electron spins. We describe how the entanglement can be stored in nuclear spins and extended to long distances via quasi-deterministic entanglement swapping operations involving the electron and nuclear spins. We furthermore propose schemes to achieve high-fidelity readout of the spin states at room temperature using the spin-optomechanics interface. Our work shows that long-distance quantum networks made of solid-state components that operate at room temperature are within reach of current technological capabilities.
A quantum walk control plane for distributed quantum computing in quantum networks
Quantum networks are complex systems formed by the interaction among quantum processors through quantum channels. Analogous to classical computer networks, quantum networks allow for the distribution of quantum computation among quantum computers. In this work, we describe a quantum walk protocol to perform distributed quantum computing in a quantum network. The protocol uses a quantum walk as a quantum control signal to perform distributed quantum operations. We consider a generalization of the discrete-time coined quantum walk model that accounts for the interaction between a quantum walker system in the network graph with quantum registers inside the network nodes. The protocol logically captures distributed quantum computing, abstracting hardware implementation and the transmission of quantum information through channels. Control signal transmission is mapped to the propagation of the walker system across the network, while interactions between the control layer and the quantum registers are embedded into the application of coin operators. We demonstrate how to use the quantum walker system to perform a distributed CNOT operation, which shows the universality of the protocol for distributed quantum computing. Furthermore, we apply the protocol to the task of entanglement distribution in a quantum network.
Designing a Quantum Network Protocol
The second quantum revolution brings with it the promise of a quantum internet. As the first quantum network hardware prototypes near completion, new challenges emerge. A functional network is more than just the physical hardware, yet work on scalable quantum network systems is in its infancy. In this paper we present a quantum network protocol designed to enable end-to-end quantum communication in the face of the new fundamental and technical challenges brought by quantum mechanics. We develop a quantum data plane protocol that enables end-to-end quantum communication and can serve as a building block for more complex services. One of the key challenges in near-term quantum technology is decoherence -- the gradual decay of quantum information -- which imposes extremely stringent limits on storage times. Our protocol is designed to be efficient in the face of short quantum memory lifetimes. We demonstrate this using a simulator for quantum networks and show that the protocol is able to deliver its service even in the face of significant losses due to decoherence. Finally, we conclude by showing that the protocol remains functional on the extremely resource-limited hardware that is being developed today, underlining the timeliness of this work.
Ultra-sensitive solid-state organic molecular microwave quantum receiver
High-accuracy microwave sensing is widely demanded in various fields, ranging from cosmology to microwave quantum technology. Quantum receivers based on inorganic solid-state spin systems are promising candidates for this purpose because of their stability and compatibility, but their best sensitivity is currently limited to a few pT$/\sqrt{\mathrm{Hz}}$. Here, by utilising an enhanced readout scheme with state-of-the-art solid-state maser technology, we develop a robust microwave quantum receiver operated by organic molecular spins at ambient conditions. Owing to the maser amplification, the sensitivity of the receiver reaches $6.14 \pm 0.17$ fT$/\sqrt{\mathrm{Hz}}$, exceeding that of the inorganic solid-state quantum receivers by three orders of magnitude. The heterodyne detection without additional local oscillators improves the bandwidth of the receiver and allows frequency detection. The scheme can be extended to other solid-state spin systems without complicated control pulses and thus enables practical applications such as electron spin resonance spectroscopy, dark matter searches, and astronomical observations.
Stim: a fast stabilizer circuit simulator
This paper presents "Stim", a fast simulator for quantum stabilizer circuits. The paper explains how Stim works and compares it to existing tools. With no foreknowledge, Stim can analyze a distance-100 surface code circuit (20 thousand qubits, 8 million gates, 1 million measurements) in 15 seconds and then begin sampling full circuit shots at a rate of 1 kHz. Stim uses a stabilizer tableau representation, similar to Aaronson and Gottesman's CHP simulator, but with three main improvements. First, Stim improves the asymptotic complexity of deterministic measurement from quadratic to linear by tracking the inverse of the circuit's stabilizer tableau. Second, Stim improves the constant factors of the algorithm by using a cache-friendly data layout and 256-bit-wide SIMD instructions. Third, Stim only uses expensive stabilizer tableau simulation to create an initial reference sample. Further samples are collected in bulk by using that sample as a reference for batches of Pauli frames propagating through the circuit.
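As a small illustration of the workflow described above (build a circuit, pay the one-time tableau-simulation cost for a reference sample, then batch-sample Pauli frames), here is a minimal usage sketch with Stim's Python API; the distance and noise values are arbitrary example choices, far smaller than the distance-100 benchmark quoted in the abstract.

```python
import stim

# Generate a pre-built noisy surface-code memory circuit (example parameters only).
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=5,
    rounds=5,
    after_clifford_depolarization=0.001,
)

# Compiling the sampler performs the expensive one-time stabilizer-tableau simulation
# that yields a reference sample; subsequent shots are cheap Pauli-frame propagations.
sampler = circuit.compile_detector_sampler()
detection_events = sampler.sample(shots=10_000)
print(detection_events.shape)  # (10000, number of detectors in the circuit)
```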
Single-shot Quantum Signal Processing Interferometry
Quantum systems of infinite dimension, such as bosonic oscillators, provide vast resources for quantum sensing. Yet, a general theory on how to manipulate such bosonic modes for sensing beyond parameter estimation is unknown. We present a general algorithmic framework, quantum signal processing interferometry (QSPI), for quantum sensing at the fundamental limits of quantum mechanics by generalizing Ramsey-type interferometry. Our QSPI sensing protocol relies on performing nonlinear polynomial transformations on the oscillator's quadrature operators by generalizing quantum signal processing (QSP) from qubits to hybrid qubit-oscillator systems. We use our QSPI sensing framework to make efficient binary decisions on a displacement channel in the single-shot limit. Theoretical analysis suggests the sensing accuracy, given a single-shot qubit measurement, scales inversely with the sensing time or circuit depth of the algorithm. We further concatenate a series of such binary decisions to perform parameter estimation in a bit-by-bit fashion. Numerical simulations are performed to support these statements. Our QSPI protocol offers a unified framework for quantum sensing using continuous-variable bosonic systems beyond parameter estimation and establishes a promising avenue toward efficient and scalable quantum control and quantum sensing schemes beyond the NISQ era.
Advances in Quantum Cryptography
Quantum cryptography is arguably the fastest growing area in quantum information science. Novel theoretical protocols are designed on a regular basis, security proofs are constantly improving, and experiments are gradually moving from proof-of-principle lab demonstrations to in-field implementations and technological prototypes. In this review, we provide both a general introduction and a state-of-the-art description of the recent theoretical and experimental advances in the field. We start by reviewing protocols of quantum key distribution based on discrete-variable systems. Next we consider aspects of device independence, satellite challenges, and high-rate protocols based on continuous-variable systems. We will then discuss the ultimate limits of point-to-point private communications and how quantum repeaters and networks may overcome these restrictions. Finally, we will discuss some aspects of quantum cryptography beyond standard quantum key distribution, including quantum data locking and quantum digital signatures.
Assembly and coherent control of a register of nuclear spin qubits
We introduce an optical tweezer platform for assembling and individually manipulating a two-dimensional register of nuclear spin qubits. Each nuclear spin qubit is encoded in the ground $^{1}S_{0}$ manifold of $^{87}$Sr and is individually manipulated by site-selective addressing beams. We observe that spin relaxation is negligible after 5 seconds, indicating that $T_1 \gg 5$ s. Furthermore, utilizing simultaneous manipulation of subsets of qubits, we demonstrate significant phase coherence over the entire register, estimating $T_2^\star = (21 \pm 7)$ s and measuring $T_2^{\mathrm{echo}} = (42 \pm 6)$ s.
A Machine Learning Pipeline for Hunting Hidden Axion Signals in Pulsar Dispersion Measurements
In the axion model, electromagnetic waves interacting with axions induce frequency-dependent time delays, determined by the axion mass and decay constant. These small delays are difficult to detect, making traditional methods ineffective. To address this, we computed time delays for various parameters and found a prominent dispersion signal when the wave frequency equals half the axion mass. Based on this, we developed a machine learning-based pipeline, achieving 95% classification accuracy and demonstrating strong detection capability in low signal-to-noise data. Applying this to PSR J1933-6211, we found no axion-induced delays within current sensitivity limits. While existing constraints are limited by atomic clock resolution in radio telescopes, future advances in optical clocks and broader bandwidths will enable more extensive searches. In particular, combining high-precision optical clocks with next-generation radio telescopes, such as the Qitai Radio Telescope, could improve decay constant constraints by four orders of magnitude for axion masses in the $10^{-6} \sim 10^{-4}$ eV range.
Mitiq: A software package for error mitigation on noisy quantum computers
We introduce Mitiq, a Python package for error mitigation on noisy quantum computers. Error mitigation techniques can reduce the impact of noise on near-term quantum computers with minimal overhead in quantum resources by relying on a mixture of quantum sampling and classical post-processing techniques. Mitiq is an extensible toolkit of different error mitigation methods, including zero-noise extrapolation, probabilistic error cancellation, and Clifford data regression. The library is designed to be compatible with generic backends and interfaces with different quantum software frameworks. We describe Mitiq using code snippets to demonstrate usage and discuss features and contribution guidelines. We present several examples demonstrating error mitigation on IBM and Rigetti superconducting quantum processors as well as on noisy simulators.
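In the spirit of the code snippets the abstract mentions, a minimal zero-noise-extrapolation example with Mitiq might look like the following; the toy circuit and the noisy executor are stand-ins written for illustration, not taken from the paper.

```python
import cirq
from mitiq import zne

# Toy two-qubit circuit (illustrative only).
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q0), cirq.CNOT(q0, q1), cirq.measure(q0, q1, key="m"))

def noisy_executor(circ: cirq.Circuit) -> float:
    """Run the circuit on a depolarizing-noise simulator and return the
    probability of the all-zeros outcome as the expectation value."""
    noisy = circ.with_noise(cirq.depolarize(p=0.01))
    result = cirq.Simulator().run(noisy, repetitions=2000)
    return result.histogram(key="m").get(0, 0) / 2000

# Zero-noise extrapolation: Mitiq folds the circuit at several noise scale factors,
# calls the executor on each folded circuit, and extrapolates to the zero-noise limit.
mitigated_value = zne.execute_with_zne(circuit, noisy_executor)
print(mitigated_value)
```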
Approximate Quantum Compiling for Quantum Simulation: A Tensor Network based approach
We introduce AQCtensor, a novel algorithm to produce short-depth quantum circuits from Matrix Product States (MPS). Our approach is specifically tailored to the preparation of quantum states generated from the time evolution of quantum many-body Hamiltonians. This tailored approach has two clear advantages over previous algorithms that were designed to map a generic MPS to a quantum circuit. First, we optimize all parameters of a parametric circuit at once using Approximate Quantum Compiling (AQC) - this is to be contrasted with other approaches based on locally optimizing a subset of circuit parameters and "sweeping" across the system. We introduce an optimization scheme to avoid the so-called "orthogonality catastrophe" - i.e. the fact that the fidelity of two arbitrary quantum states decays exponentially with the number of qubits - that would otherwise render a global optimization of the circuit impractical. Second, the depth of our parametric circuit is constant in the number of qubits for a fixed simulation time and fixed error tolerance. This is to be contrasted with the linear circuit Ansatz used in generic algorithms, whose depth scales linearly in the number of qubits. For simulation problems on 100 qubits, we show that AQCtensor thus achieves at least an order of magnitude reduction in the depth of the resulting optimized circuit, as compared with the best generic MPS-to-quantum-circuit algorithms. We demonstrate our approach on simulation problems on Heisenberg-like Hamiltonians on up to 100 qubits and find optimized quantum circuits that have significantly reduced depth as compared to standard Trotterized circuits.
Subsystem codes with high thresholds by gauge fixing and reduced qubit overhead
We introduce a technique that uses gauge fixing to significantly improve the quantum error correcting performance of subsystem codes. By changing the order in which check operators are measured, valuable additional information can be gained, and we introduce a new method for decoding which uses this information to improve performance. Applied to the subsystem toric code with three-qubit check operators, we increase the threshold under circuit-level depolarising noise from 0.67% to 0.81%. The threshold increases further under a circuit-level noise model with small finite bias, up to 2.22% for infinite bias. Furthermore, we construct families of finite-rate subsystem LDPC codes with three-qubit check operators and optimal-depth parity-check measurement schedules. To the best of our knowledge, these finite-rate subsystem codes outperform all known codes at circuit-level depolarising error rates as high as 0.2%, where they have a qubit overhead that is 4.3$\times$ lower than the most efficient version of the surface code and 5.1$\times$ lower than the subsystem toric code. Their threshold and pseudo-threshold exceed 0.42% for circuit-level depolarising noise, increasing to 2.4% under infinite bias using gauge fixing.
Data centers with quantum random access memory and quantum networks
In this paper, we propose the Quantum Data Center (QDC), an architecture combining Quantum Random Access Memory (QRAM) and quantum networks. We give a precise definition of QDC, and discuss its possible realizations and extensions. We discuss applications of QDC in quantum computation, quantum communication, and quantum sensing, with a primary focus on QDC for T-gate resources, QDC for multi-party private quantum communication, and QDC for distributed sensing through data compression. We show that QDC will provide efficient, private, and fast services as a future version of data centers.
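For readers unfamiliar with the QRAM primitive at the core of this architecture, its action can be stated in one line; this is the textbook definition rather than anything specific to the QDC proposal.

```latex
% A QRAM query applied to an address register in superposition returns the stored
% data d_i coherently into a bus register:
\[
  \sum_{i} \alpha_i\, |i\rangle_{\mathrm{addr}} |0\rangle_{\mathrm{bus}}
  \;\longmapsto\;
  \sum_{i} \alpha_i\, |i\rangle_{\mathrm{addr}} |d_i\rangle_{\mathrm{bus}} .
\]
```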
One-Time Universal Hashing Quantum Digital Signatures without Perfect Keys
Quantum digital signatures (QDS), generating correlated bit strings among three remote parties for signatures through the laws of quantum mechanics, can guarantee the non-repudiation, authenticity, and integrity of messages. Recently, a one-time universal hashing QDS framework, exploiting quantum asymmetric encryption and universal hash functions, has been proposed to significantly improve the signature rate and ensure unconditional security by directly signing the hash value of long messages. However, similar to quantum key distribution, this framework utilizes keys with perfect secrecy by performing privacy amplification, which introduces cumbersome matrix operations, thereby consuming large computational resources, causing delays and increasing the failure probability. Here, we prove that, unlike in private communication, imperfect quantum keys with limited information leakage can be used for digital signatures and authentication without compromising security, while offering an eight-order-of-magnitude improvement in signature rate for signing a megabit message compared with conventional single-bit schemes. This study significantly reduces the delay of data postprocessing and is compatible with any quantum key generation protocol. In our simulation, taking the two-photon twin-field key generation protocol as an example, QDS can be practically implemented over a fiber distance of 650 km between the signer and receiver. For the first time, this study offers a cryptographic application of quantum keys with imperfect secrecy and paves the way for the practical and agile implementation of digital signatures in a future quantum network.
Ground State Preparation via Dynamical Cooling
Quantum algorithms for probing ground-state properties of quantum systems require good initial states. Projection-based methods such as eigenvalue filtering rely on inputs that have a significant overlap with the low-energy subspace, which can be challenging for large, strongly-correlated systems. This issue has motivated the study of physically-inspired dynamical approaches such as thermodynamic cooling. In this work, we introduce a ground-state preparation algorithm based on the simulation of quantum dynamics. Our main insight is to transform the Hamiltonian by a shifted sign function via quantum signal processing, effectively mapping eigenvalues into positive and negative subspaces separated by a large gap. This automatically ensures that all states within each subspace conserve energy with respect to the transformed Hamiltonian. Subsequent time-evolution with a perturbed Hamiltonian induces transitions to lower-energy states while preventing unwanted jumps to higher energy states. The approach does not rely on a priori knowledge of energy gaps and requires no additional qubits to model a bath. Furthermore, it makes $\mathcal{O}(d^{3/2}/\epsilon)$ queries to the time-evolution operator of the system and $\mathcal{O}(d^{3/2})$ queries to a block-encoding of the perturbation, for $d$ cooling steps and an $\epsilon$-accurate energy resolution. Our results provide a framework for combining quantum signal processing and Hamiltonian simulation to design heuristic quantum algorithms for ground-state preparation.
Quantum Long Short-Term Memory
Long short-term memory (LSTM) is a kind of recurrent neural network (RNN) for modeling sequential and temporal-dependency data, and its effectiveness has been extensively established. In this work, we propose a hybrid quantum-classical model of LSTM, which we dub QLSTM. We demonstrate that the proposed model successfully learns several kinds of temporal data. In particular, we show that for certain testing cases, this quantum version of LSTM converges faster, or equivalently, reaches a better accuracy, than its classical counterpart. Due to the variational nature of our approach, the requirements on qubit counts and circuit depth are eased, and our work thus paves the way toward implementing machine learning algorithms for sequence modeling on noisy intermediate-scale quantum (NISQ) devices.
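To give a feel for the hybrid construction, the sketch below shows a small variational quantum circuit of the kind that can replace one gate of an LSTM cell; it is a generic illustration under assumed hyperparameters (4 qubits, PennyLane templates), not the authors' exact QLSTM circuit.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # assumed size, for illustration only
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc_block(inputs, weights):
    """One variational block: angle-encode the concatenated [x_t, h_{t-1}] features,
    apply trainable entangling layers, and read out Pauli-Z expectation values."""
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# In a QLSTM-style cell, several such blocks stand in for the classical weight matrices
# of the forget/input/output gates; their outputs are passed through the usual
# sigmoid/tanh nonlinearities before updating the cell and hidden states.
weights = np.random.uniform(0, 2 * np.pi, size=(2, n_qubits, 3))
features = np.array([0.1, -0.4, 0.7, 0.2])  # stand-in for [x_t, h_{t-1}]
print(vqc_block(features, weights))
```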
Hardware-efficient Variational Quantum Eigensolver for Small Molecules and Quantum Magnets
Quantum computers can be used to address molecular structure, materials science and condensed matter physics problems, which currently stretch the limits of existing high-performance computing resources. Finding exact numerical solutions to these interacting fermion problems has exponential cost, while Monte Carlo methods are plagued by the fermionic sign problem. These limitations of classical computational methods have made even few-atom molecular structures problems of practical interest for medium-sized quantum computers. Yet, thus far experimental implementations have been restricted to molecules involving only Period I elements. Here, we demonstrate the experimental optimization of up to six-qubit Hamiltonian problems with over a hundred Pauli terms, determining the ground state energy for molecules of increasing size, up to BeH2. This is enabled by a hardware-efficient variational quantum eigensolver with trial states specifically tailored to the available interactions in our quantum processor, combined with a compact encoding of fermionic Hamiltonians and a robust stochastic optimization routine. We further demonstrate the flexibility of our approach by applying the technique to a problem of quantum magnetism. Across all studied problems, we find agreement between experiment and numerical simulations with a noisy model of the device. These results help elucidate the requirements for scaling the method to larger systems, and aim at bridging the gap between problems at the forefront of high-performance computing and their implementation on quantum hardware.
A Quantum Algorithm for Solving Linear Differential Equations: Theory and Experiment
We present and experimentally realize a quantum algorithm for efficiently solving the following problem: given an $N \times N$ matrix $M$, an $N$-dimensional vector $\vec{b}$, and an initial vector $\vec{x}(0)$, obtain a target vector $\vec{x}(t)$ as a function of time $t$ according to the constraint $d\vec{x}(t)/dt = M\vec{x}(t) + \vec{b}$. We show that our algorithm exhibits an exponential speedup over its classical counterpart in certain circumstances. In addition, we demonstrate our quantum algorithm for a $4 \times 4$ linear differential equation using a 4-qubit nuclear magnetic resonance quantum information processor. Our algorithm provides a key technique for solving many important problems which rely on the solutions to linear differential equations.
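For reference, when $M$ is constant and invertible the constraint above has the closed-form solution below, which is essentially the quantity such an algorithm aims to encode in a quantum state.

```latex
% Closed-form solution of d\vec{x}(t)/dt = M\vec{x}(t) + \vec{b} for constant, invertible M:
\[
  \vec{x}(t) \;=\; e^{Mt}\, \vec{x}(0) \;+\; \left(e^{Mt} - \mathbb{1}\right) M^{-1}\, \vec{b} .
\]
```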
Multiplexed quantum repeaters based on dual-species trapped-ion systems
Trapped ions form an advanced technology platform for quantum information processing with long qubit coherence times, high-fidelity quantum logic gates, optically active qubits, and a potential to scale up in size while preserving a high level of connectivity between qubits. These traits make them attractive not only for quantum computing but also for quantum networking. Dedicated, special-purpose trapped-ion processors in conjunction with suitable interconnecting hardware can be used to form quantum repeaters that enable high-rate quantum communications between distant trapped-ion quantum computers in a network. In this regard, hybrid traps with two distinct species of ions, where one ion species can generate ion-photon entanglement that is useful for optically interfacing with the network and the other has long memory lifetimes, useful for qubit storage, have been proposed for entanglement distribution. We consider an architecture for a repeater based on such dual-species trapped-ion systems. We propose and analyze a protocol based on spatial and temporal mode multiplexing for entanglement distribution across a line network of such repeaters. Our protocol offers enhanced rates compared to rates previously reported for such repeaters. We determine the ion resources required at the repeaters to attain the enhanced rates, and the best rates attainable when constraints are placed on the number of repeaters and the number of ions per repeater. Our results bolster the case for near-term trapped-ion systems as quantum repeaters for long-distance quantum communications.
Optimal fidelity in implementing Grover's search algorithm on open quantum system
We investigate the fidelity of Grover's search algorithm by implementing it on an open quantum system. In particular, we study with what accuracy one can estimate that the algorithm would deliver the searched state. In reality, every system is influenced by its environment. We include the environmental effects on the system dynamics by using a recently reported fluctuation-regulated quantum master equation (FRQME). The FRQME indicates that in addition to the regular relaxation due to system-environment coupling, the applied drive also causes dissipation in the system dynamics. As a result, the fidelity is found to depend on both the drive-induced dissipative terms and the relaxation terms, and we find that there exists a competition between them, leading to an optimum value of the drive amplitude for which the fidelity becomes maximum. For efficient implementation of the search algorithm, precise knowledge of this optimum drive amplitude is essential.
A photonic cluster state machine gun
We present a method to convert certain single photon sources into devices capable of emitting large strings of photonic cluster state in a controlled and pulsed "on demand" manner. Such sources would greatly reduce the resources required to achieve linear optical quantum computation. Standard spin errors, such as dephasing, are shown to affect only 1 or 2 of the emitted photons at a time. This allows for the use of standard fault tolerance techniques, and shows that the photonic machine gun can be fired for arbitrarily long times. Using realistic parameters for current quantum dot sources, we conclude high entangled-photon emission rates are achievable, with Pauli-error rates per photon of less than 0.2%. For quantum dot sources the method has the added advantage of alleviating the problematic issues of obtaining identical photons from independent, non-identical quantum dots, and of exciton dephasing.
Beyond Turn-Based Interfaces: Synchronous LLMs as Full-Duplex Dialogue Agents
Despite broad interest in modeling spoken dialogue agents, most approaches are inherently "half-duplex" -- restricted to turn-based interaction with responses requiring explicit prompting by the user or implicit tracking of interruption or silence events. Human dialogue, by contrast, is "full-duplex" allowing for rich synchronicity in the form of quick and dynamic turn-taking, overlapping speech, and backchanneling. Technically, the challenge of achieving full-duplex dialogue with LLMs lies in modeling synchrony as pre-trained LLMs do not have a sense of "time". To bridge this gap, we propose Synchronous LLMs for full-duplex spoken dialogue modeling. We design a novel mechanism to integrate time information into Llama3-8b so that they run synchronously with the real-world clock. We also introduce a training recipe that uses 212k hours of synthetic spoken dialogue data generated from text dialogue data to create a model that generates meaningful and natural spoken dialogue, with just 2k hours of real-world spoken dialogue data. Synchronous LLMs outperform state-of-the-art in dialogue meaningfulness while maintaining naturalness. Finally, we demonstrate the model's ability to participate in full-duplex dialogue by simulating interaction between two agents trained on different datasets, while considering Internet-scale latencies of up to 240 ms. Webpage: https://syncllm.cs.washington.edu/.
Bridging Theory and Practice in Quantum Game Theory: Optimized Implementation of the Battle of the Sexes with Error Mitigation on NISQ Hardware
Implementing quantum game theory on real hardware is challenging due to noise, decoherence, and limited qubit connectivity, yet such demonstrations are essential to validate theoretical predictions. We present one of the first full experimental realizations of the Battle of the Sexes game under the Eisert-Wilkens-Lewenstein (EWL) framework on IBM Quantum's ibm_sherbrooke superconducting processor. Four quantum strategies ($I$, $H$, $R(\pi/4)$, $R(\pi)$) were evaluated across 31 entanglement values $\gamma \in [0, \pi]$ using 2048 shots per configuration, enabling a direct comparison between analytical predictions and hardware execution. To mitigate noise and variability, we introduce a Guided Circuit Mapping (GCM) method that dynamically selects qubit pairs and optimizes routing based on real-time topology and calibration data. The analytical model forecasts up to a 108% payoff improvement over the classical equilibrium, and despite hardware-induced deviations, experimental results with GCM preserve the expected payoff trends within 3.5%-12% relative error. These findings show that quantum advantages in strategic coordination can persist under realistic NISQ conditions, providing a pathway toward practical applications of quantum game theory in multi-agent, economic, and distributed decision-making systems.
Coherent shuttle of electron-spin states
We demonstrate a coherent spin shuttle through a GaAs/AlGaAs quadruple-quantum-dot array. Starting with two electrons in a spin-singlet state in the first dot, we shuttle one electron over to either the second, third or fourth dot. We observe that the separated spin-singlet evolves periodically into the m=0 spin-triplet and back before it dephases due to nuclear spin noise. We attribute the time evolution to differences in the local Zeeman splitting between the respective dots. With the help of numerical simulations, we analyse and discuss the visibility of the singlet-triplet oscillations and connect it to the requirements for coherent spin shuttling in terms of the inter-dot tunnel coupling strength and rise time of the pulses. The distribution of entangled spin pairs through tunnel coupled structures may be of great utility for connecting distant qubit registers on a chip.
Curriculum reinforcement learning for quantum architecture search under hardware errors
The key challenge in the noisy intermediate-scale quantum era is finding useful circuits compatible with current device limitations. Variational quantum algorithms (VQAs) offer a potential solution by fixing the circuit architecture and optimizing individual gate parameters in an external loop. However, parameter optimization can become intractable, and the overall performance of the algorithm depends heavily on the initially chosen circuit architecture. Several quantum architecture search (QAS) algorithms have been developed to design useful circuit architectures automatically. In the case of parameter optimization alone, noise effects have been observed to dramatically influence the performance of the optimizer and the final outcomes, and this has become a key line of study. However, the effects of noise on the architecture search, which could be just as critical, are poorly understood. This work addresses this gap by introducing a curriculum-based reinforcement learning QAS (CRLQAS) algorithm designed to tackle challenges in realistic VQA deployment. The algorithm incorporates (i) a 3D architecture encoding and restrictions on environment dynamics to explore the search space of possible circuits efficiently, (ii) an episode halting scheme to steer the agent to find shorter circuits, and (iii) a novel variant of simultaneous perturbation stochastic approximation as an optimizer for faster convergence. To facilitate studies, we developed an optimized simulator for our algorithm, significantly improving computational efficiency in simulating noisy quantum circuits by employing the Pauli-transfer matrix formalism in the Pauli-Liouville basis. Numerical experiments focusing on quantum chemistry tasks demonstrate that CRLQAS outperforms existing QAS algorithms across several metrics in both noiseless and noisy environments.
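For reference, the Pauli transfer matrix (PTM) representation in the Pauli-Liouville basis mentioned above is standard textbook material; the definition below is generic background rather than this paper's specific simulator. An n-qubit channel Λ is represented by the real matrix

\[
(R_\Lambda)_{ij} = \frac{1}{2^{n}}\,\mathrm{Tr}\!\left[ P_i \,\Lambda(P_j) \right], \qquad P_i, P_j \in \{I, X, Y, Z\}^{\otimes n}, \qquad R_{\Lambda_2 \circ \Lambda_1} = R_{\Lambda_2} R_{\Lambda_1},
\]

so composing noisy gates reduces to multiplying their real-valued PTMs, which is what makes repeated simulation of noisy circuits in this representation comparatively cheap for moderate qubit counts.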
Improving thermal state preparation of Sachdev-Ye-Kitaev model with reinforcement learning on quantum hardware
The Sachdev-Ye-Kitaev (SYK) model, known for its strong quantum correlations and chaotic behavior, serves as a key platform for quantum gravity studies. However, variationally preparing thermal states on near-term quantum processors for large systems (N > 12, where N is the number of Majorana fermions) presents a significant challenge due to the rapid growth in the complexity of parameterized quantum circuits. This paper addresses this challenge by integrating reinforcement learning (RL) with convolutional neural networks, employing an iterative approach to optimize the quantum circuit and its parameters. The refinement process is guided by a composite reward signal derived from entropy and the expectation values of the SYK Hamiltonian. This approach reduces the number of CNOT gates by two orders of magnitude for systems with N ≥ 12 compared to traditional methods like first-order Trotterization. We demonstrate the effectiveness of the RL framework in both noiseless and noisy quantum hardware environments, maintaining high accuracy in thermal state preparation. This work advances a scalable, RL-based framework with applications for quantum gravity studies and out-of-time-ordered thermal correlators computation in quantum many-body systems on near-term quantum hardware. The code is available at https://github.com/Aqasch/solving_SYK_model_with_RL.
Simulation of integrated nonlinear quantum optics: from nonlinear interferometer to temporal walk-off compensator
Nonlinear quantum photonics serves as a cornerstone in photonic quantum technologies, such as universal quantum computing and quantum communications. The emergence of the integrated photonics platform not only offers the advantage of large-scale manufacturing but also provides a variety of engineering methods. Given the complexity of integrated photonics engineering, a comprehensive simulation framework is essential to fully harness the potential of the platform. In this context, we introduce a nonlinear quantum photonics simulation framework which can accurately model a variety of features such as adiabatic waveguides, material anisotropy, linear optics components, photon losses, and detectors. Furthermore, utilizing the framework, we have developed a device scheme, chip-scale temporal walk-off compensation, that is useful for various quantum information processing tasks. Applying the simulation framework, we show that the proposed device scheme can enhance the squeezing parameter of photon-pair sources and the conversion efficiency of quantum frequency converters without relying on higher pump power.
Ergotropy and Capacity Optimization in Heisenberg Spin Chain Quantum Batteries
This study examines the performance of finite spin quantum batteries (QBs) using Heisenberg spin models with Dzyaloshinsky-Moriya (DM) and Kaplan-Shekhtman-Entin-Wohlman-Aharony (KSEA) interactions. The QBs are modeled as interacting quantum spins in local inhomogeneous magnetic fields, inducing variable Zeeman splitting. We derive analytical expressions for the maximal extractable work (ergotropy) and the capacity of QBs, as recently examined by Yang et al. [Phys. Rev. Lett. 131, 030402 (2023)]. These quantities are analytically linked through certain quantum correlations, as posited in the aforementioned study. Different Heisenberg spin chain models exhibit distinct behaviors under varying conditions, emphasizing the importance of model selection for optimizing QB performance. In antiferromagnetic (AFM) systems, maximum ergotropy occurs with a Zeeman splitting field applied to either spin, while ferromagnetic (FM) systems benefit from a uniform Zeeman field. Temperature significantly impacts QB performance, with ergotropy in the AFM case being generally more robust against temperature increases compared to the FM case. Incorporating DM and KSEA couplings can significantly enhance the capacity and ergotropy extraction of QBs. However, there exists a threshold beyond which additional increases in these interactions cause a sharp decline in capacity and ergotropy. This behavior is influenced by temperature and quantum coherence, which signal the occurrence of a sudden phase transition. The resource theory of quantum coherence proposed by Baumgratz et al. [Phys. Rev. Lett. 113, 140401 (2014)] plays a crucial role in enhancing ergotropy and capacity. However, ergotropy is limited by both the system's capacity and the amount of coherence. These findings support the theoretical framework of spin-based QBs and may benefit future research on quantum energy storage devices.
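For readers unfamiliar with the central quantity, ergotropy is the maximal work extractable by a unitary, i.e. the energy gap between the state and its passive counterpart. A minimal numerical sketch of this standard definition (not the authors' analytical expressions for the spin-chain models) is:

```python
# Minimal numerical sketch of the standard ergotropy definition:
# W = Tr(rho H) - Tr(rho_passive H), where the passive state pairs the eigenvalues of rho
# (sorted decreasingly) with the eigenvalues of H (sorted increasingly).
import numpy as np

def ergotropy(rho, H):
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]   # populations, largest first
    e = np.sort(np.linalg.eigvalsh(H))           # energies, smallest first
    passive_energy = np.dot(r, e)
    return np.real(np.trace(rho @ H)) - passive_energy

# Example: a single qubit with H = (omega/2) * sigma_z and a population-inverted mixed state.
omega = 1.0
H = 0.5 * omega * np.diag([1.0, -1.0])
rho = np.diag([0.7, 0.3])          # 70% population in the higher-energy level
print(ergotropy(rho, H))           # positive: work is extractable by a unitary (here 0.4)
```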
Surface codes: Towards practical large-scale quantum computation
This article provides an introduction to surface code quantum computing. We first estimate the size and speed of a surface code quantum computer. We then introduce the concept of the stabilizer, using two qubits, and extend this concept to stabilizers acting on a two-dimensional array of physical qubits, on which we implement the surface code. We next describe how logical qubits are formed in the surface code array and give numerical estimates of their fault-tolerance. We outline how logical qubits are physically moved on the array, how qubit braid transformations are constructed, and how a braid between two logical qubits is equivalent to a controlled-NOT. We then describe the single-qubit Hadamard, S and T operators, completing the set of required gates for a universal quantum computer. We conclude by briefly discussing physical implementations of the surface code. We include a number of appendices in which we provide supplementary information to the main text.
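As a concrete two-qubit illustration of the stabilizer concept the article starts from (standard textbook material, not a quotation of the article): the commuting operators X1X2 and Z1Z2 jointly single out the Bell state,

\[
X_1 X_2\,|\psi\rangle = |\psi\rangle, \qquad Z_1 Z_2\,|\psi\rangle = |\psi\rangle, \qquad |\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr),
\]

and repeatedly measuring both stabilizers projects the two qubits onto this state; a single bit-flip error flips the sign of the Z1Z2 outcome, while a single phase-flip error flips the sign of X1X2. The surface code extends exactly this idea to a two-dimensional array with four-body X- and Z-type stabilizers.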
Diagrammatic Design and Study of Ansätze for Quantum Machine Learning
Given the rising popularity of quantum machine learning (QML), it is important to develop techniques that effectively simplify commonly adopted families of parameterised quantum circuits (commonly known as ansätze). This thesis pioneers the use of diagrammatic techniques to reason with QML ansätze. We take commonly used QML ansätze and convert them to diagrammatic form and give a full description of how these gates commute, making the circuits much easier to analyse and simplify. Furthermore, we leverage a combinatorial description of the interaction between CNOTs and phase gadgets to analyse a periodicity phenomenon in layered ansätze and also to simplify a class of circuits commonly used in QML.
1d-qt-ideal-solver: 1D Idealized Quantum Tunneling Solver with Absorbing Boundaries
We present 1d-qt-ideal-solver, an open-source Python library for simulating one-dimensional quantum tunneling dynamics under idealized coherent conditions. The solver implements the split-operator method with second-order Trotter-Suzuki factorization, utilizing FFT-based spectral differentiation for the kinetic operator and complex absorbing potentials to eliminate boundary reflections. Numba just-in-time compilation achieves performance comparable to compiled languages while maintaining code accessibility. We validate the implementation through two canonical test cases: rectangular barriers modeling field emission through oxide layers and Gaussian barriers approximating scanning tunneling microscopy interactions. Both simulations achieve exceptional numerical fidelity with machine-precision energy conservation over femtosecond-scale propagation. Comparative analysis employing information-theoretic measures and nonparametric hypothesis tests reveals that rectangular barriers exhibit moderately higher transmission coefficients than Gaussian barriers in the over-barrier regime, though Jensen-Shannon divergence analysis indicates modest practical differences between geometries. Phase space analysis confirms complete decoherence when averaged over spatial-temporal domains. The library name reflects its scope: idealized signifies deliberate exclusion of dissipation, environmental coupling, and many-body interactions, limiting applicability to qualitative insights and pedagogical purposes rather than quantitative experimental predictions. Distributed under the MIT License, the library provides a deployable tool for teaching quantum mechanics and preliminary exploration of tunneling dynamics.
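The propagation scheme described above (second-order split-operator stepping with an FFT-based kinetic operator and a complex absorbing potential) can be sketched in a few lines of NumPy. This is a generic illustration of the method in atomic units, not the library's actual API:

```python
# Generic split-operator propagator with a complex absorbing potential (CAP), hbar = m = 1.
import numpy as np

def split_operator_step(psi, V, W, dx, dt):
    """One second-order (Strang) step: half potential, full kinetic, half potential."""
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)            # spectral wavenumbers
    half_pot = np.exp(-1j * (V - 1j * W) * dt / 2.0)      # the -i*W term damps outgoing flux (CAP)
    kinetic  = np.exp(-1j * (k ** 2) * dt / 2.0)          # exact kinetic propagator in k-space
    psi = half_pot * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    return half_pot * psi

# Example: Gaussian wave packet hitting a rectangular barrier, with a quartic CAP at the edges.
x = np.linspace(-100.0, 100.0, 2048)
dx = x[1] - x[0]
psi = np.exp(-(x + 40.0) ** 2 / 10.0 + 1j * 1.5 * x)      # incoming packet, momentum k0 = 1.5
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
V = np.where(np.abs(x) < 2.0, 1.0, 0.0)                   # rectangular barrier of height 1.0
W = 0.01 * ((np.abs(x) > 80.0) * (np.abs(x) - 80.0)) ** 4  # absorbing layer near both boundaries
for _ in range(2000):
    psi = split_operator_step(psi, V, W, dx, dt=0.05)
print("surviving norm:", np.sum(np.abs(psi) ** 2) * dx)    # drops below 1 once flux reaches the CAP
```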
Combined Dissipative and Hamiltonian Confinement of Cat Qubits
Quantum error correction with biased-noise qubits can drastically reduce the hardware overhead for universal and fault-tolerant quantum computation. Cat qubits are a promising realization of biased-noise qubits as they feature an exponential error bias inherited from their non-local encoding in the phase space of a quantum harmonic oscillator. To confine the state of an oscillator to the cat qubit manifold, two main approaches have been considered so far: a Kerr-based Hamiltonian confinement with high gate performances, and a dissipative confinement with robust protection against a broad range of noise mechanisms. We introduce a new combined dissipative and Hamiltonian confinement scheme based on two-photon dissipation together with a Two-Photon Exchange (TPE) Hamiltonian. The TPE Hamiltonian is similar to Kerr nonlinearity, but unlike the Kerr it only induces a bounded distinction between even- and odd-photon eigenstates, a highly beneficial feature for protecting the cat qubits with dissipative mechanisms. Using this combined confinement scheme, we demonstrate fast and bias-preserving gates with drastically improved performance compared to dissipative or Hamiltonian schemes. In addition, this combined scheme can be implemented experimentally with only minor modifications of existing dissipative cat qubit experiments.
Modeling stochastic eye tracking data: A comparison of quantum generative adversarial networks and Markov models
We explore the use of quantum generative adversarial networks (QGANs) for modeling eye movement velocity data. We assess whether the advanced computational capabilities of QGANs can enhance the modeling of complex stochastic distributions beyond the traditional mathematical models, particularly the Markov model. The findings indicate that while QGANs demonstrate potential in approximating complex distributions, the Markov model consistently outperforms them in accurately replicating the real data distribution. This comparison underlines the challenges and avenues for refinement in time series data generation using quantum computing techniques. It emphasizes the need for further optimization of quantum models to better align with real-world data characteristics.
Extracting inter-dot tunnel couplings between few donor quantum dots in silicon
The long-term scaling prospects for solid-state quantum computing architectures rely heavily on the ability to simply and reliably measure and control the coherent electron interaction strength, known as the tunnel coupling, t_c. Here, we describe a method to extract the t_c between two quantum dots (QDs) utilising their different tunnel rates to a reservoir. We demonstrate the technique on a few-donor triple QD tunnel coupled to a nearby single-electron transistor (SET) in silicon. The device was patterned using scanning tunneling microscopy hydrogen lithography, allowing for a direct measurement of the tunnel coupling for a given inter-dot distance. We extract t_c = 5.5 ± 1.8 GHz and t_c = 2.2 ± 1.3 GHz between each of the nearest-neighbour QDs, which are separated by 14.5 nm and 14.0 nm, respectively. The technique allows for an accurate measurement of t_c for nanoscale devices even when it is smaller than the electron temperature and is an ideal characterisation tool for multi-dot systems with a charge sensor.
The probabilistic world
Physics is based on probabilities as fundamental entities of a mathematical description. Expectation values of observables are computed according to the classical statistical rule. The overall probability distribution for one world covers all times. The quantum formalism arises once one focuses on the evolution of the time-local probabilistic information. Wave functions or the density matrix allow the formulation of a general linear evolution law for classical statistics. The quantum formalism for classical statistics is a powerful tool which allows us to implement, for generalized Ising models, the momentum observable with the associated Fourier representation. The association of operators to observables permits the computation of expectation values in terms of the density matrix by the usual quantum rule. We show that probabilistic cellular automata are quantum systems in a formulation with discrete time steps and real wave functions. With a complex structure the evolution operator for automata can be expressed in terms of a Hamiltonian involving fermionic creation and annihilation operators. The time-local probabilistic information amounts to a subsystem of the overall probabilistic system which is correlated with its environment consisting of the past and future. Such subsystems typically involve probabilistic observables for which only a probability distribution for their possible measurement values is available. Incomplete statistics does not permit the computation of classical correlation functions for arbitrary subsystem observables. Bell's inequalities are not generally applicable.
Designing High-Fidelity Zeno Gates for Dissipative Cat Qubits
Bosonic cat qubits stabilized with a driven two-photon dissipation are systems with exponentially biased noise, opening the door to low-overhead, fault-tolerant and universal quantum computing. However, current gate proposals for such qubits induce substantial noise of the unprotected type, whose poor scaling with the relevant experimental parameters limits their practical use. In this work, we provide a new perspective on dissipative cat qubits by reconsidering the reservoir mode used to engineer the tailored two-photon dissipation, and show how it can be leveraged to mitigate gate-induced errors. Doing so, we introduce four new designs of high-fidelity and bias-preserving cat qubit gates, and compare them to the prevalent gate methods. These four designs should give a broad overview of gate engineering for dissipative systems with different and complementary ideas. In particular, we propose both already achievable low-error gate designs and longer-term implementations.
Efficient and practical quantum compiler towards multi-qubit systems with deep reinforcement learning
Efficient quantum compiling tactics greatly enhance the capability of quantum computers to execute complicated quantum algorithms. Due to its fundamental importance, a plethora of quantum compilers has been designed in recent years. However, there are several caveats to current protocols: low optimality, high inference time, limited scalability, and lack of universality. To compensate for these defects, here we devise an efficient and practical quantum compiler assisted by advanced deep reinforcement learning (RL) techniques, i.e., data generation, deep Q-learning, and AQ* search. In this way, our protocol is compatible with various quantum machines and can be used to compile multi-qubit operators. We systematically evaluate the performance of our proposal in compiling quantum operators with both inverse-closed and inverse-free universal basis sets. In the task of single-qubit operator compiling, our proposal outperforms other RL-based quantum compilers in terms of compiled sequence length and inference time. Meanwhile, the output solution is near-optimal, guaranteed by the Solovay-Kitaev theorem. Notably, for the inverse-free universal basis set, the achieved sequence length complexity is comparable with the inverse-based setting and dramatically advances previous methods. These empirical results contribute to improving the inverse-free Solovay-Kitaev theorem. In addition, for the first time, we demonstrate how to leverage RL-based quantum compilers to accomplish two-qubit operator compiling. The achieved results open an avenue for integrating RL with quantum compiling to unify efficiency and practicality and thus facilitate the exploration of quantum advantages.
Q-Cluster: Quantum Error Mitigation Through Noise-Aware Unsupervised Learning
Quantum error mitigation (QEM) is critical in reducing the impact of noise in the pre-fault-tolerant era, and is expected to complement error correction in fault-tolerant quantum computing (FTQC). In this paper, we propose a novel QEM approach, Q-Cluster, that uses unsupervised learning (clustering) to reshape the measured bit-string distribution. Our approach starts with a simplified bit-flip noise model. It first performs clustering on noisy measurement results, i.e., bit-strings, based on the Hamming distance. The centroid of each cluster is calculated using a qubit-wise majority vote. Next, the noisy distribution is adjusted with the clustering outcomes and the bit-flip error rates using Bayesian inference. Our simulation results show that Q-Cluster can mitigate high noise rates (up to 40% per qubit) with the simple bit-flip noise model. However, real quantum computers do not fit such a simple noise model. To address the problem, we (a) apply Pauli twirling to tailor the complex noise channels to Pauli errors, and (b) employ a machine learning model, ExtraTrees regressor, to estimate an effective bit-flip error rate using a feature vector consisting of machine calibration data (gate & measurement error rates), circuit features (number of qubits, numbers of different types of gates, etc.) and the shape of the noisy distribution (entropy). Our experimental results show that our proposed Q-Cluster scheme improves the fidelity by a factor of 1.46x, on average, compared to the unmitigated output distribution, for a set of low-entropy benchmarks on five different IBM quantum machines. Our approach outperforms the state-of-the-art QEM approaches M3 [24], Hammer [35], and QBEEP [33] by 1.29x, 1.47x, and 2.65x, respectively.
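A simplified sketch of the clustering stage described above may help: bit-strings are grouped by Hamming distance and each cluster centroid is a qubit-wise majority vote. The Pauli twirling, Bayesian re-weighting, and error-rate regression steps of Q-Cluster are omitted here; this is an illustration, not the authors' implementation:

```python
# Lloyd-style clustering of measured bit-strings by Hamming distance,
# with centroids computed as shot-weighted qubit-wise majority votes.
import numpy as np
from collections import Counter

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def majority_vote(strings, weights):
    n = len(strings[0])
    votes = np.zeros(n)
    for s, w in zip(strings, weights):
        votes += w * (np.frombuffer(s.encode(), dtype=np.uint8) - ord('0'))
    total = sum(weights)
    return ''.join('1' if v > total / 2 else '0' for v in votes)

def q_cluster(counts, k=2, iters=10):
    """counts: dict bit-string -> shots. Returns k centroids after a few refinement passes."""
    centroids = [s for s, _ in Counter(counts).most_common(k)]   # seed with most frequent strings
    for _ in range(iters):
        buckets = {c: ([], []) for c in centroids}
        for s, w in counts.items():
            c = min(centroids, key=lambda c: hamming(s, c))      # nearest centroid (Hamming)
            buckets[c][0].append(s)
            buckets[c][1].append(w)
        centroids = [majority_vote(ss, ws) if ss else c for c, (ss, ws) in buckets.items()]
    return centroids

# Toy example: a noisy GHZ-like distribution whose ideal support is {000, 111}.
counts = {'000': 430, '001': 40, '010': 35, '100': 45, '111': 400, '110': 30, '101': 20}
print(q_cluster(counts, k=2))   # expected to recover ['000', '111']
```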
Quantum Reservoir Computing for Corrosion Prediction in Aerospace: A Hybrid Approach for Enhanced Material Degradation Forecasting
The prediction of material degradation is an important problem to solve in many industries. Environmental conditions, such as humidity and temperature, are important drivers of degradation processes, with corrosion being one of the most prominent ones. Quantum machine learning is a promising research field but suffers from well known deficits such as barren plateaus and measurement overheads. To address this problem, recent research has examined quantum reservoir computing to address time-series prediction tasks. Although a promising idea, developing circuits that are expressive enough while respecting the limited depths available on current devices is challenging. In classical reservoir computing, the onion echo state network model (ESN) [https://doi.org/10.1007/978-3-031-72359-9_9] was introduced to increase the interpretability of the representation structure of the embeddings. This onion ESN model utilizes a concatenation of smaller reservoirs that describe different time scales by covering different regions of the eigenvalue spectrum. Here, we use the same idea in the realm of quantum reservoir computing by simultaneously evolving smaller quantum reservoirs to better capture all the relevant time-scales while keeping the circuit depth small. We do this by modifying the rotation angles which we show alters the eigenvalues of the quantum evolution, but also note that modifying the number of mid-circuit measurements accomplishes the same goals of changing the long-term or short-term memory. This onion QRC outperforms a simple model and a single classical reservoir for predicting the degradation of aluminum alloys in different environmental conditions. By combining the onion QRC with an additional classical reservoir layer, the prediction accuracy is further improved.
Tutorial: Remote entanglement protocols for stationary qubits with photonic interfaces
Generating entanglement between distant quantum systems is at the core of quantum networking. In recent years, numerous theoretical protocols for remote entanglement generation have been proposed, of which many have been experimentally realized. Here, we provide a modular theoretical framework to elucidate the general mechanisms of photon-mediated entanglement generation between single spins in atomic or solid-state systems. Our framework categorizes existing protocols at various levels of abstraction and allows for combining the elements of different schemes in new ways. These abstraction layers make it possible to readily compare protocols for different quantum hardware. To enable the practical evaluation of protocols tailored to specific experimental parameters, we have devised numerical simulations based on the framework with our codes available online.
Discovering highly efficient low-weight quantum error-correcting codes with reinforcement learning
The realization of scalable fault-tolerant quantum computing is expected to hinge on quantum error-correcting codes. In the quest for more efficient quantum fault tolerance, a critical code parameter is the weight of measurements that extract information about errors to enable error correction: as higher measurement weights require higher implementation costs and introduce more errors, it is important in code design to optimize measurement weight. This underlies the surging interest in quantum low-density parity-check (qLDPC) codes, the study of which has primarily focused on the asymptotic (large-code-limit) properties. In this work, we introduce a versatile and computationally efficient approach to stabilizer code weight reduction based on reinforcement learning (RL), which produces new low-weight codes that substantially outperform the state of the art in practically relevant parameter regimes, extending significantly beyond previously accessible small distances. For example, our approach demonstrates savings in physical qubit overhead compared to existing results by 1 to 2 orders of magnitude for weight 6 codes and brings the overhead into a feasible range for near-future experiments. We also investigate the interplay between code parameters using our RL framework, offering new insights into the potential efficiency and power of practically viable coding strategies. Overall, our results demonstrate how RL can effectively advance the crucial yet challenging problem of quantum code discovery and thereby facilitate a faster path to the practical implementation of fault-tolerant quantum technologies.
KetGPT - Dataset Augmentation of Quantum Circuits using Transformers
Quantum algorithms, represented as quantum circuits, can be used as benchmarks for assessing the performance of quantum systems. Existing datasets, widely utilized in the field, suffer from limitations in size and versatility, leading researchers to employ randomly generated circuits. Random circuits are, however, not representative benchmarks as they lack the inherent properties of real quantum algorithms for which the quantum systems are manufactured. This shortage of 'useful' quantum benchmarks poses a challenge to advancing the development and comparison of quantum compilers and hardware. This research aims to enhance the existing quantum circuit datasets by generating what we refer to as 'realistic-looking' circuits by employing the Transformer machine learning architecture. For this purpose, we introduce KetGPT, a tool that generates synthetic circuits in OpenQASM language, whose structure is based on quantum circuits derived from existing quantum algorithms and follows the typical patterns of human-written algorithm-based code (e.g., order of gates and qubits). Our three-fold verification process, involving manual inspection and Qiskit framework execution, transformer-based classification, and structural analysis, demonstrates the efficacy of KetGPT in producing large amounts of additional circuits that closely align with algorithm-based structures. Beyond benchmarking, we envision KetGPT contributing substantially to AI-driven quantum compilers and systems.
BenchRL-QAS: Benchmarking reinforcement learning algorithms for quantum architecture search
We present BenchRL-QAS, a unified benchmarking framework for reinforcement learning (RL) in quantum architecture search (QAS) across a spectrum of variational quantum algorithm tasks on 2- to 8-qubit systems. Our study systematically evaluates 9 different RL agents, including both value-based and policy-gradient methods, on quantum problems such as the variational eigensolver, quantum state diagonalization, variational quantum classification (VQC), and state preparation, under both noiseless and noisy execution settings. To ensure fair comparison, we propose a weighted ranking metric that integrates accuracy, circuit depth, gate count, and training time. Results demonstrate that no single RL method dominates universally; the performance depends on task type, qubit count, and noise conditions, providing strong evidence of a no-free-lunch principle in RL-QAS. As a byproduct, we observe that a carefully chosen RL algorithm in RL-based VQC outperforms baseline VQCs. BenchRL-QAS establishes the most extensive benchmark for RL-based QAS to date; the code and experimental data are made publicly available for reproducibility and future advances.
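To make the aggregation concrete, a weighted ranking over the four reported criteria could look like the following. The weights and normalization here are placeholders, since the abstract does not specify the exact metric used by BenchRL-QAS:

```python
# Hypothetical weighted ranking across RL agents; accuracy is maximized, while depth,
# gate count, and training time are minimized. Numbers and weights are illustrative only.
import numpy as np

def weighted_rank(results, weights=(0.4, 0.2, 0.2, 0.2)):
    """results: dict agent -> (accuracy, depth, gates, train_time). Higher score is better."""
    agents = list(results)
    M = np.array([results[a] for a in agents], dtype=float)
    acc = (M[:, 0] - M[:, 0].min()) / (np.ptp(M[:, 0]) or 1.0)          # normalize accuracy
    costs = [(c.max() - c) / (np.ptp(c) or 1.0) for c in M[:, 1:].T]    # invert-normalize costs
    score = weights[0] * acc + sum(w * c for w, c in zip(weights[1:], costs))
    return sorted(zip(agents, score), key=lambda t: -t[1])

results = {'DQN': (0.91, 40, 120, 3.1), 'PPO': (0.94, 55, 160, 5.0), 'A2C': (0.88, 30, 90, 2.2)}
print(weighted_rank(results))
```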
Fault-tolerant simulation of Lattice Gauge Theories with gauge covariant codes
We show in this paper that a strong and easy connection exists between quantum error correction and Lattice Gauge Theories (LGT) by using the Gauge symmetry to construct an efficient error-correcting code for Abelian LGTs. We identify the logical operations on this gauge covariant code and show that the corresponding Hamiltonian can be expressed in terms of these logical operations while preserving the locality of the interactions. Furthermore, we demonstrate that these substitutions actually give a new way of writing the LGT as an equivalent hardcore boson model. Finally we demonstrate a method to perform fault-tolerant time evolution of the Hamiltonian within the gauge covariant code using both product formulas and qubitization approaches. This opens up the possibility of inexpensive end to end dynamical simulations that save physical qubits by blurring the lines between simulation algorithms and quantum error correcting codes.
Real-time quantum error correction beyond break-even
The ambition of harnessing the quantum for computation is at odds with the fundamental phenomenon of decoherence. The purpose of quantum error correction (QEC) is to counteract the natural tendency of a complex system to decohere. This cooperative process, which requires participation of multiple quantum and classical components, creates a special type of dissipation that removes the entropy caused by the errors faster than the rate at which these errors corrupt the stored quantum information. Previous experimental attempts to engineer such a process faced an excessive generation of errors that overwhelmed the error-correcting capability of the process itself. Whether it is practically possible to utilize QEC for extending quantum coherence thus remains an open question. We answer it by demonstrating a fully stabilized and error-corrected logical qubit whose quantum coherence is significantly longer than that of all the imperfect quantum components involved in the QEC process, beating the best of them with a coherence gain of G = 2.27 ± 0.07. We achieve this performance by combining innovations in several domains including the fabrication of superconducting quantum circuits and model-free reinforcement learning.
Quantum computing with Qiskit
We describe Qiskit, a software development kit for quantum information science. We discuss the key design decisions that have shaped its development, and examine the software architecture and its core components. We demonstrate an end-to-end workflow for solving a problem in condensed matter physics on a quantum computer that serves to highlight some of Qiskit's capabilities, for example the representation and optimization of circuits at various abstraction levels, its scalability and retargetability to new gates, and the use of quantum-classical computations via dynamic circuits. Lastly, we discuss some of the ecosystem of tools and plugins that extend Qiskit for various tasks, and the future ahead.
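A minimal Qiskit sketch of the kind of workflow the kit supports (circuit construction, retargeting to a restricted gate set, and exact simulation); this is generic usage rather than the condensed-matter example walked through in the paper:

```python
# Build a small circuit, retarget it to a restricted basis as a transpiler would for hardware,
# and simulate the ideal state exactly.
from qiskit import QuantumCircuit, transpile
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)                                   # Bell-state preparation

compiled = transpile(qc, basis_gates=['rz', 'sx', 'cx'], optimization_level=2)
print(compiled.count_ops())                   # gate counts after rewriting into the target basis

print(Statevector.from_instruction(qc).probabilities_dict())   # {'00': 0.5, '11': 0.5}
```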
A Quadratic Synchronization Rule for Distributed Deep Learning
In distributed deep learning with data parallelism, synchronizing gradients at each training step can cause a huge communication overhead, especially when many nodes work together to train large models. Local gradient methods, such as Local SGD, address this issue by allowing workers to compute locally for H steps without synchronizing with others, hence reducing communication frequency. While H has been viewed as a hyperparameter to trade optimization efficiency for communication cost, recent research indicates that setting a proper H value can lead to generalization improvement. Yet, selecting a proper H is elusive. This work proposes a theory-grounded method for determining H, named the Quadratic Synchronization Rule (QSR), which recommends dynamically setting H in proportion to 1/η² as the learning rate η decays over time. Extensive ImageNet experiments on ResNet and ViT show that local gradient methods with QSR consistently improve the test accuracy over other synchronization strategies. Compared with the standard data parallel training, QSR enables Local AdamW on ViT-B to cut the training time on 16 or 64 GPUs down from 26.7 to 20.2 hours or from 8.6 to 5.5 hours and, at the same time, achieves 1.16% or 0.84% higher top-1 validation accuracy.
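In code, the rule amounts to growing the synchronization period as the learning rate decays. The constant and the floor below are placeholders for illustration; the precise recipe and constants are given in the paper:

```python
# Illustrative Quadratic Synchronization Rule: H grows proportionally to 1/eta^2.
import math

def qsr_period(eta, alpha=2.5e-4, h_min=2):
    # alpha and h_min are illustrative hyperparameters, not the paper's values.
    return max(h_min, math.floor(alpha / eta ** 2))

# With a decaying learning rate, workers synchronize less and less often over training.
for eta in (0.01, 0.005, 0.002, 0.001):
    print(f"eta={eta:g}  ->  H={qsr_period(eta)}")
```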
Reservoir Computing via Quantum Recurrent Neural Networks
Recent developments in quantum computing and machine learning have propelled the interdisciplinary study of quantum machine learning. Sequential modeling is an important task with high scientific and commercial value. Existing VQC or QNN-based methods require significant computational resources to perform the gradient-based optimization of a large number of quantum circuit parameters. The major drawback is that such quantum gradient calculation requires a large amount of circuit evaluation, posing challenges in current near-term quantum hardware and simulation software. In this work, we approach sequential modeling by applying a reservoir computing (RC) framework to quantum recurrent neural networks (QRNN-RC) that are based on classical RNN, LSTM and GRU. The main idea of this RC approach is that the QRNN with randomly initialized weights is treated as a dynamical system and only the final classical linear layer is trained. Our numerical simulations show that the QRNN-RC can reach results comparable to fully trained QRNN models for several function approximation and time series prediction tasks. Since the QRNN training complexity is significantly reduced, the proposed model trains notably faster. In this work we also compare to corresponding classical RNN-based RC implementations and show that the quantum version learns faster by requiring fewer training epochs in most cases. Our results demonstrate a new possibility to utilize quantum neural networks for sequential modeling with greater quantum hardware efficiency, an important design consideration for noisy intermediate-scale quantum (NISQ) computers.
Ferromagnetic ordering in mazelike stripe liquid of a dipolar six-state clock model
We present a comprehensive numerical study of a six-state clock model with a long-range dipolar-type interaction. This model is motivated by the ferroelectric orders in the multiferroic hexagonal manganites. At low temperatures, trimerization of local atomic structures leads to six distinct but energetically degenerate structural distortions, which can be modeled by a six-state clock model. Moreover, the atomic displacements in the trimerized state further produce a local electric polarization whose sign depends on whether the clock variable is even or odd. These induced electric dipoles, which can be modeled by emergent Ising degrees of freedom, interact with each other via long-range dipolar interactions. Extensive Monte Carlo simulations are carried out to investigate the low-temperature phases resulting from the competing interactions. Upon lowering the temperature, the system undergoes two Berezinskii-Kosterlitz-Thouless (BKT) transitions, characteristic of the standard six-state clock model in two dimensions. The dipolar interaction between emergent Ising spins induces a first-order transition into a ground state characterized by a three-fold degenerate stripe order. The intermediate phase between the discontinuous and the second BKT transition corresponds to a maze-like hexagonal liquid with short-range stripe ordering. Moreover, this intermediate phase also exhibits an unusual ferromagnetic order with two adjacent clock variables occupying the two types of stripes of the labyrinthine pattern.
Two-photon driven Kerr quantum oscillator with multiple spectral degeneracies
Kerr nonlinear oscillators driven by a two-photon process are promising systems to encode quantum information and to ensure a hardware-efficient scaling towards fault-tolerant quantum computation. In this paper, we show that an extra control parameter, the detuning of the two-photon drive with respect to the oscillator resonance, plays a crucial role in the properties of the defined qubit. At specific values of this detuning, we benefit from strong symmetries in the system, leading to multiple degeneracies in the spectrum of the effective confinement Hamiltonian. Overall, these degeneracies lead to a stronger suppression of bit-flip errors. We also study the combination of such Hamiltonian confinement with colored dissipation to suppress leakage outside of the bosonic code space. We show that the additional degeneracies allow us to perform fast and high-fidelity gates while preserving a strong suppression of bit-flip errors.
Practical Benchmarking of Randomized Measurement Methods for Quantum Chemistry Hamiltonians
Many hybrid quantum-classical algorithms for the application of ground state energy estimation in quantum chemistry involve estimating the expectation value of a molecular Hamiltonian with respect to a quantum state through measurements on a quantum device. To guide the selection of measurement methods designed for this observable estimation problem, we propose a benchmark called CSHOREBench (Common States and Hamiltonians for ObseRvable Estimation Benchmark) that assesses the performance of these methods against a set of common molecular Hamiltonians and common states encountered during the runtime of hybrid quantum-classical algorithms. In CSHOREBench, we account for resource utilization of a quantum computer through measurements of a prepared state, and a classical computer through computational runtime spent in proposing measurements and classical post-processing of acquired measurement outcomes. We apply CSHOREBench considering a variety of measurement methods on Hamiltonians of size up to 16 qubits. Our discussion is aided by using the framework of decision diagrams, which provides an efficient data structure for various randomized methods, and we illustrate how to derandomize distributions on decision diagrams. In numerical simulations, we find that the methods of decision diagrams and derandomization are the most preferable. In experiments on IBM quantum devices with small molecules, we observe that decision diagrams reduce the number of measurements made by classical shadows by more than 80%, that made by locally biased classical shadows by around 57%, and consistently require fewer quantum measurements along with lower classical computational runtime than derandomization. Furthermore, CSHOREBench is empirically efficient to run when considering states of a random quantum ansatz with fixed depth.
Quantum Generative Diffusion Model
This paper introduces the Quantum Generative Diffusion Model (QGDM), a fully quantum-mechanical model for generating quantum state ensembles, inspired by Denoising Diffusion Probabilistic Models. QGDM features a diffusion process that introduces timestep-dependent noise into quantum states, paired with a denoising mechanism trained to reverse this contamination. This model efficiently evolves a completely mixed state into a target quantum state post-training. Our comparative analysis with Quantum Generative Adversarial Networks demonstrates QGDM's superiority, with fidelity metrics exceeding 0.99 in numerical simulations involving up to 4 qubits. Additionally, we present a Resource-Efficient version of QGDM (RE-QGDM), which minimizes the need for auxiliary qubits while maintaining impressive generative capabilities for tasks involving up to 8 qubits. These results showcase the proposed models' potential for tackling challenging quantum generation problems.
Random Quantum Circuits
Quantum circuits -- built from local unitary gates and local measurements -- are a new playground for quantum many-body physics and a tractable setting to explore universal collective phenomena far-from-equilibrium. These models have shed light on longstanding questions about thermalization and chaos, and on the underlying universal dynamics of quantum information and entanglement. In addition, such models generate new sets of questions and give rise to phenomena with no traditional analog, such as new dynamical phases in quantum systems that are monitored by an external observer. Quantum circuit dynamics is also topical in view of experimental progress in building digital quantum simulators that allow control of precisely these ingredients. Randomness in the circuit elements allows a high level of theoretical control, with a key theme being mappings between real-time quantum dynamics and effective classical lattice models or dynamical processes. Many of the universal phenomena that can be identified in this tractable setting apply to much wider classes of more structured many-body dynamics.
Understanding Quantum Technologies 2025
Understanding Quantum Technologies 2025 is the 8th update of a free open science ebook that provides a 360-degree overview of quantum technologies, from science and technology to geopolitical and societal issues. It covers quantum physics history, quantum physics 101, gate-based quantum computing, quantum computing engineering (including quantum error correction, quantum computing energetics and a new subsection on the effects of the Lieb-Robinson limit), quantum computing hardware (all qubit types, including quantum annealing and quantum simulation paradigms, history, science, research, implementation and vendors scientific and engineering approaches and roadmaps), quantum enabling technologies (cryogenics, control electronics, photonics, components fabs and manufacturing process, raw materials), unconventional computing (potential alternatives to quantum and classical computing), quantum computing algorithms, software development tools, resource estimate and benchmark tools, use case and case studies analysis methodologies, application use cases per market, quantum communications and cryptography (including QKD, PQC and QPU interconnect technologies), quantum sensing, quantum technologies around the world, quantum technologies societal impact and even quantum fake sciences. The main audience is computer science engineers, developers and IT specialists as well as quantum scientists and students who want to acquire a global view of how quantum technologies work, and particularly quantum computing. This version is an update to the 2024, 2023, 2022, and 2021 editions published respectively in October 2024, 2023, 2022 and 2021. An update log is provided at the end of the book.
All photonic quantum repeaters
Quantum communication holds promise for unconditionally secure transmission of secret messages and faithful transfer of unknown quantum states. Photons appear to be the medium of choice for quantum communication. Owing to photon losses, robust quantum communication over long lossy channels requires quantum repeaters. It is widely believed that a necessary and highly demanding requirement for quantum repeaters is the existence of matter quantum memories at the repeater nodes. Here we show that such a requirement is, in fact, unnecessary by introducing the concept of all photonic quantum repeaters based on flying qubits. As an example of the realization of this concept, we present a protocol based on photonic cluster state machine guns and a loss-tolerant measurement equipped with local high-speed active feedforwards. We show that, with such an all photonic quantum repeater, the communication efficiency still scales polynomially with the channel distance. Our result paves a new route toward quantum repeaters with efficient single-photon sources rather than matter quantum memories.
Blueprint for a Scalable Photonic Fault-Tolerant Quantum Computer
Photonics is the platform of choice to build a modular, easy-to-network quantum computer operating at room temperature. However, no concrete architecture has been presented so far that exploits both the advantages of qubits encoded into states of light and the modern tools for their generation. Here we propose such a design for a scalable and fault-tolerant photonic quantum computer informed by the latest developments in theory and technology. Central to our architecture is the generation and manipulation of three-dimensional hybrid resource states comprising both bosonic qubits and squeezed vacuum states. The proposal enables exploiting state-of-the-art procedures for the non-deterministic generation of bosonic qubits combined with the strengths of continuous-variable quantum computation, namely the implementation of Clifford gates using easy-to-generate squeezed states. Moreover, the architecture is based on two-dimensional integrated photonic chips used to produce a qubit cluster state in one temporal and two spatial dimensions. By reducing the experimental challenges as compared to existing architectures and by enabling room-temperature quantum computation, our design opens the door to scalable fabrication and operation, which may allow photonics to leap-frog other platforms on the path to a quantum computer with millions of qubits.
QDNA-ID Quantum Device Native Authentication
QDNA-ID is a trust-chain framework that links physical quantum behavior to digitally verified records. The system first executes standard quantum circuits with random shot patterns across different devices to generate entropy profiles and measurement data that reveal device-specific behavior. A Bell or CHSH test is then used to confirm that correlations originate from genuine non-classical processes rather than classical simulation. The verified outcomes are converted into statistical fingerprints using entropy, divergence, and bias features to characterize each device. These features and metadata for device, session, and random-seed parameters are digitally signed and time-stamped to ensure integrity and traceability. Authenticated artifacts are stored in a hierarchical index for reproducible retrieval and long-term auditing. A visualization and analytics interface monitors drift, policy enforcement, and device behavior logs. A machine learning engine tracks entropy drift, detects anomalies, and classifies devices based on evolving patterns. An external verification API supports independent recomputation of hashes, signatures, and CHSH evidence. QDNA-ID operates as a continuous feedback loop that maintains a persistent chain of trust for quantum computing environments.
Rate limits in quantum networks with lossy repeaters
The derivation of ultimate limits to communication over certain quantum repeater networks has provided extremely valuable benchmarks for assessing near-term quantum communication protocols. However, these bounds are usually derived in the limit of ideal devices and leave questions about the performance of practical implementations unanswered. To address this challenge, we quantify how the presence of loss in repeater stations affects the maximum attainable rates for quantum communication over linear repeater chains and more complex quantum networks. Extending the framework of node splitting, we model the loss introduced at the repeater stations and then prove the corresponding limits. In the linear chain scenario we show that, by increasing the number of repeater stations, the maximum rate cannot exceed a quantity which depends solely on the loss of a single station. We introduce a way of adapting the standard machinery for obtaining bounds to this realistic scenario. The difference is that whilst ultimate limits for any strategy can be derived given a fixed channel, when the repeaters introduce additional decoherence, then the effective overall channel is itself a function of the chosen repeater strategy (e.g., one-way versus two-way classical communication). Classes of repeater strategies can be analysed using additional modelling and the subsequent bounds can be interpreted as the optimal rate within that class.
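For context, the ideal-device benchmark that such analyses extend is the repeaterless secret-key capacity of the pure-loss channel (standard background, not a result of this paper):

\[
C(\eta) = -\log_2(1-\eta) \;\approx\; 1.44\,\eta \qquad (\eta \ll 1),
\]

with η the end-to-end transmissivity. The abstract's claim can then be read as follows: once each repeater station is itself lossy, inserting more stations can no longer push the achievable rate arbitrarily close to the lossless-repeater ideal; instead, the rate saturates at a ceiling fixed by the loss of a single station, whose precise form is derived in the paper via node splitting.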
Backpropagation training in adaptive quantum networks
We introduce a robust, error-tolerant adaptive training algorithm for generalized learning paradigms in high-dimensional superposed quantum networks, or adaptive quantum networks. The formalized procedure applies standard backpropagation training across a coherent ensemble of discrete topological configurations of individual neural networks, each of which is formally merged into appropriate linear superposition within a predefined, decoherence-free subspace. Quantum parallelism facilitates simultaneous training and revision of the system within this coherent state space, resulting in accelerated convergence to a stable network attractor under consequent iteration of the implemented backpropagation algorithm. Parallel evolution of linear superposed networks incorporating backpropagation training provides quantitative, numerical indications for optimization of both single-neuron activation functions and optimal reconfiguration of whole-network quantum structure.
Efficient Quantum Algorithms for Quantum Optimal Control
In this paper, we present efficient quantum algorithms that are exponentially faster than classical algorithms for solving the quantum optimal control problem. This problem involves finding the control variable that maximizes a physical quantity at time T, where the system is governed by a time-dependent Schrödinger equation. This type of control problem also has an intricate relation with machine learning. Our algorithms are based on a time-dependent Hamiltonian simulation method and a fast gradient-estimation algorithm. We also provide a comprehensive error analysis to quantify the total error from various steps, such as the finite-dimensional representation of the control function, the discretization of the Schrödinger equation, the numerical quadrature, and optimization. Our quantum algorithms require fault-tolerant quantum computers.
Scaling silicon-based quantum computing using CMOS technology: State-of-the-art, Challenges and Perspectives
Complementary metal-oxide semiconductor (CMOS) technology has radically reshaped the world by taking humanity to the digital age. Cramming more transistors into the same physical space has enabled an exponential increase in computational performance, a strategy that has been recently hampered by the increasing complexity and cost of miniaturization. To continue achieving significant gains in computing performance, new computing paradigms, such as quantum computing, must be developed. However, finding the optimal physical system to process quantum information, and scale it up to the large number of qubits necessary to build a general-purpose quantum computer, remains a significant challenge. Recent breakthroughs in nanodevice engineering have shown that qubits can now be manufactured in a similar fashion to silicon field-effect transistors, opening an opportunity to leverage the know-how of the CMOS industry to address the scaling challenge. In this article, we focus on the analysis of the scaling prospects of quantum computing systems based on CMOS technology.
Quixer: A Quantum Transformer Model
Progress in the realisation of reliable large-scale quantum computers has motivated research into the design of quantum machine learning models. We present Quixer: a novel quantum transformer model which utilises the Linear Combination of Unitaries and Quantum Singular Value Transform primitives as building blocks. Quixer operates by preparing a superposition of tokens and applying a trainable non-linear transformation to this mix. We present the first results for a quantum transformer model applied to a practical language modelling task, obtaining results competitive with an equivalent classical baseline. In addition, we include resource estimates for evaluating the model on quantum hardware, and provide an open-source implementation for classical simulation. We conclude by highlighting the generality of Quixer, showing that its parameterised components can be substituted with fixed structures to yield new classes of quantum transformers.
Three-level Dicke quantum battery
A quantum battery (QB) is an energy storage and extraction device governed by the principles of quantum mechanics. Here we propose a three-level Dicke QB and investigate its charging process by considering three quantum optical states: a Fock state, a coherent state, and a squeezed state. The performance of the QB in a coherent state is substantially improved compared to the Fock and squeezed states. We find that the locked energy is positively related to the entanglement between the charger and the battery, and diminishing the entanglement leads to the enhancement of the ergotropy. We demonstrate that the QB system is asymptotically free as N → ∞. The stored energy becomes fully extractable when N = 10, and the charging power follows the same behavior as the stored energy, independent of the initial state of the charger.
Quantum Multi-Model Fitting
Geometric model fitting is a challenging but fundamental computer vision problem. Recently, quantum optimization has been shown to enhance robust fitting for the case of a single model, while leaving the question of multi-model fitting open. In response to this challenge, this paper shows that the latter case can significantly benefit from quantum hardware and proposes the first quantum approach to multi-model fitting (MMF). We formulate MMF as a problem that can be efficiently sampled by modern adiabatic quantum computers without the relaxation of the objective function. We also propose an iterative and decomposed version of our method, which supports real-world-sized problems. The experimental evaluation demonstrates promising results on a variety of datasets. The source code is available at: https://github.com/FarinaMatteo/qmmf.
Quantum Internet Protocol Stack: a Comprehensive Survey
The classical Internet evolved exceptionally during the last five decades, from a network comprising a few static nodes in the early days to a leviathan interconnecting billions of devices. This has been made possible by the separation-of-concerns principle, for which the network functionalities are organized as a stack of layers, each providing some communication functionalities through specific network protocols. In this survey, we aim at highlighting the impossibility of adapting the classical Internet protocol stack to the Quantum Internet, due to the marvels of quantum mechanics. Indeed, the design of the Quantum Internet requires a major paradigm shift of the whole protocol stack for harnessing the peculiarities of quantum entanglement and quantum information. In this context, we first overview the relevant literature about the Quantum Internet protocol stack. Then, stemming from this, we shed light on the open problems and required efforts toward the design of an effective and complete Quantum Internet protocol stack. To the best of the authors' knowledge, a survey of this type is the first of its kind. What emerges from this analysis is that the Quantum Internet, though still in its infancy, is a disruptive technology whose design requires an inter-disciplinary effort at the border between quantum physics, computer and telecommunications engineering.
Quantum Switch for the Quantum Internet: Noiseless Communications through Noisy Channels
Counter-intuitively, quantum mechanics enables quantum particles to propagate simultaneously among multiple space-time trajectories. Hence, a quantum information carrier can travel through different communication channels in a quantum superposition of different orders, so that the relative time-order of the communication channels becomes indefinite. This is realized by utilizing a quantum device known as quantum switch. In this paper, we investigate, from a communication-engineering perspective, the use of the quantum switch within the quantum teleportation process, one of the key functionalities of the Quantum Internet. Specifically, a theoretical analysis is conducted to quantify the performance gain that can be achieved by employing a quantum switch for the entanglement distribution process within the quantum teleportation with respect to the case of absence of quantum switch. This analysis reveals that, by utilizing the quantum switch, the quantum teleportation is heralded as a noiseless communication process with a probability that, remarkably and counter-intuitively, increases with the noise levels affecting the communication channels considered in the indefinite-order time combination.
Differentiable Quantum Architecture Search in Asynchronous Quantum Reinforcement Learning
The emergence of quantum reinforcement learning (QRL) is propelled by advancements in quantum computing (QC) and machine learning (ML), particularly through quantum neural networks (QNN) built on variational quantum circuits (VQC). These advancements have proven successful in addressing sequential decision-making tasks. However, constructing effective QRL models demands significant expertise due to challenges in designing quantum circuit architectures, including data encoding and parameterized circuits, which profoundly influence model performance. In this paper, we propose addressing this challenge with differentiable quantum architecture search (DiffQAS), enabling trainable circuit parameters and structure weights using gradient-based optimization. Furthermore, we enhance training efficiency through asynchronous reinforcement learning (RL) methods facilitating parallel training. Through numerical simulations, we demonstrate that our proposed DiffQAS-QRL approach achieves performance comparable to manually-crafted circuit architectures across considered environments, showcasing stability across diverse scenarios. This methodology offers a pathway for designing QRL models without extensive quantum knowledge, ensuring robust performance and fostering broader application of QRL.
Synergy Between Quantum Circuits and Tensor Networks: Short-cutting the Race to Practical Quantum Advantage
While recent breakthroughs have proven the ability of noisy intermediate-scale quantum (NISQ) devices to achieve quantum advantage in classically-intractable sampling tasks, the use of these devices for solving more practically relevant computational problems remains a challenge. Proposals for attaining practical quantum advantage typically involve parametrized quantum circuits (PQCs), whose parameters can be optimized to find solutions to diverse problems throughout quantum simulation and machine learning. However, training PQCs for real-world problems remains a significant practical challenge, largely due to the phenomenon of barren plateaus in the optimization landscapes of randomly-initialized quantum circuits. In this work, we introduce a scalable procedure for harnessing classical computing resources to provide pre-optimized initializations for PQCs, which we show significantly improves the trainability and performance of PQCs on a variety of problems. Given a specific optimization task, this method first utilizes tensor network (TN) simulations to identify a promising quantum state, which is then converted into gate parameters of a PQC by means of a high-performance decomposition procedure. We show that this learned initialization avoids barren plateaus, and effectively translates increases in classical resources to enhanced performance and speed in training quantum circuits. By demonstrating a means of boosting limited quantum resources using classical computers, our approach illustrates the promise of this synergy between quantum and quantum-inspired models in quantum computing, and opens up new avenues to harness the power of modern quantum hardware for realizing practical quantum advantage.
Phase sensitivity at the Heisenberg limit in an SU(1,1) interferometer via parity detection
We theoretically investigate the phase sensitivity achievable with parity detection in an SU(1,1) interferometer fed with a coherent state combined with a squeezed vacuum state. This interferometer uses two parametric amplifiers for beam splitting and recombination instead of beam splitters. We show that the phase estimation sensitivity approaches the Heisenberg limit and give the corresponding optimal condition. Moreover, we derive the quantum Cramér-Rao bound of the SU(1,1) interferometer.
Automated distribution of quantum circuits via hypergraph partitioning
Quantum algorithms are usually described as monolithic circuits, becoming large at modest input size. Near-term quantum architectures can only manage a small number of qubits. We develop an automated method to distribute quantum circuits over multiple agents, minimising quantum communication between them. We reduce the problem to hypergraph partitioning and then solve it with state-of-the-art optimisers. This makes our approach useful in practice, unlike previous methods. Our implementation is evaluated on five quantum circuits of practical relevance.
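To make the reduction concrete, here is a minimal sketch (a toy, not the paper's implementation, and using exhaustive search in place of the state-of-the-art partitioners it relies on): qubits become hypergraph vertices, each multi-qubit gate becomes a hyperedge, and the objective is a balanced bipartition cutting as few hyperedges as possible.

```python
from itertools import combinations

# Toy circuit: each gate is listed as the tuple of qubits it acts on (a hyperedge).
gates = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
qubits = range(4)

def cut_cost(part_a, gates):
    """Count gates whose qubits span both sides of the bipartition (non-local gates)."""
    a = set(part_a)
    return sum(1 for g in gates if any(q in a for q in g) and any(q not in a for q in g))

# Exhaustive search over balanced bipartitions; fine at toy sizes, replaced by a
# hypergraph partitioner for realistic circuits.
best = min(combinations(qubits, 2), key=lambda c: cut_cost(c, gates))
print("agent A holds qubits", list(best), "with", cut_cost(best, gates), "non-local gates")
```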
A Comparative Study of Quantum Optimization Techniques for Solving Combinatorial Optimization Benchmark Problems
Quantum optimization holds promise for addressing classically intractable combinatorial problems, yet a standardized framework for benchmarking its performance, particularly in terms of solution quality, computational speed, and scalability is still lacking. In this work, we introduce a comprehensive benchmarking framework designed to systematically evaluate a range of quantum optimization techniques against well-established NP-hard combinatorial problems. Our framework focuses on key problem classes, including the Multi-Dimensional Knapsack Problem (MDKP), Maximum Independent Set (MIS), Quadratic Assignment Problem (QAP), and Market Share Problem (MSP). Our study evaluates gate-based quantum approaches, including the Variational Quantum Eigensolver (VQE) and its CVaR-enhanced variant, alongside advanced quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) and its extensions. To address resource constraints, we incorporate qubit compression techniques like Pauli Correlation Encoding (PCE) and Quantum Random Access Optimization (QRAO). Experimental results, obtained from simulated quantum environments and classical solvers, provide key insights into feasibility, optimality gaps, and scalability. Our findings highlight both the promise and current limitations of quantum optimization, offering a structured pathway for future research and practical applications in quantum-enhanced decision-making.
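As a worked example of the kind of problem instance being benchmarked, the sketch below writes Maximum Independent Set as a QUBO, the form consumed by QAOA-style solvers; the graph, penalty weight, and brute-force check are illustrative assumptions, not the paper's benchmark set.

```python
import numpy as np
from itertools import product

# Toy graph: the 4-cycle, with a QUBO that rewards selected vertices and
# penalizes selecting both endpoints of any edge.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, penalty = 4, 2.0

Q = np.zeros((n, n))
np.fill_diagonal(Q, -1.0)          # -sum_i x_i  (maximize the set size)
for i, j in edges:
    Q[i, j] += penalty             # + penalty * x_i * x_j for each edge

# Brute-force check of the QUBO minimum (a quantum or heuristic solver would replace this).
best = min(product([0, 1], repeat=n), key=lambda x: np.array(x) @ Q @ np.array(x))
print("maximum independent set indicator:", best)   # e.g. (0, 1, 0, 1) for the 4-cycle
```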
QUASAR: Quantum Assembly Code Generation Using Tool-Augmented LLMs via Agentic RL
Designing and optimizing task-specific quantum circuits are crucial to leverage the advantage of quantum computing. Recent large language model (LLM)-based quantum circuit generation has emerged as a promising automatic solution. However, the fundamental challenges remain unaddressed: (i) parameterized quantum gates require precise numerical values for optimal performance, which also depend on multiple aspects, including the number of quantum gates, their parameters, and the layout/depth of the circuits. (ii) LLMs often generate low-quality or incorrect quantum circuits due to the lack of quantum domain-specific knowledge. We propose QUASAR, an agentic reinforcement learning (RL) framework for quantum circuits generation and optimization based on tool-augmented LLMs. To align the LLM with quantum-specific knowledge and improve the generated quantum circuits, QUASAR designs (i) a quantum circuit verification approach with external quantum simulators and (ii) a sophisticated hierarchical reward mechanism in RL training. Extensive evaluation shows improvements in both syntax and semantic performance of the generated quantum circuits. When augmenting a 4B LLM, QUASAR has achieved the validity of 99.31% in Pass@1 and 100% in Pass@10, outperforming industrial LLMs of GPT-4o, GPT-5 and DeepSeek-V3 and several supervised-fine-tuning (SFT)-only and RL-only baselines.
Enhancing Quantum Variational Algorithms with Zero Noise Extrapolation via Neural Networks
In the emergent realm of quantum computing, the Variational Quantum Eigensolver (VQE) stands out as a promising algorithm for solving complex quantum problems, especially in the noisy intermediate-scale quantum (NISQ) era. However, the ubiquitous presence of noise in quantum devices often limits the accuracy and reliability of VQE outcomes. This research introduces a novel approach to ameliorate this challenge by utilizing neural networks for zero noise extrapolation (ZNE) in VQE computations. By employing the Qiskit framework, we crafted parameterized quantum circuits using the RY-RZ ansatz and examined their behavior under varying levels of depolarizing noise. Our investigations spanned from determining the expectation values of a Hamiltonian, defined as a tensor product of Z operators, under different noise intensities to extracting the ground state energy. To bridge the observed outcomes under noise with the ideal noise-free scenario, we trained a Feed Forward Neural Network on the error probabilities and their associated expectation values. Remarkably, our model proficiently predicted the VQE outcome under hypothetical noise-free conditions. By juxtaposing the simulation results with real quantum device executions, we unveiled the discrepancies induced by noise and showcased the efficacy of our neural network-based ZNE technique in rectifying them. This integrative approach not only paves the way for enhanced accuracy in VQE computations on NISQ devices but also underlines the immense potential of hybrid quantum-classical paradigms in circumventing the challenges posed by quantum noise. Through this research, we envision a future where quantum algorithms can be reliably executed on noisy devices, bringing us one step closer to realizing the full potential of quantum computing.
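The extrapolation step can be pictured with a small stand-in experiment: fit a feed-forward regressor to (noise level, expectation value) pairs and query it at zero noise. The decay model and data below are synthetic assumptions for illustration, not the paper's RY-RZ ansatz or hardware data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: an expectation value decaying with depolarizing error rate p.
p = rng.uniform(0.01, 0.3, size=(200, 1))
exact = -1.0                                            # assumed noise-free value
y = exact * (1.0 - p.ravel()) ** 4 + rng.normal(0.0, 0.01, size=200)

# Feed-forward network mapping error rate -> expectation value, queried at p = 0.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(p, y)
print("zero-noise extrapolation:", model.predict([[0.0]])[0], "(target:", exact, ")")
```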
Quantum Monte Carlo simulations in the restricted Hilbert space of Rydberg atom arrays
Rydberg atom arrays have emerged as a powerful platform to simulate a number of exotic quantum ground states and phase transitions. To verify these capabilities numerically, we develop a versatile quantum Monte Carlo sampling technique which operates in the reduced Hilbert space generated by enforcing the constraint of a Rydberg blockade. We use the framework of stochastic series expansion and show that in the restricted space, the configuration space of operator strings can be understood as a hard rod gas in d+1 dimensions. We use this mapping to develop cluster algorithms which can be visualized as various non-local movements of rods. We study the efficiency of each of our updates individually and collectively. To elucidate the utility of the algorithm, we show that it can efficiently generate the phase diagram of a Rydberg atom array, to temperatures much smaller than all energy scales involved, on a Kagomé link lattice. This is of broad interest as the presence of a Z_2 spin liquid has been hypothesized recently.
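The size of the restricted Hilbert space is easy to see in a toy setting: on a chain with nearest-neighbour blockade, the allowed configurations are exactly the bitstrings with no two adjacent excitations, as in this small enumeration (an illustration of the constraint only, not of the stochastic series expansion updates).

```python
from itertools import product

def blockade_allowed(config):
    """True if no two adjacent sites are simultaneously excited (nearest-neighbour blockade)."""
    return all(not (a and b) for a, b in zip(config, config[1:]))

L = 10  # chain length
allowed = [c for c in product([0, 1], repeat=L) if blockade_allowed(c)]
print(len(allowed), "of", 2 ** L, "configurations survive the blockade")
# For an open chain the count follows a Fibonacci sequence (144 for L = 10).
```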
Improved FRQI on superconducting processors and its restrictions in the NISQ era
In image processing, the amount of data to be processed grows rapidly, in particular when imaging methods yield images of more than two dimensions or time series of images. Thus, efficient processing is a challenge, as data sizes may push even supercomputers to their limits. Quantum image processing promises to encode images with logarithmically fewer qubits than classical pixels in the image. In theory, this is a huge advance, but so far not many experiments have been conducted in practice, in particular on real backends. Often, the precise conversion of classical data to quantum states, the exact implementation, and the interpretation of the measurements in the classical context are challenging. We investigate these practical questions in this paper. In particular, we study the feasibility of the Flexible Representation of Quantum Images (FRQI). Furthermore, we experimentally determine the limit in the current noisy intermediate-scale quantum era, i.e., up to which size an image can be encoded, both on simulators and on real backends. Finally, we propose a method for simplifying the circuits needed for the FRQI. With our alteration, the number of gates needed, especially of the error-prone controlled-NOT gates, can be reduced. As a consequence, the size of manageable images increases.
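For reference, the FRQI state itself is simple to write down classically; the sketch below builds the statevector for a 2x2 image in NumPy (the intensity-to-angle mapping and qubit ordering are conventional choices, and the circuit-level implementation studied in the paper is a separate matter).

```python
import numpy as np

# 2x2 grayscale image with intensities in [0, 1].
image = np.array([[0.0, 0.25], [0.5, 1.0]])
thetas = image.ravel() * np.pi / 2          # map intensity to an angle in [0, pi/2]
n_pos = thetas.size                         # 4 positions -> 2 position qubits

# FRQI state: (1/sqrt(n_pos)) * sum_i (cos(theta_i)|0> + sin(theta_i)|1>) (x) |i>,
# stored here with the colour qubit as the most significant one.
state = np.zeros(2 * n_pos)
state[:n_pos] = np.cos(thetas) / np.sqrt(n_pos)   # colour-qubit |0> block
state[n_pos:] = np.sin(thetas) / np.sqrt(n_pos)   # colour-qubit |1> block
print("norm:", np.linalg.norm(state))             # 1.0
```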
Qiskit HumanEval: An Evaluation Benchmark For Quantum Code Generative Models
Quantum programs are typically developed using quantum Software Development Kits (SDKs). The rapid advancement of quantum computing necessitates new tools to streamline this development process, and one such tool could be Generative Artificial Intelligence (GenAI). In this study, we introduce and use the Qiskit HumanEval dataset, a hand-curated collection of tasks designed to benchmark the ability of Large Language Models (LLMs) to produce quantum code using Qiskit - a quantum SDK. This dataset consists of more than 100 quantum computing tasks, each accompanied by a prompt, a canonical solution, a comprehensive test case, and a difficulty scale to evaluate the correctness of the generated solutions. We systematically assess the performance of a set of LLMs against the Qiskit HumanEval dataset's tasks and focus on the models' ability to produce executable quantum code. Our findings not only demonstrate the feasibility of using LLMs for generating quantum code but also establish a new benchmark for ongoing advancements in the field and encourage further exploration and development of GenAI-driven tools for quantum code generation.
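A HumanEval-style benchmark of this kind is typically scored by executing the prompt plus a candidate completion and then running the task's tests; the sketch below shows that loop with a deliberately trivial task, and its field names (prompt, canonical_solution, test) are assumptions rather than the dataset's documented schema.

```python
# Hypothetical task record; field names are illustrative, not the dataset's exact schema.
task = {
    "prompt": 'def num_bell_qubits():\n    """Return the number of qubits in a Bell-state circuit."""\n',
    "canonical_solution": "    return 2\n",
    "test": "assert num_bell_qubits() == 2\n",
}

def passes(completion: str, task: dict) -> bool:
    """Execute prompt + completion in a fresh namespace, then run the task's tests."""
    namespace: dict = {}
    try:
        exec(task["prompt"] + completion, namespace)
        exec(task["test"], namespace)
        return True
    except Exception:
        return False

print(passes(task["canonical_solution"], task))   # True
print(passes("    return 0\n", task))             # False
```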
Optimizing quantum noise-induced reservoir computing for nonlinear and chaotic time series prediction
Quantum reservoir computing is rapidly emerging as an approach to sequential and time-series data prediction in quantum machine learning. We make advancements to the quantum noise-induced reservoir, in which reservoir noise is used as a resource to generate expressive, nonlinear signals that are efficiently learned with a single linear output layer. We address the need for quantum reservoir tuning with a novel and generally applicable approach to quantum circuit parameterization, in which tunable noise models are programmed into the quantum reservoir circuit so that they can be fully controlled for effective optimization. Our systematic approach also reduces the number of qubits and the entanglement scheme complexity of the quantum reservoir circuits. We show that with only a single noise model and small memory capacities, excellent simulation results are obtained on nonlinear benchmarks that include the Mackey-Glass system for 100 steps ahead in the challenging chaotic regime.
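The single linear output layer is the classical half of such a scheme: reservoir observables become features and a ridge regression supplies the readout, as in this stand-in sketch (random features replace the actual quantum reservoir signals).

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in reservoir features: rows are time steps, columns are measured observables.
X = rng.normal(size=(500, 16))
w_true = rng.normal(size=16)
y = X @ w_true + 0.01 * rng.normal(size=500)     # target time series

# Single linear readout trained by ridge regression (closed form).
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(16), X.T @ y)
print("relative readout error:", np.linalg.norm(X @ W - y) / np.linalg.norm(y))
```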
Efficient Self-Consistent Quantum Comb Tomography on the Product Stiefel Manifold
Characterizing non-Markovian quantum dynamics is currently hindered by the self-inconsistency and high computational complexity of existing quantum comb tomography (QCT) methods. In this work, we propose a self-consistent framework that unifies the quantum comb, instrument set, and initial states into a single geometric entity, termed the Comb-Instrument-State (CIS) set. We demonstrate that the CIS set naturally resides on a product Stiefel manifold, allowing the tomography problem to be solved via efficient unconstrained Riemannian optimization while automatically preserving physical constraints. Numerical simulations confirm that our approach is computationally scalable and robust against gate definition errors, significantly outperforming conventional isometry-based QCT methods. Our work indicates the potential to efficiently learn quantum combs with fewer computational resources.
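The basic ingredient of such Riemannian optimization is a retraction that keeps iterates on the Stiefel manifold; a minimal QR-based version is sketched below (a generic textbook retraction, not the paper's specific update rule for the CIS set).

```python
import numpy as np

def qr_retraction(X, xi):
    """Map X + xi back onto the Stiefel manifold St(n, p) via a QR decomposition."""
    Q, R = np.linalg.qr(X + xi)
    # Fix column signs so the factorization is unique (positive diagonal of R).
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.normal(size=(6, 3)))   # a point on St(6, 3)
xi = 0.1 * rng.normal(size=(6, 3))             # a candidate step
Y = qr_retraction(X, xi)
print("orthonormality error:", np.linalg.norm(Y.T @ Y - np.eye(3)))
```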
The Computational and Latency Advantage of Quantum Communication Networks
This article summarises the current status of classical communication networks and identifies some critical open research challenges that can only be solved by leveraging quantum technologies. To date, the main goal of quantum communication networks has been security. However, quantum networks can do more than just exchange secure keys or serve the needs of quantum computers. In fact, the scientific community is still investigating the possible use cases and benefits that quantum communication networks can bring. Thus, this article aims to point out and clearly describe how quantum communication networks can enhance in-network distributed computing and reduce the overall end-to-end latency, beyond the intrinsic limits of classical technologies. Furthermore, we also explain how entanglement can reduce the communication complexity (overhead) that future classical virtualised networks will experience.
Variational Quantum algorithm for Poisson equation
The Poisson equation has wide applications in many areas of science and engineering. Although there are some quantum algorithms that can efficiently solve the Poisson equation, they generally require a fault-tolerant quantum computer, which is beyond current technology. In this paper, we propose a Variational Quantum Algorithm (VQA) to solve the Poisson equation, which can be executed on Noisy Intermediate-Scale Quantum (NISQ) devices. In detail, we first adopt the finite difference method to transform the Poisson equation into a linear system. Then, according to the special structure of the linear system, we find an explicit tensor product decomposition of its coefficient matrix, with only 2 log n + 1 terms, under a specific set of simple operators, where n is the dimension of the coefficient matrix. This implies that the proposed VQA only needs O(log n) measurements, which dramatically reduces the required quantum resources. Additionally, we perform quantum Bell measurements to efficiently evaluate the expectation values of the simple operators. Numerical experiments demonstrate that our algorithm can effectively solve the Poisson equation.
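The first step of the method, turning the 1D Poisson equation into a linear system, looks as follows; the tridiagonal matrix A = 2I - S - S^T (with S the shift matrix) is the object whose tensor-product structure the paper then exploits. The grid size and source term here are arbitrary choices for illustration.

```python
import numpy as np

n = 8                                   # interior grid points
h = 1.0 / (n + 1)

# 1D Poisson -u'' = f with central differences: A u = h^2 f.
S = np.eye(n, k=1)                      # shift matrix
A = 2 * np.eye(n) - S - S.T             # tridiagonal coefficient matrix

f = np.ones(n)                          # constant source term
u = np.linalg.solve(A, h ** 2 * f)      # the VQA would approximate this solve variationally
print(u)                                # approximates u(x) = x(1 - x)/2 on the grid
```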
Curvature-Aware Optimization of Noisy Variational Quantum Circuits via Weighted Projective Line Geometry
We develop a differential-geometric framework for variational quantum circuits in which noisy single- and multi-qubit parameter spaces are modeled by weighted projective lines (WPLs). Starting from the pure-state Bloch sphere CP1, we show that realistic hardware noise induces anisotropic contractions of the Bloch ball that can be represented by a pair of physically interpretable parameters (lambda_perp, lambda_parallel). These parameters determine a unique WPL metric g_WPL(a/b, b) whose scalar curvature is R = 2 / b^2, yielding a compact and channel-resolved geometric surrogate for the intrinsic information structure of noisy quantum circuits. We develop a tomography-to-geometry pipeline that extracts (lambda_perp, lambda_parallel) from hardware data and maps them to the WPL parameters (a/b, b, R). Experiments on IBM Quantum backends show that the resulting WPL geometries accurately capture anisotropic curvature deformation across calibration periods. Finally, we demonstrate that WPL-informed quantum natural gradients (WPL-QNG) provide stable optimization dynamics for noisy variational quantum eigensolvers and enable curvature-aware mitigation of barren plateaus.
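For intuition, the anisotropic contraction pair can be read off from a standard single-qubit channel; the display below uses amplitude damping with decay probability gamma as an assumed example, which need not coincide with the calibration model used in the paper.

```latex
% Amplitude-damping action on the Bloch vector (standard single-qubit example):
(x, y, z) \;\mapsto\; \bigl(\sqrt{1-\gamma}\,x,\ \sqrt{1-\gamma}\,y,\ (1-\gamma)\,z + \gamma\bigr),
\qquad
\lambda_{\perp} = \sqrt{1-\gamma}, \quad \lambda_{\parallel} = 1-\gamma .
```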
Post-Quantum Cryptography: Securing Digital Communication in the Quantum Era
The advent of quantum computing poses a profound threat to traditional cryptographic systems, exposing vulnerabilities that compromise the security of digital communication channels reliant on RSA, ECC, and similar classical encryption methods. Quantum algorithms, notably Shor's algorithm, exploit the inherent computational power of quantum computers to efficiently solve mathematical problems underlying these cryptographic schemes. In response, post-quantum cryptography (PQC) emerged as a critical field aimed at developing resilient cryptographic algorithms impervious to quantum attacks. This paper delineates the vulnerabilities of classical cryptographic systems to quantum attacks, elucidates the principles of quantum computing, and introduces various PQC algorithms such as lattice-based cryptography, code-based cryptography, hash-based cryptography, and multivariate polynomial cryptography. Highlighting the importance of PQC in securing digital communication amidst quantum computing advancements, this research underscores its pivotal role in safeguarding data integrity, confidentiality, and authenticity in the face of emerging quantum threats.
Foundations for Near-Term Quantum Natural Language Processing
We provide conceptual and mathematical foundations for near-term quantum natural language processing (QNLP), and do so in quantum computer scientist friendly terms. We opted for an expository presentation style, and provide references for supporting empirical evidence and formal statements concerning mathematical generality. We recall how the quantum model for natural language that we employ canonically combines linguistic meanings with rich linguistic structure, most notably grammar. In particular, the fact that it takes a quantum-like model to combine meaning and structure, establishes QNLP as quantum-native, on par with simulation of quantum systems. Moreover, the now leading Noisy Intermediate-Scale Quantum (NISQ) paradigm for encoding classical data on quantum hardware, variational quantum circuits, makes NISQ exceptionally QNLP-friendly: linguistic structure can be encoded as a free lunch, in contrast to the apparently exponentially expensive classical encoding of grammar. Quantum speed-up for QNLP tasks has already been established in previous work with Will Zeng. Here we provide a broader range of tasks which all enjoy the same advantage. Diagrammatic reasoning is at the heart of QNLP. Firstly, the quantum model interprets language as quantum processes via the diagrammatic formalism of categorical quantum mechanics. Secondly, these diagrams are via ZX-calculus translated into quantum circuits. Parameterisations of meanings then become the circuit variables to be learned. Our encoding of linguistic structure within quantum circuits also embodies a novel approach for establishing word-meanings that goes beyond the current standards in mainstream AI, by placing linguistic structure at the heart of Wittgenstein's meaning-is-context.
Qiskit Code Assistant: Training LLMs for generating Quantum Computing Code
Code Large Language Models (Code LLMs) have emerged as powerful tools, revolutionizing the software development landscape by automating the coding process and reducing time and effort required to build applications. This paper focuses on training Code LLMs to specialize in the field of quantum computing. We begin by discussing the unique needs of quantum computing programming, which differ significantly from classical programming approaches or languages. A Code LLM specializing in quantum computing requires a foundational understanding of quantum computing and quantum information theory. However, the scarcity of available quantum code examples and the rapidly evolving field, which necessitates continuous dataset updates, present significant challenges. Moreover, we discuss our work on training Code LLMs to produce high-quality quantum code using the Qiskit library. This work includes an examination of the various aspects of the LLMs used for training and the specific training conditions, as well as the results obtained with our current models. To evaluate our models, we have developed a custom benchmark, similar to HumanEval, which includes a set of tests specifically designed for the field of quantum computing programming using Qiskit. Our findings indicate that our model outperforms existing state-of-the-art models in quantum computing tasks. We also provide examples of code suggestions, comparing our model to other relevant code LLMs. Finally, we introduce a discussion on the potential benefits of Code LLMs for quantum computing computational scientists, researchers, and practitioners. We also explore various features and future work that could be relevant in this context.
Quantum Denoising Diffusion Models
In recent years, machine learning models like DALL-E, Craiyon, and Stable Diffusion have gained significant attention for their ability to generate high-resolution images from concise descriptions. Concurrently, quantum computing is showing promising advances, especially with quantum machine learning which capitalizes on quantum mechanics to meet the increasing computational requirements of traditional machine learning algorithms. This paper explores the integration of quantum machine learning and variational quantum circuits to augment the efficacy of diffusion-based image generation models. Specifically, we address two challenges of classical diffusion models: their low sampling speed and the extensive parameter requirements. We introduce two quantum diffusion models and benchmark their capabilities against their classical counterparts using MNIST digits, Fashion MNIST, and CIFAR-10. Our models surpass the classical models with similar parameter counts in terms of performance metrics FID, SSIM, and PSNR. Moreover, we introduce a consistency model unitary single sampling architecture that combines the diffusion procedure into a single step, enabling a fast one-step image generation.
Quantum Verifiable Rewards for Post-Training Qiskit Code Assistant
Qiskit is an open-source quantum computing framework that allows users to design, simulate, and run quantum circuits on real quantum hardware. We explore post-training techniques for LLMs to assist in writing Qiskit code. We introduce quantum verification as an effective method for ensuring code quality and executability on quantum hardware. To support this, we developed a synthetic data pipeline that generates quantum problem-unit test pairs and used it to create preference data for aligning LLMs with DPO. Additionally, we trained models using GRPO, leveraging quantum-verifiable rewards provided by the quantum hardware. Our best-performing model, combining DPO and GRPO, surpasses the strongest open-source baselines on the challenging Qiskit-HumanEval-hard benchmark.
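A quantum-verifiable reward of this flavour can be pictured as "partial credit for valid syntax, full credit only when the unit test passes"; the sketch below is a simplified, hypothetical shaping and omits the hardware execution the paper relies on.

```python
def reward(generated_code: str, unit_test: str) -> float:
    """Toy hierarchical reward: 0 for a syntax error, partial credit for running code,
    full credit only if the accompanying unit test passes."""
    try:
        compile(generated_code, "<generated>", "exec")
    except SyntaxError:
        return 0.0
    namespace: dict = {}
    try:
        exec(generated_code, namespace)
        exec(unit_test, namespace)      # e.g. asserts on simulator output
        return 1.0
    except Exception:
        return 0.25                     # syntactically valid, but fails verification

print(reward("x = 1 + 1", "assert x == 2"))   # 1.0
print(reward("x = 1 +", "assert x == 2"))     # 0.0
```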
Quantum coherence and distribution of N-partite bosonic fields in noninertial frame
We study the quantum coherence and its distribution for N-partite GHZ and W states of bosonic fields in noninertial frames with an arbitrary number of accelerated observers. We find that the coherence of both the GHZ and W states decreases with acceleration and freezes in the limit of infinite acceleration. The freezing value of the coherence depends on the number of accelerated observers. The coherence of the N-partite GHZ state is genuinely global, and no coherence exists in any subsystem. For the N-partite W state, however, the coherence is essentially of bipartite type, and the total coherence is equal to the sum of the coherence of all the bipartite subsystems.
Deep Neuromorphic Networks with Superconducting Single Flux Quanta
Conventional semiconductor-based integrated circuits are gradually approaching fundamental scaling limits. Many prospective solutions have recently emerged to supplement or replace both the technology on which basic devices are built and the architecture of data processing. Neuromorphic circuits are a promising approach to computing where techniques used by the brain to achieve high efficiency are exploited. Many existing neuromorphic circuits rely on unconventional and useful properties of novel technologies to better mimic the operation of the brain. One such technology is single flux quantum (SFQ) logic -- a cryogenic superconductive technology in which the data are represented by quanta of magnetic flux (fluxons) produced and processed by Josephson junctions embedded within inductive loops. The movement of a fluxon within a circuit produces a quantized voltage pulse (SFQ pulse), resembling a neuronal spiking event. These circuits routinely operate at clock frequencies of tens to hundreds of gigahertz, making SFQ a natural technology for processing high frequency pulse trains. Prior proposals for SFQ neural networks often require energy-expensive fluxon conversions, involve heterogeneous technologies, or exclusively focus on device level behavior. In this paper, a design methodology for deep single flux quantum neuromorphic networks is presented. Synaptic and neuronal circuits based on SFQ technology are presented and characterized. Based on these primitives, a deep neuromorphic XOR network is evaluated as a case study, both at the architectural and circuit levels, achieving wide classification margins. The proposed methodology does not employ unconventional superconductive devices or semiconductor transistors. The resulting networks are tunable by an external current, making this proposed system an effective approach for scalable cryogenic neuromorphic computing.
Covariant quantum kernels for data with group structure
The use of kernel functions is a common technique to extract important features from data sets. A quantum computer can be used to estimate kernel entries as transition amplitudes of unitary circuits. Quantum kernels exist that, subject to computational hardness assumptions, cannot be computed classically. It is an important challenge to find quantum kernels that provide an advantage in the classification of real-world data. We introduce a class of quantum kernels that can be used for data with a group structure. The kernel is defined in terms of a unitary representation of the group and a fiducial state that can be optimized using a technique called kernel alignment. We apply this method to a learning problem on a coset-space that embodies the structure of many essential learning problems on groups. We implement the learning algorithm with 27 qubits on a superconducting processor.
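The kernel construction is easy to emulate classically at toy scale: pick a unitary representation D(g) and a fiducial state |psi>, and set K(g, h) = |<psi| D(g)^† D(h) |psi>|^2. The example below uses the cyclic group Z_8 represented (up to a global phase) by powers of a single-qubit Z rotation, which is purely illustrative and far from the 27-qubit coset-space problem in the paper.

```python
import numpy as np

theta = 2 * np.pi / 8
U = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])   # R_z(2*pi/8)
psi = np.array([1.0, 1.0]) / np.sqrt(2)                          # fiducial state |+>

def D(g):
    """Representation of g in Z_8 as the g-th power of U (up to a global phase)."""
    return np.linalg.matrix_power(U, g)

def kernel(g, h):
    """Covariant kernel entry K(g, h) = |<psi| D(g)^dagger D(h) |psi>|^2."""
    return abs(psi.conj() @ (D(g).conj().T @ D(h) @ psi)) ** 2

print(kernel(3, 3))   # 1.0 for identical group elements
print(kernel(0, 4))   # 0.0: |+> rotated by pi about Z becomes |->
```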
Fine-Tuning Large Language Models on Quantum Optimization Problems for Circuit Generation
Large language models (LLMs) have achieved remarkable outcomes in addressing complex problems, including math, coding, and analyzing large amounts of scientific reports. Yet few works have explored the potential of LLMs in quantum computing. The most challenging problem is how to leverage LLMs to automatically generate quantum circuits at a large scale. In this paper, we address this challenge by fine-tuning LLMs and injecting domain-specific knowledge of quantum computing. In particular, we investigate mechanisms to generate training data sets and construct an end-to-end pipeline to fine-tune pre-trained LLMs that produce parameterized quantum circuits for optimization problems. We have prepared 14,000 quantum circuits covering a substantial part of the quantum optimization landscape: 12 optimization problem instances and their optimized QAOA, VQE, and adaptive VQE circuits. The fine-tuned LLMs can construct syntactically correct parametrized quantum circuits in the most recent OpenQASM 3.0. We have evaluated the quality of the parameters by comparing them to the optimized expectation values and distributions. Our evaluation shows that the fine-tuned LLM outperforms state-of-the-art models and that the parameters are better than random. The LLM-generated parametrized circuits and initial parameters can be used as a starting point for further optimization, e.g., as templates in quantum machine learning and as a benchmark for compilers and hardware.
Power of sequential protocols in hidden quantum channel discrimination
In many natural and engineered systems, unknown quantum channels act on a subsystem that cannot be directly controlled and measured, but is instead learned through a controllable subsystem that weakly interacts with it. We study quantum channel discrimination (QCD) under these restrictions, which we call hidden system QCD (HQCD). We find that sequential protocols achieve perfect discrimination and saturate the Heisenberg limit. In contrast, depth-1 parallel and multi-shot protocols cannot solve HQCD. This suggests that sequential protocols are superior in experimentally realistic situations.
Quantum error correction with an Ising machine under circuit-level noise
Efficient decoding to estimate error locations from the outcomes of syndrome measurements is a prerequisite for quantum error correction. Decoding in the presence of circuit-level noise, including measurement errors, must be considered for actual quantum computing devices. In this work, we develop a decoder for circuit-level noise that solves the error estimation problem as an Ising-type optimization problem. We confirm that the threshold theorem for the surface code under circuit-level noise is reproduced, with an error threshold of approximately 0.4%. We also demonstrate an advantage of the decoder in that the Y error detection rate can be improved compared with other matching-based decoders. Our results reveal that a lower logical error rate can be obtained using our algorithm compared with that of the minimum-weight perfect matching algorithm.
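The core idea, casting decoding as energy minimization over binary error variables, can be shown on a toy repetition code (data-qubit errors only, brute-force minimization standing in for the Ising machine, and none of the circuit-level subtleties the paper actually handles).

```python
from itertools import product

# Toy 3-qubit repetition code: parity checks on qubit pairs (0,1) and (1,2).
checks = [(0, 1), (1, 2)]
syndrome = [1, 0]              # e.g. produced by a single error on qubit 0
penalty = 10.0                 # weight enforcing syndrome consistency

def cost(errors):
    """Ising-style objective: penalize violated checks, prefer low-weight error patterns."""
    violations = sum((errors[i] ^ errors[j]) != s for (i, j), s in zip(checks, syndrome))
    return penalty * violations + sum(errors)

best = min(product([0, 1], repeat=3), key=cost)
print("estimated error pattern:", best)    # (1, 0, 0)
```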
Superpositional Gradient Descent: Harnessing Quantum Principles for Model Training
Large language models (LLMs) are increasingly trained with classical optimization techniques like AdamW to improve convergence and generalization. However, the mechanisms by which quantum-inspired methods enhance classical training remain underexplored. We introduce Superpositional Gradient Descent (SGD), a novel optimizer linking gradient updates with quantum superposition by injecting quantum circuit perturbations. We present a mathematical framework and implement hybrid quantum-classical circuits in PyTorch and Qiskit. On synthetic sequence classification and large-scale LLM fine-tuning, SGD converges faster and yields lower final loss than AdamW. Despite promising results, scalability and hardware constraints limit adoption. Overall, this work provides new insights into the intersection of quantum computing and deep learning, suggesting practical pathways for leveraging quantum principles to control and enhance model behavior.
Generic Two-Mode Gaussian States as Quantum Sensors
Gaussian quantum channels constitute a cornerstone of continuous-variable quantum information science, underpinning a wide array of protocols in quantum optics and quantum metrology. While the action of such channels on arbitrary states is well-characterized under full channel knowledge, we address the inverse problem, namely, the precise estimation of fundamental channel parameters, including the beam splitter transmissivity and the two-mode squeezing amplitude. Employing the quantum Fisher information (QFI) as a benchmark for metrological sensitivity, we demonstrate that the symmetry inherent in mode mixing critically governs the amplification of QFI, thereby enabling high-precision parameter estimation. In addition, we investigate quantum thermometry by estimating the average photon number of thermal states, revealing that the transmissivity parameter significantly modulates estimation precision. Our results underscore the metrological utility of two-mode Gaussian states and establish a robust framework for parameter inference in noisy and dynamically evolving quantum systems.
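For readers less familiar with the benchmark being used, the QFI enters through the symmetric logarithmic derivative and the quantum Cramér-Rao bound; the standard definitions are recalled below (with nu the number of independent repetitions).

```latex
\partial_{\theta}\rho_{\theta}
  = \tfrac{1}{2}\left(L_{\theta}\rho_{\theta} + \rho_{\theta}L_{\theta}\right),
\qquad
F_{Q}(\theta) = \mathrm{Tr}\!\left[\rho_{\theta}L_{\theta}^{2}\right],
\qquad
\mathrm{Var}(\hat{\theta}) \;\ge\; \frac{1}{\nu\,F_{Q}(\theta)} .
```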
Quantum simulation of generic spin exchange models in Floquet-engineered Rydberg atom arrays
Although quantum simulation can give insight into elusive or intractable physical phenomena, many quantum simulators are unavoidably limited in the models they mimic. Such is also the case for atom arrays interacting via Rydberg states - a platform potentially capable of simulating any kind of spin exchange model, albeit with currently unattainable experimental capabilities. Here, we propose a new route towards simulating generic spin exchange Hamiltonians in atom arrays, using Floquet engineering with both global and local control. To demonstrate the versatility and applicability of our approach, we numerically investigate the generation of several spin exchange models which have yet to be realized in atom arrays, using only previously-demonstrated experimental capabilities. Our proposed scheme can be readily explored in many existing setups, providing a path to investigate a large class of exotic quantum spin models.
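For reference, "generic spin exchange" is usually taken to mean an XYZ-type Hamiltonian with independently tunable couplings, schematically as below; the precise class of models targeted by the Floquet protocol is specified in the paper.

```latex
H \;=\; \sum_{i<j} \left( J^{x}_{ij}\, S^{x}_{i} S^{x}_{j}
                        + J^{y}_{ij}\, S^{y}_{i} S^{y}_{j}
                        + J^{z}_{ij}\, S^{z}_{i} S^{z}_{j} \right).
```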
QAISim: A Toolkit for Modeling and Simulation of AI in Quantum Cloud Computing Environments
Quantum computing offers new ways to explore the theory of computation via the laws of quantum mechanics. Due to the rising demand for quantum computing resources, there is growing interest in developing cloud-based quantum resource sharing platforms that enable researchers to test and execute their algorithms on real quantum hardware. These cloud-based systems face a fundamental challenge in efficiently allocating quantum hardware resources to fulfill the growing computational demand of modern Internet of Things (IoT) applications. So far, attempts to achieve efficient resource allocation have ranged from heuristic-based solutions to machine learning. In this work, we employ quantum reinforcement learning based on parameterized quantum circuits to address the resource allocation problem and support large IoT networks. We propose a Python-based toolkit called QAISim for the simulation and modeling of Quantum Artificial Intelligence (QAI) models used to design resource management policies in quantum cloud environments. We have simulated policy gradient and Deep Q-Learning algorithms for reinforcement learning. QAISim exhibits a substantial reduction in model complexity compared to its classical counterparts, with fewer trainable variables.
Variational Quantum Algorithms
Applications such as simulating complicated quantum systems or solving large-scale linear algebra problems are very challenging for classical computers due to their extremely high computational cost. Quantum computers promise a solution, although fault-tolerant quantum computers will likely not be available in the near future. Current quantum devices have serious constraints, including limited numbers of qubits and noise processes that limit circuit depth. Variational Quantum Algorithms (VQAs), which use a classical optimizer to train a parametrized quantum circuit, have emerged as a leading strategy to address these constraints. VQAs have now been proposed for essentially all applications that researchers have envisioned for quantum computers, and they appear to be the best hope for obtaining quantum advantage. Nevertheless, challenges remain, including the trainability, accuracy, and efficiency of VQAs. Here we overview the field of VQAs, discuss strategies to overcome their challenges, and highlight the exciting prospects for using them to obtain quantum advantage.
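The generic VQA loop can be summarized in two standard formulas: a device-evaluated cost minimized by a classical optimizer, and (for gates generated by Pauli words) the parameter-shift rule for its gradients. These are textbook expressions, not specific to this review's notation.

```latex
\theta^{\star} = \arg\min_{\theta} C(\theta),
\qquad
C(\theta) = \langle 0 \,|\, U^{\dagger}(\theta)\, H\, U(\theta) \,|\, 0\rangle,
\qquad
\frac{\partial C}{\partial \theta_{j}}
  = \tfrac{1}{2}\!\left[\, C\!\left(\theta_{j} + \tfrac{\pi}{2}\right)
                         - C\!\left(\theta_{j} - \tfrac{\pi}{2}\right) \right]
  \quad \text{for } U_{j}(\theta_{j}) = e^{-i\theta_{j}P_{j}/2}.
```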
Exact Coset Sampling for Quantum Lattice Algorithms
We give a simple, fully correct, and assumption-light replacement for the contested "domain-extension" in Step 9 of a recent windowed-QFT lattice algorithm with complex-Gaussian windows (Chen, 2024). The published Step 9 suffers from a periodicity/support mismatch. We present a pair-shift difference construction that coherently cancels all unknown offsets, produces an exact uniform CRT-coset state over Z_P, and then uses the QFT to enforce the intended modular linear relation. The unitary is reversible, uses poly(log M_2) gates, and preserves the algorithm's asymptotics. Project Page: https://github.com/yifanzhang-pro/quantum-lattice.
Evaluating noises of boson sampling with statistical benchmark methods
The lack of self-correcting codes hinders the development of large-scale, robust boson sampling. Therefore, it is important to know the noise levels in order to cautiously demonstrate quantum computational advantage or realize certain tasks. Based on statistical benchmark methods such as correlators and clouds, which were initially proposed to discriminate boson sampling from classical mockups, we quantitatively evaluate noise from photon partial distinguishability and from photon loss compensated by dark counts. This is feasible because the output distribution imbalances, which are the result of multi-photon interference, are suppressed by noise. This is why the evaluation performs better when high-order correlators or the corresponding clouds are employed. Our results indicate that statistical benchmark methods can also be used to evaluate noise in boson sampling.
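The lowest-order statistic used in such benchmarks, the two-mode correlator C_ij = <n_i n_j> - <n_i><n_j>, is straightforward to estimate from samples; the sketch below uses synthetic Poissonian counts purely as a placeholder for real boson-sampling data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in samples: rows are output events, columns are photon counts per mode.
samples = rng.poisson(0.3, size=(5000, 6))

# Two-mode correlators C_ij = <n_i n_j> - <n_i><n_j>.
mean = samples.mean(axis=0)
C = (samples[:, :, None] * samples[:, None, :]).mean(axis=0) - np.outer(mean, mean)
print(np.round(C, 3))
```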
