When black cats prowl and pumpkins gleam, may luck be yours on Halloween. (Unknown)
In recent years, conferences, workshops, articles, and books on quantum computing have multiplied, opening new ways to process information and to reconsider the limits of classical systems. The interplay between classical and quantum research has also driven hybrid algorithms that combine familiar techniques with quantum resources. This article introduces the essentials of quantum computing and explores how they can be applied to data science.
With the 2025 Nobel Prize in Physics [1] recognizing advances in quantum tunneling, it is clear that quantum technology will be even more present in the coming years. The key idea, developed since the 1980s, is that quantum tunneling enables devices that turn superposition, entanglement, and interference (see Figure 1 for definitions) into tools we can engineer. In practice, this means we can run real algorithms on real chips, not only in simulations, and explore new ways to learn from high-dimensional data more efficiently.
Before we dive into the basics, it’s worth asking why we need quantum in our workflows. The question is: what are the limits of today’s methods that force us to reframe our approach and consider alternatives beyond the tools we already use?
Limitations of Moore’s law
Moore’s law, proposed in 1965, predicted that the number of transistors on a chip, and with it computing power, would roughly double every two years. This expectation drove decades of progress through steady transistor miniaturization, making computing cheaper and faster [2].
However, as engineers push transistor sizes to the atomic scale, they encounter daunting physical limitations: fitting more, smaller devices into the same area rapidly increases both heat generation and power density, making cooling and stability much harder to manage. At tiny scales, electrons leak or escape from their intended paths, causing power loss and making the chip behave unpredictably, which can lead to errors or reduced performance. Moreover, wires, memory, and input/output systems do not scale as efficiently as transistors, resulting in serious bottlenecks for overall system performance [2].
All these barriers make it clear that the exponential growth predicted by Moore’s law cannot continue indefinitely; relying on shrinkage alone is no longer viable. Instead, progress now depends on better algorithms, specialized hardware, and, where suitable, quantum approaches applied to selected, high-impact subproblems.
As data volumes continue to grow and computational demands escalate, deep learning and other modern AI methods are reaching practical limits in time, energy, and memory efficiency. Quantum computing offers a different route, one that processes information through superposition, entanglement, and interference, allowing certain computations to scale more efficiently. The goal of quantum machine learning (QML) is to use qubits instead of bits to represent and transform data, potentially handling high-dimensional or uncertain problems more effectively than classical systems. Although today’s hardware is still developing, the conceptual foundations of QML already point toward a future where both quantum and classical resources work together to overcome computational bottlenecks.
Security Paradigm
Traditional encryption methods rely on complex mathematical problems that classical computers find hard to solve. However, quantum computers threaten to break many of these systems rapidly by exploiting quantum algorithms like Shor’s algorithm (one of the examples of quantum computational advantage) [3]. Many quantum-based security innovations are increasingly moving from theory into practical use in industries requiring the highest data protection standards.
A concrete example of this risk is known as “harvest now, decrypt later”: attackers capture and store encrypted data today, even if they cannot decrypt it yet. Once large-scale quantum computers become available, they could use quantum algorithms to retroactively decrypt this information, exposing sensitive data such as health records, financial transactions, or classified communications [4].
To address this challenge, the Google Chrome browser already includes quantum resistance: since version 116, Chrome has implemented a hybrid key-agreement algorithm (X25519Kyber768) that combines traditional elliptic-curve cryptography with Kyber, one of the algorithms standardized by NIST for quantum-resistant encryption [5]. This approach protects data against both classical and future quantum attacks.
Mathematical complexity
Using quantum principles can help explore vast solution spaces more efficiently than traditional methods. This makes quantum approaches particularly promising for optimization, machine learning, and simulation problems with high computational complexity (Big-O, or how effort scales with problem size). For example, factoring large integers is hard because of how the computational effort scales with the size of the number, not because of memory or brute-force hardware limits. For very large numbers, like those used in cryptographic systems, factorization is practically impossible on classical computers.
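As a rough illustration (a toy sketch in plain Python, not a benchmark), trial division must test candidate divisors up to the square root of N; that loop is instant for small numbers but would need an astronomical number of iterations for the 600-plus-digit moduli used in cryptography:

```python
# Toy illustration of factoring cost: trial division tests divisors up to sqrt(N).
def trial_division(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)       # whatever remains is prime
    return factors

print(trial_division(1_234_567))   # fast: [127, 9721]
# A 2048-bit RSA modulus has ~617 digits; sqrt(N) is ~10^308 candidate divisors,
# which is why classical factoring of such numbers is considered infeasible.
```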
Understanding the basics
To understand more about these topics, it is necessary to grasp the basic rules of quantum mechanics and how they differ from the classical view that we use today.
In classical computing, data is represented as bits, which can have a value of 0 or 1. These bits are combined and manipulated using logical operations or logic gates (AND, OR, NOT, XOR, XNOR) to perform calculations and solve problems. However, the amount of information a classical computer can store and process is limited by the number of bits it has, which can represent only a finite number of possible combinations of 0s and 1s. Therefore, certain calculations like factoring large numbers are very difficult for conventional computers to perform.
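For contrast with the qubit discussion below, here is how those classical gates look as ordinary bitwise operations in Python; a bit always holds a definite 0 or 1:

```python
# Classical bits and logic gates: each bit is definitely 0 or 1,
# and gates combine them deterministically.
a, b = 1, 0
print(a & b)        # AND  -> 0
print(a | b)        # OR   -> 1
print(a ^ b)        # XOR  -> 1
print(1 - a)        # NOT  -> 0
print(1 - (a ^ b))  # XNOR -> 0
```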
On the other hand, in quantum computing, data is represented as quantum bits, or qubits, which can hold the values 0 and 1 simultaneously thanks to superposition, and which can be correlated and steered through entanglement and interference. These principles allow quantum systems to process information in parallel and solve some problems much faster. A qubit in such a superposition is popularly known as a ‘cat state’, after Schrödinger’s cat.
This idea can be explained with Schrödinger’s cat experiment (Figure 1), in which a radioactive atom is wired to a closed mechanism that, if triggered, could end the life of a cat trapped inside 🙀🙀🙀. The atom is in a superposition of states that either activates or does not activate the mechanism, and it is entangled with the state of the cat: until the atom’s state is measured, the cat remains in a superposition of being both alive 😺 and dead ☠️ simultaneously. The cat’s state in Schrödinger’s experiment is not a real state of matter but a thought experiment used to explain the strange behavior of quantum systems.
A similar idea can be illustrated with a quantum coin (a better example that protects the cats 🐱). A normal coin always has one face up, either heads or tails, but a quantum coin can exist in a superposition of both possibilities at once until it is observed. When someone checks, the superposition collapses into a definite outcome. The coin can also become entangled with the device or system that measures it, meaning that knowing one immediately determines the other (regardless of initial classical conditions). Interference further modifies the probabilities: sometimes the waves add together, making one outcome more likely, while in other cases they cancel out, making it less likely. Even the actions of starting, flipping, and landing can involve quantum phases and create superpositions or entanglement.
Building on these ideas, an n-qubit register lives in a space with 2^n possible states, meaning it can represent complex patterns of quantum amplitudes. However, this does not mean that n qubits store 2^n classical bits or that all answers can be read at once. When the system is measured, the state collapses, and only limited classical information is obtained, roughly n bits per run. The power of quantum computation lies in designing algorithms that prepare and manipulate superpositions and phases so that interference makes the correct outcomes more likely and the incorrect ones less likely. Superposition and entanglement are the essential resources, but true quantum advantage depends on how these effects are used within a specific algorithm or problem.
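A minimal sketch of these ideas, using PennyLane’s classical simulator (an assumption of this demo; any gate-model SDK would do): two qubits span 2^2 = 4 amplitudes, a Hadamard creates superposition, a CNOT creates entanglement, and measurement only ever returns correlated classical bits:

```python
# A minimal sketch on a classical simulator (pip install pennylane).
# Two qubits span 2^2 = 4 amplitudes, yet each measurement run
# collapses the state and yields only 2 classical bits.
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def bell_state():
    qml.Hadamard(wires=0)   # qubit 0 -> (|0> + |1>)/sqrt(2): superposition
    qml.CNOT(wires=[0, 1])  # entangle qubit 1 with qubit 0
    return qml.probs(wires=[0, 1])

print(bell_state())  # [0.5, 0. , 0. , 0.5]: only |00> and |11> survive
```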
Different approaches
There are several kinds of approaches to quantum computing, which differ in the qubits they use, how they control them, the conditions they need, and the problems they’re good at. Figure 2 summarizes the main options, and as the field matures, more advanced techniques continue to emerge.
For both gate-model quantum computers and quantum annealers, classical simulation becomes impractical as quantum systems grow large (many qubits, or complex problems like factoring large numbers) because resource demands grow exponentially. Real quantum hardware is needed to observe true quantum speedup at scale. However, classical computers still play a crucial role today: they let researchers and practitioners simulate small quantum circuits and experiment with quantum-inspired algorithms that mimic quantum behavior without requiring quantum hardware.
When you do need real quantum devices, access is mostly via cloud platforms (IBM Quantum, Rigetti, Azure Quantum, D-Wave). Libraries like Qiskit or PennyLane let you prototype on classical simulators and, with credentials, submit jobs to hardware. Simulation is essential for development but doesn’t perfectly capture physical limits (noise, connectivity, queueing, device size).
Gate models
On gate-model hardware, the first step is usually building a circuit that encodes your data into the quantum state you need to solve the problem. The information is encoded into qubits, which are manipulated by quantum gates. These gates are analogous to the logic operations of classical computing, but they act on qubits and exploit quantum properties like superposition, entanglement, and interference. There are many ways to encode data into a circuit, and error rates can differ substantially depending on the encoding; that’s why error correction techniques are used to fix mistakes and make calculations more accurate. After all the operations are done, the results are measured and decoded back into classical information we can interpret.
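As one common encoding choice (angle encoding is just one of many; the feature values below are made up for illustration), each scaled feature becomes a single-qubit rotation:

```python
# A minimal data-encoding sketch with PennyLane (assumed installed).
# Angle encoding: one rotation gate per feature.
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def encode(features):
    qml.AngleEmbedding(features, wires=range(3))  # one RX rotation per feature
    return qml.state()                            # full 2^3 amplitude vector

x = np.array([0.1, 0.5, 0.9])  # hypothetical, already-scaled features
print(encode(x))               # 8 complex amplitudes
```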
In QML (quantum machine learning), quantum kernels and variational algorithms are commonly used to encode data and build models. Their approach differs somewhat from classical machine learning techniques:
- Variational algorithms (VQAs): define a parameterized circuit and use classical optimization to tune parameters against a loss (e.g., for classification). Examples include Quantum Neural Networks (QNNs), Variational Quantum Eigensolver (VQE), and Quantum Approximate Optimization Algorithm (QAOA).
- Quantum-kernel methods: build quantum feature maps and measure similarities to feed classical classifiers or clusterers. Examples include Quantum SVM (QSVM), Quantum Kernel Estimation (QKE), and Quantum k-means. Minimal sketches of both families follow this list.
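A minimal VQA sketch, with a hypothetical one-sample “dataset” purely for illustration: a parameterized circuit produces a prediction, and a classical optimizer tunes the weights against a squared-error loss:

```python
# A minimal variational-circuit (VQA) sketch; data, label, and layer
# counts are hypothetical placeholders.
import pennylane as qml
from pennylane import numpy as np  # autograd-enabled NumPy

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def model(weights, x):
    qml.AngleEmbedding(x, wires=range(2))                  # encode the sample
    qml.StronglyEntanglingLayers(weights, wires=range(2))  # trainable layers
    return qml.expval(qml.PauliZ(0))                       # prediction in [-1, 1]

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=2)
weights = np.random.random(size=shape, requires_grad=True)

x, y = np.array([0.5, 0.1]), 1.0                 # toy sample and label
opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(30):                              # classical training loop
    weights = opt.step(lambda w: (model(w, x) - y) ** 2, weights)
print(model(weights, x))                         # moves toward +1
```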
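And a minimal quantum-kernel sketch: encoding x1 and then un-encoding x2 means the probability of measuring the all-zeros state equals the squared overlap of the two feature states, which is the kernel value. A Gram matrix of such values can feed scikit-learn models via kernel="precomputed", as in [8]:

```python
# A minimal quantum-kernel sketch: kernel = overlap of two encoded states,
# estimated as the probability of returning to |00>.
import numpy as np
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def kernel_circuit(x1, x2):
    qml.AngleEmbedding(x1, wires=range(n_qubits))                # U(x1)
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits))   # U(x2)^dagger
    return qml.probs(wires=range(n_qubits))

def quantum_kernel(x1, x2):
    return kernel_circuit(x1, x2)[0]  # P(|00>) = |<phi(x2)|phi(x1)>|^2

print(quantum_kernel(np.array([0.3, 0.7]), np.array([0.4, 0.6])))  # near 1
```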
QML algorithms, such as kernel-based methods and variational algorithms, have shown promising results in areas like optimization and image recognition and have the potential to revolutionize various industries, from healthcare to finance and cybersecurity. However, many challenges remain, such as the need for robust error correction techniques, the high cost of quantum hardware, and the shortage of quantum experts.
Quantum annealing
Many real-world problems are combinatorial, with possibilities growing factorially (e.g., 10!, 20!, etc.), making exhaustive search impractical. These problems often map naturally to graphs and can be formulated as Quadratic Unconstrained Binary Optimization (QUBO) or Ising models. Quantum annealers load these problem formulations and search for low-energy (optimal or near-optimal) states, providing an alternative heuristic for optimization tasks with graph structures. When compared fairly with strong classical baselines under the same time constraints, quantum annealing can show competitive performance.
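A minimal QUBO sketch (assuming D-Wave’s dwave-neal package for classical simulated annealing; the toy objective is invented for illustration): the energy -x0 - x1 + 2*x0*x1 is minimized by picking exactly one of the two variables:

```python
# A minimal QUBO sketch (pip install dwave-neal).
# Toy problem: pick exactly one of two binary variables
# (picking both or neither costs more energy).
import neal

# minimize  -x0 - x1 + 2*x0*x1  -> optima are (1,0) and (0,1), energy -1
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}

sampler = neal.SimulatedAnnealingSampler()
result = sampler.sample_qubo(Q, num_reads=100)
print(result.first.sample, result.first.energy)  # e.g. {0: 1, 1: 0} -1.0
```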
In QML, quantum annealing can be applied to optimize parameters in machine learning models, discover patterns, or perform clustering by finding minimum energy configurations representing solutions. Although quantum annealers are hardware-specific and specialized, their practical application to machine learning and optimization makes them an important complementary approach to gate-model QML.
Quantum annealers serve as heuristic solvers. Access is generally via cloud services (such as D-Wave), and their noise and hardware limitations distinguish them from gate-model quantum computers.
Quantum-inspired
These are classical algorithms that mimic ideas from quantum computing (e.g., annealing-style search, tensor methods). They run on CPUs/GPUs (no quantum hardware required) and make strong baselines. You can use standard Python stacks or specialized packages to try them at scale.
Quantum-inspired algorithms provide a practical bridge by leveraging quantum principles within classical computing, offering potential speedups for certain problem classes without needing expensive quantum hardware. However, they do not provide the full advantages of true quantum computation, and their performance gains depend heavily on the problem and implementation details.
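To show how small the gap to a classical baseline can be, here is the same toy QUBO solved with plain simulated annealing in pure NumPy, no quantum SDKs involved (the cooling schedule and step count are arbitrary choices):

```python
# Quantum-inspired baseline: plain simulated annealing on a QUBO, pure NumPy.
import numpy as np

def qubo_energy(x, Q):
    return sum(v * x[i] * x[j] for (i, j), v in Q.items())

def simulated_annealing(Q, n, steps=2000, t0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n)                 # random initial bitstring
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9    # linear cooling schedule
        x_new = x.copy()
        x_new[rng.integers(n)] ^= 1           # flip one random bit
        dE = qubo_energy(x_new, Q) - qubo_energy(x, Q)
        if dE <= 0 or rng.random() < np.exp(-dE / t):
            x = x_new                         # accept downhill, sometimes uphill
    return x

Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}       # same toy QUBO as above
x = simulated_annealing(Q, n=2)
print(x, qubo_energy(x, Q))                   # e.g. [1 0] -1
```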
Example
Today’s quantum advantage is still embryonic and highly problem-dependent. The biggest gains are expected on high-complexity problems with structure that quantum algorithms can exploit. The toy example presented in this section is purely illustrative and highlights differences between approaches; real advantage is more likely to appear on problems that are currently hard or intractable for classical computers.
In this example, we use a simulated tabular dataset in which most points are normal and a small fraction are anomalies (Figure 3). Normality corresponds to the dense cluster around the origin, while anomalies form a few small clusters far away.
Starting from the same tabular dataset, the workflow branches into three paths: (1) Classical ML (baseline), (2) Gate-based Quantum ML and (3) Quantum Annealing (QUBO). Image by the author.
The diagram in Figure 4 illustrates a unified workflow for anomaly detection using three distinct approaches on the same tabular dataset: (1) classical machine learning (One-Class SVM) [7], (2) gate-based quantum machine learning (quantum kernel methods) [8], and (3) quantum annealing-inspired optimization. First, the dataset is cleaned, scaled, and split into training, validation, and test sets. For the classical path, polynomial feature engineering is applied before training a One-Class SVM and evaluating predictions. The gate-based quantum ML option encodes features using a quantum map and estimates quantum kernels for training and inference, followed by decoding and evaluation. The annealing route formulates the task as a QUBO, solves it with simulated annealing, decodes results, and evaluates performance. Each approach produces its own anomaly predictions and evaluation metrics, providing complementary perspectives on the data and demonstrating how classical and quantum-inspired tools can be integrated into a single analysis pipeline running on a classical computer.
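A hedged sketch of the classical branch only (the synthetic data, polynomial features, and hyperparameters below are assumptions standing in for the actual pipeline [7]):

```python
# A minimal sketch of the classical path (scikit-learn assumed installed).
import numpy as np
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, size=(200, 2))              # dense "normal" cluster
X_test = np.vstack([rng.normal(0, 1, size=(22, 2)),    # 22 normal points
                    rng.normal(6, 0.5, size=(4, 2))])  # 4 far-away anomalies

scaler = StandardScaler().fit(X_train)
poly = PolynomialFeatures(degree=2, include_bias=False)
Xtr = poly.fit_transform(scaler.transform(X_train))    # feature engineering
Xte = poly.transform(scaler.transform(X_test))

ocsvm = OneClassSVM(kernel="rbf", nu=0.1).fit(Xtr)     # train on normal data
pred = ocsvm.predict(Xte)                              # +1 normal, -1 anomaly
print((pred == -1).sum(), "points flagged as anomalies")
```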
Visualization of results on test dataset using (A) a Classical One-Class SVM, (B) a Quantum Kernel OCSVM (Gate-model QML simulation with PennyLane), and (C) a QUBO-based Simulated Annealing approach (Quantum-Inspired). Each plot shows normal points (blue) and predicted anomalies (orange). Image by the author.
On this tiny, imbalanced test set (22 normal, 4 anomalous points), the three approaches behaved differently. The quantum-kernel OCSVM achieved the best balance: higher overall accuracy (~0.77) by catching most anomalies (recall 0.75) while keeping false alarms lower than the others. The classical OCSVM (RBF) and the annealer-style QUBO both reached recall 1.0 (they found all 4 anomalies) but over-flagged normals, so their accuracies fell (≈0.58 and 0.65).
The objective here is demonstration, not performance: this example shows how to use the approaches, and the results are not the focus. It also illustrates that the feature map or representation can matter more than the classifier.
Any claim of quantum advantage ultimately depends on scaling: problem size and structure, circuit depth and width, entanglement in the feature map, and the ability to run on real quantum hardware to exploit interference rather than merely simulate it. We are not claiming quantum advantage here; this is a simple problem that classical computers can solve, even when using quantum-inspired ideas.
When to Go Quantum
It makes sense to start on simulators and only move to real quantum hardware if there are clear signals of benefit. Simulators are fast, cheap, and reproducible: you can prototype quantum-style methods (e.g., quantum kernels, QUBOs) alongside strong classical baselines under the same time/cost budget. This lets you tune feature maps, hyperparameters, and problem encodings, and see whether any approach shows better accuracy, time-to-good-solution, robustness, or scaling trends.
You then use hardware when it’s justified: for example, when the simulator suggests promising scaling, when the problem structure matches the device (e.g., good QUBO embeddings or shallow gate circuits), or when stakeholders need hardware evidence. On hardware you measure quality–time–cost with noise and connectivity constraints, apply error-mitigation, and compare fairly against tuned classical methods. In short: simulate first, then go quantum to validate real-world performance; adopt quantum only if the hardware results and curves truly warrant it.
As noted earlier, today’s quantum advantage is still embryonic and highly problem-dependent. The real challenge and opportunity is to turn promising simulations into hardware-verified gains on problems that remain difficult for classical computing, showing clear improvements in quality, time, and cost as problem size grows.
Quantum machine learning has the potential to go beyond classical methods in model compression and scalability, especially for data-rich fields like cybersecurity. The challenge is handling enormous datasets, with millions of normal interactions and very few attacks. Quantum models can compress complex patterns into compact quantum representations using superposition and entanglement, which allows for more efficient anomaly detection even in imbalanced data. Hybrid quantum-classical and federated quantum learning methods aim to improve scalability and privacy, making real-time intrusion detection more feasible. Despite current hardware limitations, research indicates quantum compression could enable future models to manage larger, complex cybersecurity data streams more effectively, paving the way for powerful practical defenses.
References
[1] Nobel Prize Outreach. (2025). The Nobel Prize in Physics 2025: Summary. NobelPrize.org. Accessed October 19, 2025. https://www.nobelprize.org/prizes/physics/2025/summary/
[2] DataCamp. (n.d.). Moore’s Law: What Is It, and Is It Dead? Accessed October 2, 2025. https://www.datacamp.com/tutorial/moores-law
[3] Classiq. (2022, July 19). Quantum Cryptography — Shor’s Algorithm Explained. Classiq Insights. https://www.classiq.io/insights/shors-algorithm-explained
[4] Gartner. (2024, March 14). Begin Transitioning to Post-Quantum Cryptography Now. Accessed October 10, 2025. https://www.gartner.com/en/articles/post-quantum-cryptography
[5] The Quantum Insider. (2023, August 14). Google Advances Quantum-Resistant Cryptography Efforts in Chrome Browser. Accessed October 10, 2025. https://thequantuminsider.com/2023/08/14/google-advances-quantum-resistant-cryptography-efforts-in-chrome-browser/
[6] BeakerHalfFull. (n.d.). Schrodinger’s Cat Coin (Antique Silver). Etsy. Accessed October 16, 2025. https://www.etsy.com/listing/1204776736/schrodingers-cat-coin-antique-silver
[7] Scikit-learn developers. (n.d.). One-Class SVM with Non-Linear Kernel (RBF). scikit-learn documentation. Accessed October 21, 2025. https://scikit-learn.org/stable/auto_examples/svm/plot_oneclass.html
[8] Schuld, M. (2021, February 2; updated September 22, 2025). Kernel-Based Training of Quantum Models with scikit-learn. PennyLane Demos. Accessed October 21, 2025. https://pennylane.ai/qml/demos/tutorial_kernel_based_training
[9] Augey, A. (2019, June 12). Quantum AI: Ending Impotence! Saagie Blog. https://www.saagie.com/en/blog/quantum-ai-ending-impotence/