Quantum vs Classical

Brian Lenahan
3 min read · Jan 12, 2022


Comparing performance across two worlds of computing

Quantum vs Classical (Source: Pixabay.com)

Attending a recent IEEE Quantum Computing Innovation Summit, I was enthralled by a presentation from Peter Clark, Head of Computational Science & Engineering, Johnson & Johnson, Janssen Therapeutics Discovery. Now, I have the good fortune of attending dozens of webinars, meetings and conferences centered on quantum technologies each year. So what was so engaging about this particular presentation? Real results.

So often in #quantum, we talk about what could, would or might happen if #quantumtechnologies were applied to a practical business problem: complex challenges with large datasets that would otherwise take hundreds of years to analyse.

When Clark presented the slide titled “Solver Performance: Balancing Compute Time & Fitness of the Solution” in relation to amino acid peptide analysis, I immediately took notice, and I give full credit to Peter and Janssen here. Consider the slide below.

Image source: Janssen

Business and IT leaders making decisions about investing in #quantumcomputing need performance criteria on which to base the choice between running an analysis or operation on a #classicalcomputer and running it on a #quantumcomputer. Clark’s #lifesciences presentation illustrates such criteria, including compute time, energy consumed, fitness of the solution, and number of candidates, across four different approaches (two quantum and two classical).

Let’s explore each criterion, then the whole. In terms of compute time, there was wide variation in the four-amino-acid peptide (4AAP) scenario, from 1 second for the 4MER-on-Braket approach to over 33,000 seconds using FICO Xpress Optimization. At 14AAP, the Tabu solver (classical) performed best at 1,000 seconds, while the FICO optimization could not complete, running for over a week. The QBSolv approach took 3.5 hours (12,600 seconds).
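To make that concrete, here is a minimal Python sketch that tabulates the compute times quoted above and picks the fastest completed solver per scenario. The figures are transcribed from the presentation as reported here; the dictionary labels are my own shorthand, not Janssen's.

```python
# Compute times (seconds) as read from Clark's slide; None marks a
# run that did not complete. Labels are this author's shorthand.
compute_times_s = {
    "4AAP": {
        "4MER on Braket (quantum)": 1,
        "FICO Xpress (classical)": 33_000,
    },
    "14AAP": {
        "Tabu solver (classical)": 1_000,
        "QBSolv (quantum-hybrid)": 12_600,   # 3.5 hours
        "FICO Xpress (classical)": None,     # ran for over a week, no result
    },
}

for scenario, results in compute_times_s.items():
    # Consider only solvers that actually finished.
    completed = {name: t for name, t in results.items() if t is not None}
    best = min(completed, key=completed.get)
    print(f"{scenario}: fastest completed solver = {best} "
          f"({completed[best]:,.0f} s)")
```

Run as-is, this prints the quantum approach as the winner at 4AAP and the classical Tabu solver as the winner at 14AAP, which is exactly the crossover the slide highlights.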

In terms of energy consumption, there was no difference between solutions at 4AAP, yet there was a noticeable difference between the solvers that completed the 14AAP exercise (both landing in the negative range).

Finally, the number of candidates generated by the analysis again varied widely in the 4AAP scenario, from 1 to 21 candidates, but was identical in the 14AAP version at 20 candidates.
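Pulling the three criteria together, the sketch below shows one way to structure a per-criterion comparison for the 14AAP runs. The compute times are the figures quoted above; the energy values are hypothetical placeholders (the slide showed only that both landed in the negative range), so treat this as a template rather than as Janssen's data.

```python
from dataclasses import dataclass

@dataclass
class SolverRun:
    name: str
    compute_time_s: float   # lower is better
    energy: float           # solution energy; more negative is better
    n_candidates: int       # candidates generated

runs = [
    # Energy values below are ASSUMED for illustration only.
    SolverRun("Tabu (classical)", 1_000, -10.0, 20),
    SolverRun("QBSolv (hybrid)", 12_600, -9.5, 20),
]

# Best solver on each criterion; ties resolve to the first entry,
# as happens here for candidates (both produced 20).
print("fastest:        ", min(runs, key=lambda r: r.compute_time_s).name)
print("lowest energy:  ", min(runs, key=lambda r: r.energy).name)
print("most candidates:", max(runs, key=lambda r: r.n_candidates).name)
```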

Stepping back, an IT leader looking at these results begins to understand the performance parameters they must consider when making investments in either classical or quantum technologies. No one solution stood out in all cases, so the conclusion is that one needs to compare performance at various levels, add in the cost element of run time, and draw conclusions from real data.
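One way to "add in the cost element of run time" is to convert wall-clock time into a per-run dollar cost and filter on a minimum acceptable solution fitness. A hedged sketch follows; the hourly rates, fitness scores and threshold are hypothetical, chosen only to show the shape of the calculation, not actual vendor pricing.

```python
def run_cost(time_s: float, rate_per_hour: float) -> float:
    """Dollar cost of a single run from wall-clock time and an hourly rate."""
    return (time_s / 3600.0) * rate_per_hour

candidates = [
    # (name, time in seconds, ASSUMED hourly rate in $, ASSUMED fitness 0..1)
    ("Tabu (classical)", 1_000, 5.00, 0.90),
    ("QBSolv (hybrid)", 12_600, 20.00, 0.92),
]

MIN_FITNESS = 0.85  # assumed acceptance threshold

# Keep only runs that meet the fitness bar, then rank by cost per run.
viable = [(name, run_cost(t, rate), fit)
          for name, t, rate, fit in candidates if fit >= MIN_FITNESS]
for name, cost, fit in sorted(viable, key=lambda x: x[1]):
    print(f"{name}: ~${cost:.2f} per run at fitness {fit:.2f}")
```

With these illustrative inputs the classical run costs pennies and the hybrid run tens of dollars, which is why the post argues that run-time cost belongs alongside raw performance in the decision.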

It’s an exciting time to be in quantum, at the outset of a nascent technology, and business and IT leaders will need transparency in computing performance to enable cost and time-effective decisions as the technology matures.

Copyright 2022 Aquitaine Innovation Advisors

Brian Lenahan is Founder and Chair of the Quantum Strategy Institute and author of Quantum Boost: Using Quantum Computing to Supercharge Your Business. He regularly speaks at quantum conferences and meetups, offering insights into how organizations can accelerate the adoption of quantum technologies. He is a former executive at a Top 10 North American bank and a former university instructor, and he mentors innovative companies in the Halton and Hamilton areas. Brian’s training in quantum computing comes from CERN/University of Oviedo and Technische Universiteit Delft, and he writes extensively on artificial intelligence and quantum computing.

Aquitaine Innovation Advisors: www.aquitaineinnovationadvisors.com

LinkedIn: https://www.linkedin.com/in/brian-lenahan-innovation/
