By the 1940s and 50s, computers were already being used for practical and useful work, and calculating machines had a _long_ history of being useful, and it didn't take that long between the _idea_ of a calculating machine and having something that people paid for and used because it had practical value.
They've been plugging along at quantum computers for decades now and have not produced a single useful machine (although a lot of the math and science behind it has been useful for theoretical physics).
I'm not the OP, but when you're of a certain age, you don't need citations for that. Memory serves. And my family was saying those sorts of things and teasing me about being into computers as late as the 1970's.
I can attest to the fact that people who didn't understand computers at all were questioning the value of spending time on them long after the 1970s. The issue is that there are people today who do understand quantum computing that are questioning their value and that's not a great sign.
Quantum annealers have been working on real world problems for a while now - assuming they can be expressed as combinatorial optimization problems of course.
But there is no scalable computational advantage with quantum annealers. They are not what most people in the field would call a "(scalable/digital) quantum computer".
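To make "expressed as combinatorial optimization problems" concrete: an annealer is handed a QUBO, i.e. minimize x^T Q x over binary x, and samples low-energy assignments. Below is a minimal sketch (illustration only; the tiny instance is invented) that encodes max-cut on a triangle as a QUBO and solves it by brute force, which is the kind of problem an annealer is meant to handle at scale.

```python
import itertools

# A quantum annealer is handed a QUBO: minimize E(x) = sum_{i<=j} Q[i][j]*x_i*x_j
# over binary x. Here: max-cut on a triangle graph, written as a QUBO and
# solved by brute force, just to show the problem shape an annealer targets.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

Q = [[0] * n for _ in range(n)]
for i, j in edges:          # cut(x) = sum over edges of x_i + x_j - 2*x_i*x_j,
    Q[i][i] -= 1            # so minimizing -cut(x) maximizes the cut
    Q[j][j] -= 1
    Q[i][j] += 2

def energy(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

best = min(itertools.product([0, 1], repeat=n), key=energy)
print(best, -energy(best))  # one maximum cut of the triangle, size 2
```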
This is such a beautiful theoretical idea (a type of "natural" error correction which protects the qubits without having to deal with the exorbitant overhead of error correcting codes). It is very disheartening and discouraging and just plain exhausting that there has been so much "data manipulation" in this subfield (see all the other retracted papers from the last 5 years mentioned in the article). I can only imagine how hard this must be on the junior scientists on the team who have been swept into it without much control.
Hopefully people are keeping lists of the PIs on these retracted papers and keeping that in mind for future grants, hiring, etc. I know almost nobody is, but one can hope.
Academic fraud ranging from plagiarism to outright faking data should, more often than not, make it basically impossible for you to get any academic job whatsoever, in your field or others.
This chip is an extreme example, but it represents potentially millions of dollars of lost productivity: hundreds or even thousands of people spending months or years on something based on a fabrication.
The person or people directly responsible for this should never work again.
Totally agree! As with any behavior that is difficult to detect and often goes unnoticed, the punishment should be large enough that the expected value of fraud is clearly net negative for anyone who might feel tempted to "tweak some numbers".
In case anybody else also isn't familiar with "PI" as an abbreviation in this context:
> In many countries, the term principal investigator (PI) refers to the holder of an independent grant and the lead researcher for the grant project, usually in the sciences, such as a laboratory study or a clinical trial.
> Academic fraud ranging from plagiarism to outright faking data should, more often than not, make it basically impossible for you to get any academic job whatsoever, in your field or others.
Sadly, the system often rewards fake or, especially, exaggerated and misrepresented data and conclusions. I think a significant proportion of articles exaggerate findings and deliberately cherry-pick data.
It's a market of lemons. Proving misrepresentation is really hard, and the rewards for getting away with it are immense. Publishing an article in Nature, Science, or Cell is a career-defining moment.
Yeah I agree it's not an easy problem to solve by any stretch. I'm not a professor or scientist so I won't pretend to understand the intricacies of journal publication and that sort of thing.
But I do wonder when someone's PhD thesis gets published and it turns out they plagiarized large parts of it, why isn't their degree revoked? When someone is a professor at a prestigious institution and they fabricate data, why are they still teaching the following year?
Serious universities do often revoke doctoral degrees if plagiarism is proven. I've seen Oxford University go as far as demanding that someone issue a correction to a journal article to cite prior work, because they were making claims of novelty that were not true.
> When someone is a professor at a prestigious institution and they fabricate data, why are they still teaching the following year?
Internal politics. Committees judging potential misconduct are not independent. If you are sufficiently high up in the ladder, you can get away with many things. Sweden recently created a Swedish National Board for Assessment of Research Misconduct (Npof) to address this problem. I think this is a step in the right direction.
But, ultimately, I think academic fraud should be judged in court. However, e.g. Leonid Schneider (forbetterscience.com) has been taken to court several times for reporting fraud, including fraud that led to patient death, and some judges didn't seem to care much about data fabrication / misrepresentation.
>Academic fraud ranging from plagiarism to outright faking data should, more often than not, make it basically impossible for you to get any academic job whatsoever, in your field or others.
That might actually be a perverse incentive. If you've already nuked your career with some fraud, you can't make it worse with extra fraud... so why ever stop? People inclined to do this sort of thing, when faced with that deterrent, just double down and commit even more fraud; they figure the best they can hope for is to do it so much and so perfectly that they're never discovered.
The trouble is that the system for science worked well when there were only a tiny number of scientists, but now we're a planet of 8 billion where people tell their children they have to go to college and get a STEM degree. Hell, you can only become a scientist by producing new research, even if there's not much left to research in your field. And the only way to maintain your position as a scientist is to "publish or perish". We have finite avenues of research with an ever-growing population of scientists; bullshit is inevitable.
My hunch is that these double down types won't be dissuaded by much of anything. I think fundamentally this kind of person has a risk taking personality and often feels they will get away with it.
Even if "bullshit is inevitable" is true -- I don't think it is -- that doesn't mean we shouldn't punish people who make up data, who steal others' work, who steal grant money by using their fake data to justify future grants.
"Well there's lots of people now" is not really a great justification. You become a low trust society by allowing trust to deteriorate. That happens in part because you choose not to punish people who violate that trust in the first place.
>that doesn't mean we shouldn't punish people who make up data,
I am not wishy-washy on punishment. A part of me that I do not deny nor suppress wants punishment for those who do wrong.
But sometimes punishments are counter-productive. The easiest example is the death penalty for heinous, non-murder crimes. This incentivizes the rapist or child molester (or whatever) to kill the victim. You can't execute them twice, after all, so if they're already on the hook for a death penalty crime, murdering their victim also gets rid of a prime witness who could get them the death penalty by testifying, but without increasing the odds of the death penalty.
"Career death penalty" here is like that.
>"Well there's lots of people now" is not really a great justification.
It wasn't meant to be a justification. It was an explanation of the problem, and (in part, at least) an attempt to show that things need to change if we want the fraud to go away.
>You become a low trust society by allowing trust to deteriorate
We've been a low trust society for a long time now. People need to start thinking about how to accomplish the long, slow process of changing a low trust society to a high trust one.
In the case of academia, I’m fine with harsh punishment for people who fabricate data, even if it does incentivize them to be more brazen with their fabrications in the short term. Makes it easier to catch them!
The fact is, we don’t want these people in academia at all. You want researchers who are naturally inclined not to fabricate data, not people who only play by the rules because they think they’re otherwise going to get caught.
> We've been a low trust society for a long time now. People need to start thinking about how to accomplish the long, slow process of changing a low trust society to a high trust one.
The core problem is that most people define their self-worth by their employment, and no matter what, this is all going to crash hard due to automation. The generation currently in power is doing everything they can to deny and downplay what is about to happen, instead of helping our societies prepare.
We're all being thrown into the rat race, and we're told, both explicitly and through personal experience, that there is no alternative but to become top dog at all costs, because that will be the only chance to survive once automation truly hits home. The result is that those who feel they have failed the rat race and have no hope of catching up withdraw from the "societal contract" and just do whatever they want, at the expense of others if need be.
Sabine Hossenfelder has grown a little… controversial… lately. You should probably do some googling (or YouTube searching, in this case.) It's not entirely clear to me what's going on but some of her videos do raise serious question marks.
No, I'm intentionally not taking a position or alleging anything. I'm pointing out the existence of some controversy. It's up to you to decide whether you want to look into it, and if yes, what sources to prefer.
She has a tendency to be wrong on things outside her domain of expertise. It's the classic being an expert in one field and thinking you're an expert in all of them.
Please give specific examples. I keep seeing vague comments like this about her, but very little in the way of specifics. Without specifics, this is just ad hominem rumor mongering.
Extreme specifics: her comments on work out of MIT on Color Center Qubits was basically "finally an example of actual progress in quantum computing because of reason A, B, C". That statement was in the class of "not even wrong" -- it was just complete non sequitur. People actually in the fields she comments on frequently laugh at her uninformed nonsense. In this particular case, the people that did the study she praised were also among the ones laughing at her.
There was an email she claimed to have received many years ago from another academic, essentially saying "you're right that a lot of academic research is BS and just a jobs program for academics, but you shouldn't point that out because it's threatening a lot of people's livelihood." Some people are claiming she fabricated this alleged email; I haven't looked too much into it myself.
I end up playing father confessor often enough at work that I have had to launder things people have complained about.
When you are trying to make the right calls for a team, you need to know what the pushback is, but the bullies and masochists on the team don’t need to know who specifically brought forward a complaint as long as the leadership accept it as a valid concern.
So if everyone knows I had a private meeting with Mike yesterday it’s going to be pretty fucking obvious that I got this from Mike unless I fib a bit about the details.
Saying a conversation that happened during a visit happened over email sounds like the sort of thing I might lie about while not making up the conversation from whole cloth.
Not that Sabine is perfect. I’ve let the YouTube algorithm know I want to see less of her stuff. But just because there is no email doesn’t mean there was no conversation.
Looking at the paper, cherry picking 5 out of 21 devices is in itself not a deal breaker IMO, but it's certainly something they should have disclosed. I bet this happens all the time with these kinds of exotic devices that take almost a year to manufacture only for a single misplaced atom to ruin the whole measurement.
Averaging the positive and negative Vbias data and many other manipulations are hard to justify; this reeks of "desperate PhD student who needed to publish at all costs". Yet at the same time I wouldn't fully disqualify the findings, but I would make the conclusion a lot weaker: "there might be something here".
All in all, it's in Microsoft's interests that the data is not cooked. They can only ride on vaporware for so long. Sooner or later the truth will come out; and if Microsoft is burning a lot of cash to lie to everyone, the only loser will be Microsoft.
It wasn't just that by itself. There was a list of several undisclosed data tweaks and manipulations. None were particularly fraudulent or anything, but once you have them all included in the paper, as the former author was complaining, it seems more likely that they just manipulated the theory and data as needed to make them match. There's a big difference between predicting something and demonstrating it in experiment, versus showing your theory can be made to fit some data you have been given when you can pick the right adjustments and subset of data.
It's a physical device at the bleeding edge of capabilities. Defects are pretty much a guarantee, and getting a working sample is a numbers game. Is it really that strange to not get a 100% yield?
Having 5 working devices out of 21 is normal. The problem is that the other 16 weren't mentioned.
Well, you also need to account for what kind of deviation we're talking about among the 21. If they selected the 5 because they were the best, but the others showed results within, say, 0-5% of those 5, then sure, that is acceptable. But if we're talking about flipping a coin 21 times, seeing heads 16 times, and then choosing the 5 tails outcomes as the results, then I would say that's pretty unacceptable.
I do not think this is the right metaphor. Having 5 devices work out of 21 is actually a better yield than what TSMC would get with a brand new process. This is not just normal, this is expected. It is all the other allegations that make this be a very questionable case.
Like I said, a single misplaced atom is enough to wreak havoc in the behaviour of these things. That's not the problem, everyone knows there's a large gap between phenomena observed, and making it consistently manufacturable with high yield.
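To make the coin-flip point above concrete, here is a toy selection-bias simulation: 21 "devices" that produce pure noise, where reporting only the best 5 makes a null effect look like a signal. All numbers are invented; this says nothing about the actual measurements in the paper.

```python
import random
import statistics

# Selection-bias toy model: 21 "devices" whose readings are pure noise.
# Reporting only the best 5 makes a null effect look like a real signal.
random.seed(0)
device_means = sorted(
    statistics.mean(random.gauss(0, 1) for _ in range(50)) for _ in range(21)
)

print("mean over all 21 devices:", round(statistics.mean(device_means), 3))
print("mean over the top 5 only:", round(statistics.mean(device_means[-5:]), 3))
```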
"Fake it till you make it" was practically the motto of young scientists when I was matriculating. In fairness, I don't think they really meant "fake your research" but our entire incentive/competition based society encourages positive misrepresentation - you can't do science, good or bad, if you get competed out of the system entirely.
Guy Debord wrote a book about what he called "The Society of the Spectacle," wherein he argues that capitalism, mostly by virtue of transforming objects into cash at the point of exchange, (that is, a person can take the money and run) tends to cause all things to become evacuated, reduced as much as possible to their image, rather than their substance.
I believe even GK Chesterton understood this when he said that the purpose of a shovel is to dig, not to be sold, but that capitalism tends to see everything as something to be sold primarily and then as something to be used perhaps secondarily.
There is some truth in all this, I think, though obviously the actual physical requirements of living and doing things place some restrictions on how far we can transform things into their images.
As far as I can tell the only thing >25 years of development into quantum computing implementations has resulted in is the prodigious consumption of helium-3.
At least with fusion we've gotten some cool lasers, magnets, and test and measurement gear.
This kind of fundamental research though is absolutely worth it. For a fairly small amount of money (on the nation-state scale) you can literally change the world order. Same deal with fusion or other long-term research programs.
Quantum computers are still in a hype bubble right now, but having a "real" functional one (nothing right now is close, IMO) is as big a shift as nuclear energy or the transistor.
Even if we don't get a direct result, ancillary research products can still be useful, as you mentioned with fusion.
You are right about that (well, except all the progress in classical complexity theory and algorithms, cosmology, condensed matter physics, material science, and sensing, which stemmed from this domain).
But, for the little it is worth, it took much longer between Babbage conceiving of a classical computer and humanity developing the technology to make classical computers reliable. Babbage died before it was possible to build the classical computers he invented.
If you are going to use Babbage as the start of the clock, we must use the mechanical and electromechanical logarithmic and analytical engines created in the late 1800s/early 1900s as the stop.
We must also use 1980 as the year in which quantum computing was "invented".
As far as progress goes, in all of those fields there are naught but papers that say "quantum computing would be totally rad in these fields" or simulations that are slower than classical computers. (by, like, a lot)
There was a programmable electromechanical computer built in the late 1800s? Not just a simple calculator? Please share examples, this sounds awesome.
Yes, late 1980s is when I would say quantum computing was conceived.
I gave plenty of examples of positive outcomes thanks to quantum information science in my parenthetical. It is much more than the overhyped VC-funded vapor.
Based on the comments in this thread... Guys, Microsoft fuckery doesn't invalidate an entire field.
I think certain VCs are a little too optimistic about quantum computing timelines, but that doesn't mean it's not steadily progressing. I saw a comment talking about prime factorization from 2001 with some claim that people haven't been working on pure quantum computing since then?
It's really hard. It's still firmly academic, with the peculiar factor that much of it is industry backed. Google quantum was a UCSB research lab turned into a Google branch, while still being powered by grad students. You can begin to see how there's going to be some culture clash and unfortunate pressure to make claims and take research paths atypical of academia (not excusing any fraud, edit: also to be very clear, not accusing Google quantum of anything). It's a hard problem in a funky environment.
1) it's a really hard problem. Anything truly quantum is hard to deal with, especially if you require long coherence times. Consider the entire field of condensed matter (+ some amo). Many of the experiments to measure special quantum properties/confirm theories do so in a destructive manner - I'm not talking only about the quantum measurement problem, I'm talking about the probes themselves physically altering the system such that you can only get one or maybe a few good measurements before the sample is useless. In quantum computing, things need to be cold, isolated, yet still read/write accessible over many many cycles in order to be useful.
2) Given the difficulty, there have been many proposals for how to meet the "practical quantum computer" requirement. This ranges from giving up on a true general-purpose quantum computer (quantum annealers) to NV centers, neutral/ionic lattices, SQUID/Josephson-based, photonic, hybrid systems with mechanical resonators, and yeah, topological/anyon shit.
3) It's hard to predict what will actually work, so every approach is a gamble and different groups take different gambles. Some take bigger gambles than others. I'd say topological quantum was a pretty damn big gamble given how new the theory was.
4) Then you need to gradually build up the actual system + infrastructure, validating each subsystem, then subsystem interactions, and finally full systems. Think system preparation, system readout, system manipulation, isolation, gate design... Each piece of this could be multiple PhDs' and postdocs' worth of work across physics, ECE/CSE, ME, and CS. This is deep expertise and specialization.
5) Then, if one approach seems to work, however poorly*, you need to improve it and scale it. Scaling is not guaranteed. This will mean many more PhDs' worth of work trying to improve subsystems.
6) Again, this is really hard. Truly, purely quantum systems are very difficult to work with. Classical computing is built on transistors, which operate just fine at room temperature* (plenty of noise, no need for cold isolation) with macroscopic classical observables/manipulations like current and voltage. Yes, transistors work because of quantum effects, and more recent transistors use quantum effects (tunneling) more directly. For example, the "atomic" units of memory are still effectively macroscopic. The systems as a whole are very well described classically, with only practical engineering concerns related to putting things too close together, impurities, and heat dissipation. Not to say that any of that is easy at all, but there's no question of principle like "will this even work?"
* With a bunch of people on HN shitting on how poorly it works + a bunch of other people saying it's a full-blown quantum computer + probably higher-ups trying to make you say it is a real quantum computer, or something about quantum supremacy.
* Even in this classical regime, think how much effort went into redundancy and encoding/decoding schemes to deal with the very rare bit flips (a minimal classical sketch follows this comment). Now think of what's needed to build a functioning quantum computer at a similar scale.
No, I don't work in quantum computing, don't invest in it, have no stake in it.
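As a reference point for the footnote about classical redundancy above: the simplest classical scheme, a 3-bit repetition code with majority vote, already shows the trade of extra bits for reliability. Quantum error correction has to achieve something analogous without copying or directly reading the protected state, which is where the huge overhead comes from. A minimal sketch, illustrative only:

```python
import random

def encode(bit):
    return [bit, bit, bit]                      # 3-bit repetition code

def noisy_channel(bits, p_flip):
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits):
    return int(sum(bits) >= 2)                  # majority vote

random.seed(1)
p, trials = 0.01, 100_000
errors = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))
# An unencoded bit fails with probability ~p = 1%; the coded bit needs two
# simultaneous flips, so roughly 3*p^2 = 0.03%. Redundancy buys reliability.
print(errors / trials)
```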
Why couldn't single-user quantum computers be a viable path?
General computing is great, but we built the Large Hadron Collider to validate a few specific physics theories; couldn't we make do with a single-use quantum computer for important problems? Prove out some physics simulation, or break some military encryption or something?
Oh sure, I'm all for that in the mean time. But the people funding this are looking for big payoff. I want to be clear that this is not my field and I'm probably a bit behind on the latest, especially on the algorithmic side.
IIRC some of them have done proof of principle solutions to hydrogen atom ground state, for example. I haven't kept up but I'm guessing they've solved more complicated systems by now. I don't know if they've gone beyond ground states.
Taking this particular problem as an example... The challenge, in my mind, is that we already have pretty good classical approaches to the problem. Say the limit of current approaches is characterized by something like the number of electrons (I don't know the actual scaling factors), and that number is N_C(lassical). I think the complexity, and thus the required advances (difficulty), of building a special-purpose hypothetical quantum ground-state solver that can handle N_Q >> N_C is similar enough to the difficulty of scaling a more general quantum computer to a "problem" size of moderately smaller magnitude that it's probably hard to justify funding the special-purpose one over the generic one.
I could be way off, and it's very possible there's new algorithms to solve specific problems that I'm unaware of. Such algorithms with an accompanying special purpose quantum computer could make its construction investible in the sense that efficient solutions to problem under consideration are worth enough to offset the cost. Sorry that was very convoluted phrasing but I'm on my phone and I gtg.
For every framework that ever existed there's somewhere out there a computer running it and doing real work with it, without any updates since autumn 1988, while the google wannabe solo founders worry about the best crutch^H^H^H^H tooling, their CI/CDs and not scaling.
Intellectual property is the entire point of modern tech. It doesn't matter if it doesn't work. They want the IP and sit on it. That way, if someone else actually does the work, they can claim they own it.
I bet you quantum computing will go the way of Newtonian physics - wrong and only proven so by our ability to measure things.
It's as if Newton insisted that he was right despite the orbit of mercury being weird, and blaming his telescope.
Physics is just a description of reality. If your business depends on reality being the same as the model, then you're going to have an expensive time.
We still use Newton's equations for building rockets (and a lot of other things). A theory being replaced by a better one does not mean devices motivated by the obsoleted theory stop working...
We use general relativity for anything fast-moving, I think. Not sure, but pretty sure. GPS wouldn't work with Newton. But that's the point: Newton mostly works to within an error of measurement.
Uuh no? Quantum computing relies on some aspects of quantum physics, that at this point, have been thoroughly and extensively experimentally verified.
If there are objections to quantum computing, and I believe there are many, those are to be found in questioning the capability of current engineering to build a quantum computer, or the usefulness of one if it could be built.
As the old saying goes: the proof is in the pudding. And quantum computing has produced zero pudding. All hype, and zero pudding. When they actually do something useful (like the equivalent of general relativity and solving GPS), then we can see it as a useful theory.
"In early May, news reports gushed that a quantum computation device had for the first time outperformed classical computers, solving certain problems thousands of times faster. The media coverage sent ripples of excitement through the technology community. A full-on quantum computer, if ever built, would revolutionize large swathes of computer science, running many algorithms dramatically faster, including one that could crack most encryption protocols in use today.
"Over the following weeks, however, a vigorous controversy surfaced among quantum computation researchers. Experts argued over whether the device, created by D-Wave Systems, in Burnaby, British Columbia, really offers the claimed speedups, whether it works the way the company thinks it does, and even whether it is really harnessing the counterintuitive weirdness of quantum physics, which governs the world of elementary particles such as electrons and photons."
Also, how about thinking about it this way: either there's this magical property of the universe that promises to do these magical things which no one can even dream of achieving otherwise, or there's something wrong with physicists' interpretation of physics. Oh, and they haven't proved it yet but promise they will, very soon.
Unpopular opinion I'm sure, but I very much see quantum computing today as smoke and mirrors. I've tried to dive down that rabbit hole and I keep finding myself in a sea of theoretical mathematics that seems to fall into the "give me one miracle" category.
I expect this won't be the last time we hear that quantum research which has been foundational to a lot of work turns out to have been manipulated, or was designed poorly and never verified by other research labs.
In 2001, a pure quantum computer using Shor's algorithm correctly gave the prime factors of 15. In 2012 they managed to find the prime factors of 21. Since then, everyone has given up on the purely quantum approach, instead using lots of traditional CPU time to preprocess the input, somewhat defeating the purpose.
It's a shame. I was really looking forward to finding out what the prime factors of 34 are.
https://eprint.iacr.org/2015/1018.pdf
"As pointed out in [57], there has never been a genuine implementation of Shor’s algorithm. The only numbers ever to have been factored by that type of algorithm are 15 and 21, and those factorizations used a simplified version of Shor’s algorithm that requires one to know the factorization in advance..."
If you have a clue what these factors are, you can build an implementation of Shor's algorithm for them, I guess.
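For context on what the criticized demos shortcut: in Shor's algorithm the quantum hardware is only supposed to find the period r of a^x mod N; the factors then fall out classically via gcd. In the sketch below the period is found by classical brute force, i.e. the quantum step is faked, which is roughly the kind of shortcut the quote above complains about.

```python
from math import gcd

def find_period_classically(N, a):
    """Brute-force stand-in for the quantum order-finding step
    (the part a quantum computer is actually supposed to do)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factors_from_period(N, a, r):
    """Classical post-processing: turn the period r of a^x mod N into factors."""
    if r % 2 != 0 or pow(a, r // 2, N) == N - 1:
        return None                      # unlucky choice of a; pick another
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

N, a = 15, 7
r = find_period_classically(N, a)        # r = 4
print(factors_from_period(N, a, r))      # (3, 5)
```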
There was one for this year's SIGBOVIK [1]
[1] https://fixupx.com/CraigGidney/status/1907199729362186309
If I understand correctly, they didn't actually find the prime factors, they merely verified them, so it's unfortunately up to you to factor 34. Maybe some time in the future a quantum machine can verify whether you were right.
It's 2 and 17, I asked Claude.
AI can do that now? Looks like I have to upgrade all of my 5-bit SSL keys.
>5-bit SSL keys.
34 requires 6 bits, though
Hence the urgency.
Not sure if it was more wasteful of energy asking Claude or trying to solve it with Quantum.
We could ask Claude to generate the schematics for a quantum computer that can find the prime factors of 21. Then we get the best of both worlds.
A twelve year old could do it for 500 kcal of cookies.
I volunteer as a tribute
The twelve year old would probably be too full and tired after eating you to do any math.
Reminds me of the Groucho Marx line: "A child of five could understand this. Send someone to fetch a child of five."
At least quantum computers are cool.
The infamous 21 (which is half of 42) was my 1st thought when I heard 'unpopular' which is of course a very popular opinion.
What amazes me is how big tech wants to be in on this bandwagon. There is FOMO, and each company announces its own chip that does something, and nobody knows what. The risk of inaction is bigger than the risk of failure.
Meanwhile, a networking company wants to "network" these chips. What does that even mean? And a GPU company produces a library for computing with quantum.
Smoke-and-mirrors can carry on for a long time, and fool the best of them. Isaac Newton was in on the alchemist bandwagon.
There are exactly 2 reasons we might want quantum networks.
1. 100% secure communication channels (even better we can detect any attempt at eavesdropping and whatever information is captured will be useless to the eavesdropper)
2. Building larger quantum computers. A high fidelity quantum network would allow you to compute simultaneously with multiple quantum chips by interfacing them.
The thing that makes quantum networking different from regular networking is that you have to be very careful to not disturb the state of the photons you are sending down the fiber optics.
I'm currently doing my PhD building quantum networking devices, so I'm a bit biased, but I think it's pretty cool :).
Now, does it matter? I'm not sure. Reason 1 isn't really that useful because encryption is already very secure. However, if quantum computers start to scale up and some encryption methods get obsoleted, this could be nice. Also, having encryption that is provably secure would be nice regardless.
Reason 2 at the moment seems like the only path to building large scale quantum computing. Think a datacenter with many networked quantum chips.
> 100% secure communication channels (even better we can detect any attempt at eavesdropping and whatever information is captured will be useless to the eavesdropper)
A few follow-up questions:
1. What is it about quantum computers that can guarantee 100% secure communication channels?
2. If the communications are 100% secure, why are we worried about eavesdropping?
3. If it can detect eavesdropping, why do we need to concern ourselves with the information they might see/hear? Just respond to the detection.
4. What is it about quantum computing that would make an eavesdropper's overheard information useless to them, without also obviating said information to the intended recipients?
This is where the language used to discuss this topic turns into word salad for me. None of the things you said necessarily follow from the things said before them; they're just levied as accepted fact.
1. Nothing. Quantum Key Distribution is what they're talking about, and it still requires P!=NP because there's a classical cryptographic step involved (several, actually). It just allows you to exchange symmetric keys with a party you've used classical cryptography to authenticate, it's vulnerable to MITM attacks otherwise. So you're dependent on classical signatures and PKI to authenticate the endpoints. And you're exchanging classical symmetric keys, so still dependent on the security of classical encryption like AES-GCM.
2. Because they're not 100% secure. Only the key exchange step with an authenticated endpoint is 100% secure.
3. Eavesdropping acts like a denial of service and breaks all communications on the channel.
4. It makes the information useless to everyone, both the eavesdropper and the recipients. Attempting to eavesdrop on a QKD channel randomizes the transmitted data. It's a DOS attack. The easier DOS attack is to break the fiber-optic cable transmitting the light pulses, since every endpoint needs a dedicated fiber to connect to every other endpoint.
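For what it's worth, points 3 and 4 can be illustrated with a toy intercept-resend model of BB84 (a cartoon of the protocol, not any real QKD stack; function and variable names are made up): with no eavesdropper the sifted key agrees perfectly, while an intercept-resend eavesdropper corrupts about 25% of the sifted bits, which is exactly what the endpoints check for.

```python
import random

def bb84_sample(n_rounds, eavesdrop):
    """Toy intercept-resend model of BB84. Alice sends random bits in random
    bases; Eve optionally measures in a random basis and resends; Bob measures
    in a random basis. Returns the error rate on the sifted (matching-basis) key."""
    errors, sifted = 0, 0
    for _ in range(n_rounds):
        bit, a_basis = random.randint(0, 1), random.randint(0, 1)
        if eavesdrop:
            e_basis = random.randint(0, 1)
            # wrong basis -> Eve's outcome (and the state she resends) is random
            bit_sent = bit if e_basis == a_basis else random.randint(0, 1)
            basis_sent = e_basis
        else:
            bit_sent, basis_sent = bit, a_basis
        b_basis = random.randint(0, 1)
        measured = bit_sent if b_basis == basis_sent else random.randint(0, 1)
        if b_basis == a_basis:              # Alice and Bob keep matching-basis rounds
            sifted += 1
            errors += measured != bit
    return errors / sifted

print(bb84_sample(100_000, eavesdrop=False))  # ~0.0
print(bb84_sample(100_000, eavesdrop=True))   # ~0.25
```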
> Only the key exchange step with an authenticated endpoint is 100% secure.
It's 100% secure in theory, assuming a model of the hardware (which is impossible to verify even if you could build it to "perfectly" satisfy all model assumptions, which of course you also can't).
Sorry, but I think the way you're phrasing this implies a burden on them to explain well understood and widely accepted principles of quantum physics that you seem to be implying are pseudoscience.
This seems like a decent overview if you want to learn more: https://www.chalmers.se/en/centres/wacqt/discover-quantum-te....
I feel like most of your answer was just re-stating the question. I’m happy to admit that’s almost certainly a mix of my ignorance on the topic at hand, and I have been primed to view the discussions surrounding quantum computing with suspicion, but either way, that’s the way it reads to this layperson.
What is the difference between channel error or distortion and eavesdropping?
For eavesdropping, there is someone there who cares about what you're sending and is successfully learning things about it.
If studio execs have their way, Quantum DRM will be the killer use case…
Jokes on them, we'll just end up creating and using quantum pirating systems or even the dreaded Quantum Analog Hole to evade it.
AFAIK, in the case of Microsoft, it's less FOMO and more about execs being able to impress their peers at other companies. So not really a fear of missing out, but a desire to have exclusive access to a technology that has already been socialized and widely understood to be impressive. It's a simple message: 'that impressive thing you've been reading about, we're the ones building that'.
Also: the big company "thought leaders" need something new to talk about every year at conferences like "Microsoft Ignite" or whatever. These people will push funding into things like quantum research just for this. I'm sure they're getting lots of mileage out of LLMs these days...
I'm maybe a little jaded having worked on whole products that had no market success, but were in fact just so that the company had something new to talk about.
>Isaac Newton was in on the alchemist bandwagon
An often overlooked or unmentioned fact too!
I think that's a shame, because knowing these things about them humanizes the (for lack of a better term) smartest people in history.
Yes, Newton invented calculus, but he also tried to turn lead into gold!
So you too, might be able to do something novel, is the idea.
It's not only big tech. For months I've been reading about state-sponsored joint ventures between companies in various European countries working on QC. When you follow the path, there is a bundle of freshly created companies in every country, each claiming a branch like quantum communication, quantum encryption, quantum this... all working together and cooperating with the same companies in other EU countries.
I'm still trying to figure out what is going on. Are they pre-positioning themselves for the upcoming breakthroughs, and until then it will be like the beginning of AI, where many claimed to have it but actually just pretended? Additionally, they likely want access to the money flow.
It is really desperation, the low-hanging fruit of computing paradigm shifts to fuel the "tech" industry's growth was completely plucked more than a decade ago.
D-Wave venturing into blockchain stuff raised a red flag for me as a layman investor: https://ir.dwavesys.com/news/news-details/2025/D-Wave-Introd...
There are maybe other reasons to invest, but this caused me to sell my shares
> won't be the last time we hear about quantum research that has been foundational to a lot of work
This research wasn't foundational to a lot of work. Most of important/foundational works in quantum (doesn't matter if computing or general, I'm not sure which one you meant) are verified. How can you possibly base your experimental work on someone else's work if you can't replicate it?
Scientific research today is largely about publishing positive results; we rarely see negative results published, and most people focus on publishing novel work rather than attempting to validate someone else's work.
I agree with you, it's a terrible idea to base your work on someone else's when it hasn't been well confirmed by independent research.
I consider the source work in the OP as foundational because Microsoft built so much work and spent so many resources building on top of it. It's not foundational to the entire field but it is foundational to a lot of follow-up research.
> I agree with you, it's a terrible idea to base your work on someone else's when it hasn't been well confirmed by independent research.
It's not about whether it's a good or bad idea. To make follow-up experiments you need to first reproduce the original experiment. That's why faking "big" experiments like Schön's could never work.
> Microsoft built so much work and spent so many resources building on top of it. It's not foundational to the entire field but it is foundational to a lot of follow-up research.
With all due respect, a single group (even a large one) doing a single type of experiment (even an important and complicated one) is not a lot of research. Also, Microsoft knew about the data manipulation; that's why they moved the experiments in-house. They didn't do experiments under the assumption that the early Majorana papers were correct; otherwise they wouldn't have needed to develop their own complicated (and somewhat controversial) protocol to detect Majoranas. It was quite clear to everyone that, regardless of data manipulation, people were too optimistic in interpreting the Majorana signatures in those early papers.
Pessimistically I think it's most comparable to fusion. Theoretically possible but very difficult. I'm biased because I'm in the industry, but nothing has cropped up that I've seen that requires a miracle.
> I'm biased because I'm in the industry, but nothing has cropped up that I've seen that requires a miracle.
Scaling is itself the open question. Gravitational effects start creeping in when you scale up sensitive entangled systems and we don't have a good understanding of how gravity interacts with entanglement. Entangled systems above a certain size may just be impossible.
The difference is that whenever it’s daytime and there aren’t many clouds in the sky, you can see an example of fusion working at scale…
And every time you use a transistor, observe a green plant living, or see that your hand does not pass through the table when you tap it, you see quantum mechanical effects working at scale. Every time you use a telescope, you see quantum information processing (interference) at scale. The control over that process is the difficult part, same as with fusion.
Transistors, plants, hands, and tables are all macroscopic. We see quantum mechanical effects, but we do not see stable superpositions. None of these real-world examples seem to shield a quantum state from decoherence in the way a quantum computer needs to. The sun demonstrates clearly that fusion is controllable (albeit in a regime we struggle to match). I don't think your examples show that a quantum state can be controlled at the scale we need it to be, and I don't know of any extant phenomena that do. But I am no expert, yell at me if I'm wrong.
On the contrary, we have plenty of examples of long-lived (many hours, days, or more) superposition in solid-state materials and chemical systems. For a quantum computer you need both (1) something that can retain superposition and (2) something that is easily controllable. Point 2 is the difficult one, because if you can control it (interact with it), the "environment" can interact with it and destroy it as well.
All of the examples of macroscopic effects above are possible thanks to effects explainable only through the existence of superposition. It is just that they are not particularly controllable and thus not of interest for storing quantum information.
Another fun point: the example you are focusing on, fusion happening in the sun, is only possible due to the quantum tunneling effect, which is itself dependent on "superposition" being a real thing. Looking past the clouds at our star is already an example of quantum mechanics working, which is very much an experimental observation of an effect possible only thanks to the existence of superposition.
Quantum state is the miracle in my opinion. By definition it can never really be confirmed.
You cannot observe the initial state because that collapses the superposition. Said more simply, we can only see the end result and make educated guesses as to how it happened and what the state was prior to the experiment.
Wait just so I'm sure I understand what you're saying: you tried to read but don't understand the mathematics, therefore it's smoke and mirrors.
Perhaps one only needs to realise that reality is not mathematics?
It is all a scam. The research side is interesting for what it is, but the idea of having any type of useful "quantum computer" is sci-fi make believe. The grifters will keep stringing investors and these large corporations along for as long as possible. Total waste of resources.
IBM has given the public access to qubits for close to a decade, including a free tier, and as far as I know it produced a stream of research articles that fizzled out several years ago and nothing generally useful.
https://en.wikipedia.org/wiki/IBM_Quantum_Platform
I became disillusioned when I learned that 5x3=15 was the largest number that has been factored by a quantum computer without tricks or scams. Then I became even more disillusioned when I learned that even the 15 may not be legit…
https://www.reddit.com/r/QuantumComputing/comments/1535lii/w...
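For anyone wondering what "factoring 15" actually entails: in Shor's algorithm, the only part a quantum computer is supposed to accelerate is finding the period of a^x mod N; everything after that is ordinary classical number theory. Below is a toy Python sketch (my own illustration, not any published experiment) in which the "quantum" step is replaced by classical brute force, just to show how little work remains once the period is known.

    from math import gcd
    from random import randrange

    def find_period(a: int, n: int) -> int:
        """Brute-force the multiplicative order of a mod n.
        This is the one step a quantum computer is supposed to speed up."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_classical_demo(n: int) -> tuple:
        """Classical skeleton of Shor's algorithm: pick a base, find its
        period, and turn an even period into a nontrivial factor of n."""
        while True:
            a = randrange(2, n)
            d = gcd(a, n)
            if d > 1:                      # lucky guess already shares a factor
                return d, n // d
            r = find_period(a, n)
            if r % 2 == 0:
                y = pow(a, r // 2, n)
                if y != n - 1:             # skip the useless case a^(r/2) = -1 mod n
                    p, q = gcd(y - 1, n), gcd(y + 1, n)
                    if 1 < p < n:
                        return p, n // p
                    if 1 < q < n:
                        return q, n // q

    print(shor_classical_demo(15))  # (3, 5) or (5, 3)

For N = 15, almost any random base hands you a factor after a period of 2 or 4, which is part of why it makes such an easy demo.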
Why do you think so?
Your words sound like what people said in the 40s and 50s about computers.
In the 40s and 50s programmable general-purpose electronic computers were solving problems.
Ballistics tables, decryption of enemy messages, and more. Early programmable general-purpose electronic computers, from the moment they were turned on could solve problems in minutes that would take human computers months or years. In the 40s, ENIAC proved the feasibility of thermonuclear weaponry.
By 1957 the promise and peril of computing entered popular culture with the Spencer Tracy and Katharine Hepburn film "Desk Set" where a computer is installed in a library and runs amok, firing everybody, all while romantic shenanigans occur. It was sponsored by IBM and is one of the first instances of product placement in films.
People knew "electronic brains" were the future the second they started spitting out printouts of practically unsolvable problems instantly-- they just didn't (during your timeframe) predict the invention and adoption of the transistor and its miniaturization, which made computers ubiquitous household objects.
Even the quote about the supposed limited market for computers trotted out from time-to-time to demonstrate the hesitance of industry and academia to adopt computers is wrong.
In 1953 when Thomas Watson said that "there's only a market for five computers" what he actually said was "When we developed the IBM 701 we created a customer list of 20 organizations who might want it and because it is so expensive we expected to only sign five deals, but we ended up signing 18" (paraphrased).
Militaries, universities, and industry all wanted all of the programmable general-purpose electronic computers they could afford the second it became available because they all knew that it could solve problems.
Included for comparison is a list of problems that quantum computing has solved:
I don't think that you can really make that comparison. "Conventional" computers had more proven practical usage (especially by nation states) in the 40s/50s than quantum computing does today.
Survivor bias. Just because a certain thing seemed like a scam and turned out useful does not mean all things that seem like a scam will turn out useful.
GP's comment didn't suggest that every supposed scam will turn out to be useful.
Quite the opposite, in fact. It was pointing out that some supposed scams do turn out to be useful.
GP is just blatantly wrong. Electronic computation was NEVER considered a "scam".
The Navy, Air Force, government, private institutions, etc didn't dump billions of funding into computers because they thought they were overrated.
You can't put a man on the Sun just because you put one on the Moon.
By the 1940s and 50s, computers were already being used for practical and useful work, and calculating machines had a _long_ history of being useful, and it didn't take that long between the _idea_ of a calculating machine and having something that people paid for and used because it had practical value.
They've been plugging along at quantum computers for decades now and have not produced a single useful machine (although a lot of the math and science behind it has been useful for theoretical physics).
Do you have any citations for that?
https://www.ittc.ku.edu/~evans/stuff/famous.html
Do you have any citations for that?
I'm not the OP, but when you're of a certain age, you don't need citations for that. Memory serves. And my family was saying those sorts of things and teasing me about being into computers as late as the 1970's.
I can attest to the fact that people who didn't understand computers at all were questioning the value of spending time on them long after the 1970s. The issue is that there are people today who do understand quantum computing that are questioning their value and that's not a great sign.
When you’re of a certain age, time has likely blurred your memories. Citation becomes more important then. Source: me I’m an old SOB.
Source: me I’m an old SOB.
By your own criteria, a citation better than "me" is needed.
Looks like you've got it.
I would actually like to read about that, though.
[flagged]
> I very much quantum today as smoke and mirrors
The most accurate and experimentally tested theory of reality is "smoke and mirrors".
There are so many other areas to say that about, even in physics. But this?...
With the context of the article, it's clear that GP means quantum computing.
Quantum annealers have been working on real world problems for a while now - assuming they can be expressed as combinatorial optimization problems of course.
But there is no scalable computational advantage with quantum annealers. They are not what most people in the field would call a "(scalable/digital) quantum computer".
This is such a beautiful theoretical idea (a type of "natural" error correction which protects the qubits without having to deal with the exorbitant overhead of error correcting codes). It is very disheartening and discouraging and just plain exhausting that there has been so much "data manipulation" in this subfield (see all the other retracted papers from the last 5 years mentioned in the article). I can only imagine how hard this must be on the junior scientists on the team who have been swept into it without much control.
Hopefully people are keeping lists of the PIs on these retracted papers and keeping that in mind for future grants, hiring, etc. I know almost nobody is, but one can hope.
Academic fraud ranging from plagiarism to outright faking data should, more often than not, make it basically impossible for you to get any academic job whatsoever, in your field or others.
This chip is an extreme example, but we're talking about potentially millions of dollars of productivity, with hundreds or even thousands of people spending months or years on something based on a fabrication.
The person or people directly responsible for this should never work again.
Totally agree! As with any behavior that is difficult to detect and often goes unnoticed, the punishment should be large enough that the expected value of fraud is clearly net negative for those who might feel tempted to "tweak some numbers".
In case anybody else also isn't familiar with "PI" as an abbreviation in this context:
> In many countries, the term principal investigator (PI) refers to the holder of an independent grant and the lead researcher for the grant project, usually in the sciences, such as a laboratory study or a clinical trial.
Source: https://en.wikipedia.org/wiki/Principal_investigator
Or get rehabilitated, like Leo Kouwenhoven, see https://delta.tudelft.nl/article/gerehabiliteerde-kouwenhove...
I missed the redemption.
This would be half of the NSF grants, according to the replication-crisis work.
> Academic fraud ranging from plagiarism to outright faking data should, more often than not, make it basically impossible for you to get any academic job whatsoever, in your field or others.
Sadly, the system is often rewarding fake or, especially, exaggerated/misrepresented data and conclusions. I think that a significant proportion of articles exaggerate findings and deliberately cherry-pick data.
It's a market for lemons. Proving misrepresentation is really hard, and the rewards for getting away with it are immense. Publishing an article in Nature, Science, or Cell is a career-defining moment.
Yeah I agree it's not an easy problem to solve by any stretch. I'm not a professor or scientist so I won't pretend to understand the intricacies of journal publication and that sort of thing.
But I do wonder when someone's PhD thesis gets published and it turns out they plagiarized large parts of it, why isn't their degree revoked? When someone is a professor at a prestigious institution and they fabricate data, why are they still teaching the following year?
Serious universities do often revoke doctoral degrees if plagiarism is proven. I've seen Oxford University go as far as demanding that someone issue a correction to a journal article to cite prior work, because they were making claims of novelty that were not true.
> When someone is a professor at a prestigious institution and they fabricate data, why are they still teaching the following year?
Internal politics. Committees judging potential misconduct are not independent. If you are sufficiently high up in the ladder, you can get away with many things. Sweden recently created a Swedish National Board for Assessment of Research Misconduct (Npof) to address this problem. I think this is a step in the right direction.
But, ultimately, I think academic fraud should be judged in court. However, e.g. Leonid Schneider (forbetterscience.com) has been taken to court several times for reporting fraud, including fraud that led to patient death, and some judges didn't seem to care much about data fabrication / misrepresentation.
>Academic fraud ranging from plagiarism to outright faking data should, more often than not, make it basically impossible for you to get any academic job whatsoever, in your field or others.
That might actually be a perverse incentive. If you've already nuked your career with some fraud, you can't make it worse with extra fraud... so why ever stop? People inclined to do this sort of thing, when faced with that deterrent, just double down and commit even more fraud; they figure the best they can hope for is to do it so much and so perfectly that they're never discovered.
The trouble is that the system for science worked well when there were only a tiny number of scientists, but now we're a planet of 8 billion where people tell their children they have to go to college and get a STEM degree. Hell, you can only become a scientist by producing new research, even if there's not much left to research in your field. And the only way to maintain that position as a scientist is to publish or perish. We have finite avenues of research with an ever-growing population of scientists; bullshit is inevitable.
My hunch is that these double down types won't be dissuaded by much of anything. I think fundamentally this kind of person has a risk taking personality and often feels they will get away with it.
You stop because you can’t get a job?
Even if "bullshit is inevitable" is true -- I don't think it is -- that doesn't mean we shouldn't punish people who make up data, who steal others' work, who steal grant money by using their fake data to justify future grants.
"Well there's lots of people now" is not really a great justification. You become a low trust society by allowing trust to deteriorate. That happens in part because you choose not to punish people who violate that trust in the first place.
>that doesn't mean we shouldn't punish people who make up data,
I am not wishy-washy on punishment. A part of me that I do not deny nor suppress wants punishment for those who do wrong.
But sometimes punishments are counter-productive. The easiest example is the death penalty for heinous non-murder crimes. This incentivizes the rapist or child molester (or whatever) to kill the victim. You can't execute them twice, after all, so if they're already on the hook for a death-penalty crime, murdering the victim also gets rid of a prime witness whose testimony could get them the death penalty, without increasing their odds of receiving it.
"Career death penalty" here is like that.
>"Well there's lots of people now" is not really a great justification.
It wasn't meant to be a justification. It was an explanation of the problem, and (in part, at least) an attempt to show that things need to change if we want the fraud to go away.
>You become a low trust society by allowing trust to deteriorate
We've been a low trust society for a long time now. People need to start thinking about how to accomplish the long, slow process of changing a low trust society to a high trust one.
In the case of academia, I’m fine with harsh punishment for people who fabricate data, even if it does incentivize them to be more brazen with their fabrications in the short term. Makes it easier to catch them!
The fact is, we don’t want these people in academia at all. You want researchers who are naturally inclined not to fabricate data, not people who only play by the rules because they think they’re otherwise going to get caught.
>We've been a low trust society for a long time now.
Although trust has been decreasing, the US remains a high-trust society compared to the global average.
> We've been a low trust society for a long time now. People need to start thinking about how to accomplish the long, slow process of changing a low trust society to a high trust one.
The core problem is that most people define their self-worth by their employment, and no matter what, this is all going to crash hard due to automation. The generation currently in power is doing everything they can to deny and downplay what is about to happen, instead of helping our societies prepare.
We're all being thrown into the rat race. We're told, both explicitly and through personal experience, that there is no alternative but to become top dog at all costs, because that will be the only chance to survive once automation truly hits home. The result is that those who feel they have failed the rat race and have no hope of catching up withdraw from the "societal contract" and just do whatever they want, at the expense of others if need be.
Sabine was already skeptical in February [0]. Although to be fair, she usually is :) But in this field, I think it is warranted.
[0]: https://backreaction.blogspot.com/2025/02/microsoft-exaggera...
Is there something she is not skeptical of or controversial about?
I don’t know if she has changed or I have changed but I don’t enjoy her stuff anymore. Maybe I should watch some of her old stuff and figure that out.
I’m starting to feel the same indeed.
Einstein equations
Hm, that may be controversial in itself, depending on where you stand in the current cosmology.
A broken clock....
Sabine Hossenfelder has grown a little… controversial… lately. You should probably do some googling (or YouTube searching, in this case.) It's not entirely clear to me what's going on but some of her videos do raise serious question marks.
I've found great success in ignoring, entirely, baseless aspersions cast by faceless anon avatars about people in the public eye.
Can you be more specific about what you are alleging?
And a little controversy is not automatically a problem or a reason to discount/ignore someone anyway.
No, I'm intentionally not taking a position or alleging anything. I'm pointing out the existence of some controversy. It's up to you to decide whether you want to look into it, and if yes, what sources to prefer.
She has a tendency to be wrong on things outside her domain of expertise. It's the classic being an expert in one field and thinking you're an expert in all of them.
Please give specific examples. I keep seeing vague comments like this about her, but very little in the way of specifics. Without specifics, this is just ad hominem rumor mongering.
Extreme specifics: her comments on work out of MIT on color-center qubits were basically "finally an example of actual progress in quantum computing because of reasons A, B, C". That statement was in the class of "not even wrong" -- it was just a complete non sequitur. People actually in the fields she comments on frequently laugh at her uninformed nonsense. In this particular case, the people who did the study she praised were also among the ones laughing at her.
There was an email she claimed to have received many years ago from another academic essentially saying "you're right that a lot of academic research is BS and just a jobs program for academics, but you shouldn't point that out because it's threatening a lot of people's livelihood." Some people are claiming she fabricated this alleged email etc., I haven't looked too much into it myself.
I end up playing father confessor often enough at work that I have had to launder things people have complained about.
When you are trying to make the right calls for a team, you need to know what the pushback is, but the bullies and masochists on the team don’t need to know who specifically brought forward a complaint as long as the leadership accept it as a valid concern.
So if everyone knows I had a private meeting with Mike yesterday it’s going to be pretty fucking obvious that I got this from Mike unless I fib a bit about the details.
Saying that a conversation which happened during a visit happened over email sounds like the sort of thing I might lie about, while not making up the conversation from whole cloth.
Not that Sabine is perfect. I’ve let the YouTube algorithm know I want to see less of her stuff. But just because there is no email doesn’t mean there was no conversation.
Looking at the paper, cherry picking 5 out of 21 devices is in itself not a deal breaker IMO, but it's certainly something they should have disclosed. I bet this happens all the time with these kinds of exotic devices that take almost a year to manufacture only for a single misplaced atom to ruin the whole measurement.
Averaging the positive and negative Vbias data, and the many other manipulations, are hard to justify; this reeks of "desperate PhD student needs to publish at all costs". Yet at the same time I wouldn't fully disqualify the findings, but I would make the conclusion a lot weaker: "there might be something here".
All in all, it's in Microsoft's interests that the data is not cooked. They can only ride on vaporware for so long. Sooner or later the truth will come out; and if Microsoft is burning a lot of cash to lie to everyone, the only loser will be Microsoft.
It wasn't just that by itself. There was a list of several undisclosed data tweaks and manipulations. None were particularly fraudulent or anything, but once you have them all included in the paper, as the former author was complaining, it seems more likely that they just manipulated the theory and data as needed to make them match. There's a big difference between predicting something and demonstrating it in experiment, versus showing your theory can be made to fit some data you have been given when you can pick the right adjustments and subset of data.
> cherry picking 5 out of 21 devices is in itself not a deal breaker IMO
Might as well draw a straight line through a cloud of data points that look like a dog
It's a physical device at the bleeding edge of capabilities. Defects are pretty much a guarantee, and getting a working sample is a numbers game. Is it really that strange to not get a 100% yield?
Having 5 working devices out of 21 is normal. The problem is that the other 16 weren't mentioned.
Well, you also need to account for what kind of deviation we're talking about among the 21. If they selected the 5 because they were the best, but the others showed results within, say, 0-5% of those 5, then sure, that is acceptable. But if we're talking about flipping a coin 21 times, seeing heads 16 times, and then choosing the 5 tails outcomes as the results, then I would say that's pretty unacceptable.
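To put a rough number on that intuition (purely hypothetical devices, nothing to do with the actual measurements in the paper): if 21 devices produce nothing but noise, reporting only the best 5 can still manufacture what looks like a signal.

    import random

    def cherry_picked_mean(n_devices: int = 21, n_kept: int = 5,
                           n_trials: int = 10_000) -> tuple:
        """Simulate devices that are pure noise (signal ~ N(0, 1)) and compare
        the honest average over all devices with the average of the best n_kept."""
        honest = picked = 0.0
        for _ in range(n_trials):
            signals = [random.gauss(0, 1) for _ in range(n_devices)]
            honest += sum(signals) / n_devices
            picked += sum(sorted(signals, reverse=True)[:n_kept]) / n_kept
        return honest / n_trials, picked / n_trials

    print(cherry_picked_mean())  # honest mean ~0.0 vs cherry-picked mean ~+1.3 sigma

So whether 5-of-21 is acceptable really does hinge on how the 5 were chosen and on disclosing the other 16.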
I do not think this is the right metaphor. Having 5 devices work out of 21 is actually a better yield than what TSMC would get with a brand new process. This is not just normal, this is expected. It is all the other allegations that make this be a very questionable case.
Like I said, a single misplaced atom is enough to wreak havoc on the behaviour of these things. That's not the problem; everyone knows there's a large gap between observing a phenomenon and making it consistently manufacturable with high yield.
> the only loser will be Microsoft.
Not really - that cash could have been allocated to more productive work and more honest people.
Sadly I have the feeling some people are starting to just "play" at being scientists/engineers and not actually do the real work anymore.
MBA science.
Only perception matters?
"Fake it till you make it" was practically the motto of young scientists when I was matriculating. In fairness, I don't think they really meant "fake your research" but our entire incentive/competition based society encourages positive misrepresentation - you can't do science, good or bad, if you get competed out of the system entirely.
Guy Debord wrote a book about what he called "The Society of the Spectacle," wherein he argues that capitalism, mostly by virtue of transforming objects into cash at the point of exchange, (that is, a person can take the money and run) tends to cause all things to become evacuated, reduced as much as possible to their image, rather than their substance.
I believe even GK Chesterton understood this when he said that the purpose of a shovel is to dig, not to be sold, but that capitalism tends to see everything as something to be sold primarily and then as something to be used perhaps secondarily.
There is some truth in all this, I think, though obviously the actual physical requirements of living and doing things place some restrictions on how far we can transform things into their images.
"Fake it till you make it." has turned into "fake it." recently, and it seems to be working disturbingly well in society.
As far as I can tell the only thing >25 years of development into quantum computing implementations has resulted in is the prodigious consumption of helium-3.
At least with fusion we've gotten some cool lasers, magnets, and test and measurement gear.
This kind of fundamental research though is absolutely worth it. For a fairly small amount of money (on the nation-state scale) you can literally change the world order. Same deal with fusion or other long-term research programs.
Quantum computers are still in a hype bubble right now, but having a "real", functional one (nothing right now is close IMO) is as big a shift as nuclear energy or the transistor.
Even if we don't get a direct result, ancillary research products can still be useful, as you mentioned with fusion.
People who believe in quantum can put their own money where their mouth is. Plenty of quantum stocks to speculate on out there.
You are right about that (well, except all the progress in classical complexity theory and algorithms, cosmology, condensed matter physics, material science, and sensing, which stemmed from this domain).
But, for the little it is worth, much more time passed between Babbage conceiving of a classical computer and humanity developing the technology to make classical computers reliable. Babbage died before it was possible to build the classical computers he invented.
If you are going to use Babbage as the start of the clock, we must use the mechanical and electromechanical logarithmic and analytical engines created in the late 1800s/early 1900s as the stop.
We must also use 1980 as the year in which quantum computing was "invented".
As far as progress goes, in all of those fields there is naught but papers saying "quantum computing would be totally rad in these fields" or simulations that are slower than classical computers (by, like, a lot).
There was a programmable electromechanical computer built in the late 1800s? Not just a simple calculator? Please share examples, this sounds awesome.
Yes, late 1980s is when I would say quantum computing was conceived.
I gave plenty of examples of positive outcomes thanks to quantum information science in my parenthetical. It is much more than the overhyped VC-funded vapor.
Based on the comments in this thread... Guys, Microsoft fuckery doesn't invalidate an entire field.
I think certain VCs are a little too optimistic about quantum computing timelines, but that doesn't mean it's not steadily progressing. I saw a comment talking about prime factorization from 2001 with some claim that people haven't been working on pure quantum computing since then?
It's really hard. It's still firmly academic, with the peculiar factor that much of it is industry backed. Google quantum was a UCSB research lab turned into a Google branch, while still being powered by grad students. You can begin to see how there's going to be some culture clash and unfortunate pressure to make claims and take research paths atypical of academia (not excusing any fraud, edit: also to be very clear, not accusing Google quantum of anything). It's a hard problem in a funky environment.
1) it's a really hard problem. Anything truly quantum is hard to deal with, especially if you require long coherence times. Consider the entire field of condensed matter (+ some amo). Many of the experiments to measure special quantum properties/confirm theories do so in a destructive manner - I'm not talking only about the quantum measurement problem, I'm talking about the probes themselves physically altering the system such that you can only get one or maybe a few good measurements before the sample is useless. In quantum computing, things need to be cold, isolated, yet still read/write accessible over many many cycles in order to be useful.
2) Given the difficulty, there have been many proposals for how to meet the "practical quantum computer" requirement. This ranges from giving up on a true general-purpose quantum computer (quantum annealers) to NV centers, neutral/ionic lattices, SQUID/Josephson-based, photonic, hybrid systems with mechanical resonators, and yeah, topological/anyon shit.
3) It's hard to predict what will actually work, so every approach is a gamble and different groups take different gambles. Some take bigger gambles than others. I'd say topological quantum was a pretty damn big gamble given how new the theory was.
4) Then you need to gradually build up the actual system + infrastructure, validating each subsystem, then subsystem interactions, and finally full systems. Think system preparation, system readout, system manipulation, isolation, gate design... Each piece of this could be multiple PhDs' and postdocs' worth of work across physics, ECE/CSE, ME, and CS. This is deep expertise and specialization.
5) Then if one approach seems to work, however poorly*, you need to improve it and scale it. Scaling is not guaranteed. This will mean many more PhDs' worth of work trying to improve subsystems.
6) Again, this is really hard. Truly, purely quantum systems are very difficult to work with. Classical computing is built on transistors, which operate just fine at room temperature* (plenty of noise, no need for cold isolation) with macroscopic classical observables/manipulations like current and voltage. Yes, transistors work because of quantum effects, and more recent transistors use quantum effects (tunneling) even more directly. But, for example, the "atomic" units of memory are still effectively macroscopic. The systems as a whole are very well described classically, with only practical engineering concerns related to putting things too close together, impurities, and heat dissipation. Not to say that any of that is easy at all, but there's no question of principle like "will this even work?"
* With a bunch of people on HN shitting on how poorly it works + a bunch of other people saying it's a full-blown quantum computer + probably higher-ups trying to make you say it is a real quantum computer, or something about quantum supremacy.
*Even in this classical regime, think how much effort went into redundancy and encoding/decoding schemes to deal with the very rare bit flips (a toy sketch of one such scheme is at the end of this comment). Now think of what's needed to build a functioning quantum computer at a similar scale.
No, I don't work in quantum computing, don't invest in it, have no stake in it.
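Regarding that last footnote on classical redundancy, here is a minimal sketch of the classic 3-copy repetition code with majority-vote decoding (illustrative only). The same trick cannot be applied naively to qubits, since an unknown quantum state can't simply be copied and measuring it disturbs it, which is part of why quantum error correction carries so much more overhead.

    import random

    def encode(bit, copies=3):
        """Classical repetition code: store each bit several times."""
        return [bit] * copies

    def noisy_channel(bits, flip_prob):
        """Flip each stored copy independently with probability flip_prob."""
        return [b ^ (random.random() < flip_prob) for b in bits]

    def decode(bits):
        """Majority vote recovers the bit unless most copies flipped."""
        return int(sum(bits) > len(bits) / 2)

    # With a 1% physical flip rate, the 3-copy logical error rate is ~3e-4.
    trials = 100_000
    errors = sum(decode(noisy_channel(encode(1), 0.01)) != 1 for _ in range(trials))
    print(errors / trials)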
Why couldn't single-use quantum computers be a viable path?
General computing is great, but we built the Large Hadron Collider to validate a few specific physics theories; couldn't we make do with a single-use quantum computer for important problems? Prove out some physics simulation, or break some military encryption or something?
Oh sure, I'm all for that in the mean time. But the people funding this are looking for big payoff. I want to be clear that this is not my field and I'm probably a bit behind on the latest, especially on the algorithmic side.
IIRC some of them have done proof-of-principle solutions for the hydrogen atom ground state, for example. I haven't kept up, but I'm guessing they've solved more complicated systems by now. I don't know if they've gone beyond ground states.
Taking this particular problem as an example... The challenge, in my mind, is that we already have pretty good classical approaches to the problem. Say the limit of current approaches is characterized by something like the number of electrons (I don't know the actual scaling factors), and call that number N_C(lassical). I think the difficulty of building a hypothetical special-purpose quantum ground state solver that can handle N_Q >> N_C is similar enough to the difficulty of scaling a more general quantum computer to a problem size of only moderately smaller magnitude that it's probably hard to justify funding the special-purpose machine over the generic one.
I could be way off, and it's very possible there are new algorithms for specific problems that I'm unaware of. Such algorithms, with an accompanying special-purpose quantum computer, could make its construction investible in the sense that efficient solutions to the problem under consideration are worth enough to offset the cost. Sorry, that was very convoluted phrasing, but I'm on my phone and I gtg.
that's going to be a banger bobbybroccoli video
So the chip is a paperweight ?
It's 5 years away, just like cold fusion and AI.
We've had cold fusion for years : https://en.wikipedia.org/wiki/Adobe_ColdFusion
And while searching for this silly joke, I'm now baffled by the fact that it's still alive !
For every framework that ever existed there's somewhere out there a computer running it and doing real work with it, without any updates since autumn 1988, while the google wannabe solo founders worry about the best crutch^H^H^H^H tooling, their CI/CDs and not scaling.
I propose everything is a paperweight until you show an implementation.
Intellectual property is the entire point of modern tech. It doesn't matter if it doesn't work. They want the IP and sit on it. That way, if someone else actually does the work, they can claim they own it.
repeal IP laws and regulate tech.
[flagged]
Microsoft's finest, ladies and gentlemen.
Looks like the end of the world has been delayed.
I bet quantum computing will go the way of Newtonian physics - wrong, and only shown to be so once our ability to measure things improved.
It's as if Newton had insisted that he was right despite the orbit of Mercury being weird, and blamed his telescope.
Physics is just a description of reality. If your business depends on reality being the same as the model, then you're going to have an expensive time.
We still use Newton's equations for building rockets (and a lot of other things). A theory being replaced by a better one does not mean devices motivated by the obsoleted theory stop working...
We use general relativity for anything fast-moving, I think. Not sure, but pretty sure. GPS wouldn't work with Newton. But that's the point: Newton mostly works, to within the error of measurement.
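For what it's worth, the GPS point can be made quantitative with textbook numbers (approximate orbital parameters assumed below, so treat this as a back-of-the-envelope sketch): satellite clocks run fast by roughly 38 microseconds per day relative to ground clocks, which, left uncorrected, would accumulate kilometres of range error every day.

    # Rough relativistic clock drift for a GPS satellite; approximate constants.
    GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    c = 2.998e8          # speed of light, m/s
    R_earth = 6.371e6    # Earth's radius, m
    r_orbit = 2.656e7    # GPS orbital radius, m
    day = 86400          # seconds per day

    v = (GM / r_orbit) ** 0.5                      # orbital speed, ~3.9 km/s
    sr = -v**2 / (2 * c**2)                        # special relativity: clock runs slow
    gr = GM / c**2 * (1 / R_earth - 1 / r_orbit)   # general relativity: clock runs fast
    drift = (sr + gr) * day                        # net ~ +38 microseconds/day
    print(f"{drift * 1e6:.1f} us/day, ~{drift * c / 1000:.1f} km/day of range error")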
Uuh no? Quantum computing relies on some aspects of quantum physics, that at this point, have been thoroughly and extensively experimentally verified.
If there are objections to quantum computing, and I believe there are many, those are to be found in questioning the capability of current engineering to build a quantum computer, or the usefulness of one if it could be built.
As the old saying goes: the proof is in the pudding. And quantum computing has produced zero pudding. All hype, and zero pudding. When they actually do something useful (like the equivalent of general relativity and solving GPS), then we can see it as a useful theory.
From 12 Years Ago: August 26, 2013
"In early May, news reports gushed that a quantum computation device had for the first time outperformed classical computers, solving certain problems thousands of times faster. The media coverage sent ripples of excitement through the technology community. A full-on quantum computer, if ever built, would revolutionize large swathes of computer science, running many algorithms dramatically faster, including one that could crack most encryption protocols in use today.
"Over the following weeks, however, a vigorous controversy surfaced among quantum computation researchers. Experts argued over whether the device, created by D-Wave Systems, in Burnaby, British Columbia, really offers the claimed speedups, whether it works the way the company thinks it does, and even whether it is really harnessing the counterintuitive weirdness of quantum physics, which governs the world of elementary particles such as electrons and photons."
https://metanexus.net/proof-quantum-pudding/
https://spectrum.ieee.org/d-wave-quantum
Also, how about thinking about it this way: either there's this magical property of the universe that promises to do things no one can even dream of achieving otherwise, or there's something wrong with physicists' interpretation of physics. Oh, and they haven't proved it yet, but they promise they will, very soon.