Computer Simulation of EPR Scenarios

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: Computer Simulation of EPR Scenarios

Postby gill1109 » Sat Feb 15, 2014 1:30 pm

Joy Christian wrote:
gill1109 wrote:Chantal Roth is taking to R like a fish to water

http://rpubs.com/chenopodium/


All three of Chantal's fish are swimming in a boxed aquarium (i.e., in R^3), not in an ocean without boundary (i.e., in S^3). In the real ocean there are no "fudge factors" (i.e., no detector loophole, or any other type of loophole). The real ocean without boundaries can be found on my blog: http://libertesphilosophica.info/blog/.


Exactly. It is *not* possible to simulate your model with an ordinary computer, full stop. The inner workings of a classical computer are "flat".

We need a quantum computer to do it.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Computer Simulation of EPR Scenarios

Postby Heinera » Sat Feb 15, 2014 1:43 pm

gill1109 wrote:
Heinera wrote:Fantastic observation by Richard! Given the obvious connection to the detection loophole, I at least assumed that Christian and Fodje managed to get the correlations correct, since this is not difficult to achieve... And the connection to Caroline Thompson's paper is just priceless. I'll follow this up after the weekend when I have analysed it further (and there went some weekend plans down the drain ;) )


I am not sure that it is so easy to get the correlations *perfect*. It is easy to get a very close approximation to the cosine curve, and it is known that one can get arbitrarily close. However, I am not aware of a detection-loophole-type simulation which generates *exactly* the cosine.


http://arxiv.org/abs/quant-ph/9905018

Edit: If it were difficult to get the correlations perfect, the detection loophole wouldn't really be a conceptual problem, because models exploiting it could then be falsified empirically just by improving the accuracy of the experiments. But as the situation stands, we need to improve detector efficiency beyond the theoretical threshold to falsify them (which has recently been done).
Heinera
 
Posts: 917
Joined: Thu Feb 06, 2014 1:50 am

Re: Computer Simulation of EPR Scenarios

Postby gill1109 » Sat Feb 15, 2014 2:09 pm

Thanks! Indeed, I had forgotten this one. And it even has all the desired rotational symmetries.

Heinera wrote:http://arxiv.org/abs/quant-ph/9905018
Edit: If it were difficult to get the correlations perfect, the detection loophole wouldn't really be a conceptual problem, because models exploiting it could then be falsified empirically just by improving the accuracy of the experiments. But as the situation stands, we need to improve detector efficiency beyond the theoretical threshold to falsify them (which has recently been done).
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Computer Simulation of EPR Scenarios

Postby Joy Christian » Sat Feb 15, 2014 2:26 pm

gill1109 wrote:Exactly. It is *not* possible to simulate your model with an ordinary computer, full stop. The inner workings of a classical computer are "flat".

We need a quantum computer to do it.


Complete hogwash.

To begin with, Bell did not claim that no "ordinary" computer can simulate the EPR-Bohm correlation. He made a false analytical claim, which has been refuted by me in numerous analytical demonstrations, as documented on my blog: http://libertesphilosophica.info/blog/.

Moreover, there is no such thing as a "quantum" computer. All computers are, and always will be, classical computers.

And finally, my model has been simulated by many people on classical computers, including yourself, although you haven't been able to do a very good job of it.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: Computer Simulation of EPR Scenarios

Postby Ben6993 » Sat Feb 15, 2014 2:31 pm

For Richard:
I agree .... I don't quite see how one can get a perfect cosine curve with a simulation using randomly generated variables. One will get arbitrarily close I guess.

For Heinera:
What is the agreed theoretical threshold?

For Michel:
I agree .... one would want to run repetitions of a simulation and get average results over n runs. Just because the result in run 1 has a very small standard error does not mean that run 2 will have the same result as run 1 to within that small standard error. Well it might happen, but it would be important to try replications.
Ben6993
 
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm

Re: Computer Simulation of EPR Scenarios

Postby gill1109 » Sat Feb 15, 2014 2:45 pm

Ben6993 wrote:For Richard:
I agree .... I don't quite see how one can get a perfect cosine curve with a simulation using randomly generated variables. One will get arbitrarily close I guess.


No. The simulation generates some random variables from a certain desired probability distribution. But does the desired probability distribution have the desired mean value (the cosine of alpha minus beta)?

Of course there are all kinds of numerical limitations in this. The computer's "cosine" is not exactly the cosine ... indeed, the computer's real number can only take on finitely many different values.

What I mean is the following. Convert Minkwe's algorithm into mathematics. It says: let theta_0 be a uniformly distributed random number between 0 and pi/2 and independently of this, ... . We get a mathematical model for a bunch of random variables. We are interested in the mean value of some function of those random variables. In a Monte Carlo experiment, we "simulate" the random variables, we compute the function, we observe values of the function of those random variables. We average a whole heap of them. If the simulation was perfect and we went on for ever, the average would converge to the mean.
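To illustrate just the convergence part (a generic example, not Minkwe's actual model): take a function with a known mean value and watch the Monte Carlo average home in on it.

Code:
    # Monte Carlo average of a function of a simulated random variable
    # converging to its mean: E[cos(theta_0)] = 2/pi for theta_0 ~ U(0, pi/2).
    set.seed(1)
    n      <- 10^(1:6)                  # increasing numbers of trials
    target <- 2/pi
    est    <- sapply(n, function(k) mean(cos(runif(k, 0, pi/2))))
    cbind(n, est, error = est - target) # error shrinks like 1/sqrt(n)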

I am saying that the mean value of the thing Minkwe is simulating - the expectation value of the product of measurements of A and B - is not exactly equal to the cosine of alpha minus beta. It is off by up to approximately 0.001, depending on the angles concerned.

I'm saying that if Minkwe's simulation ran for ever, on a computer with infinite precision and a perfect pseudo random number generator, the observed correlation would converge to the wrong answer. Close, but different. But I'm saying that a version of his simulation "lifted" from S^1 to S^2 would do a little bit better.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Computer Simulation of EPR Scenarios

Postby minkwe » Sat Feb 15, 2014 3:03 pm

Richard,
In one of your papers, you derived an upper limit above which you claim the CHSH cannot be violated using any known loophole. Please could you enlighten us as to what that number is? Is it 75% or 83% or some other number?
Thanks.
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: Computer Simulation of EPR Scenarios

Postby Heinera » Sat Feb 15, 2014 3:04 pm

Ben6993 wrote:For Richard:
I agree .... I don't quite see how one can get a perfect cosine curve with a simulation using randomly generated variables. One will get arbitrarily close I guess.

For Heinera:
What is the agreed theoretical threshold?

For Michel:
I agree .... one would want to run repetitions of a simulation and get average results over n runs. Just because the result in run 1 has a very small standard error does not mean that run 2 will have the same result as run 1 to within that small standard error. Well it might happen, but it would be important to try replications.


The threshold (which is the consequence of a theorem, and not necessarily agreement ;) ) is 2*(sqrt(2)-1), approximately 83% detection. This wikipedia page has more info:
http://en.wikipedia.org/wiki/Loopholes_ ... xperiments
Heinera
 
Posts: 917
Joined: Thu Feb 06, 2014 1:50 am

Re: Computer Simulation of EPR Scenarios

Postby gill1109 » Sat Feb 15, 2014 3:46 pm

Heinera: 2*(sqrt(2)-1), approximately 83%, is the threshold in a CHSH experiment. Only two settings per measurement station. If you want to generate the cosine correlation for all possible settings, you have a harder job. The threshold (above which it is impossible with LHV) would be lower.

I just wrote a script for the Gisin and Gisin LHV model (http://arxiv.org/abs/quant-ph/9905018). It is pretty (the model, that is). Exactly 50% of the pairs are rejected: 25% of the time one of the measurement stations gives a "no show", another 25% of the time the other. The other 50% of the time, both measurement stations deliver an outcome.

My R script and some output:

http://rpubs.com/gill1109/13344
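In case the link dies, here is a minimal sketch of the symmetrized model as I read the paper. The variable names are mine, and this is an outline rather than my actual script:

Code:
    set.seed(42)
    N   <- 1e6
    # hidden variable: a uniform random unit vector on the sphere
    z   <- runif(N, -1, 1); phi <- runif(N, 0, 2*pi)
    lam <- cbind(sqrt(1 - z^2)*cos(phi), sqrt(1 - z^2)*sin(phi), z)

    theta <- pi/3                           # angle between the two settings
    a <- c(1, 0, 0); b <- c(cos(theta), sin(theta), 0)
    la <- drop(lam %*% a); lb <- drop(lam %*% b)

    coin <- runif(N) < 0.5                  # which station is "inefficient"
    detA <- coin  | (runif(N) < abs(la))    # the inefficient station detects
    detB <- !coin | (runif(N) < abs(lb))    #   with probability |setting . lam|
    A <- sign(la); B <- sign(lb)

    both <- detA & detB
    c(mean(!detA), mean(!detB), mean(both)) # about 0.25, 0.25, 0.50
    c(mean(A[both]*B[both]), cos(theta))    # conditional correlation ~ cos(theta)

For the singlet's -cos(theta) correlation, flip the sign of one station's outcomes.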
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Computer Simulation of EPR Scenarios

Postby Ben6993 » Sat Feb 15, 2014 4:07 pm

For Richard:

Thanks for the explanation. I follow you now. You mean that there is a very slight bias towards missing the target. So running the program longer and longer would tend towards giving more and more consistent results, but always biased ones: ever-increasing consistency, but always imperfect validity.

For Heinera:

Thanks for the link. I completely misunderstood you previously.
Ben6993
 
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm

Re: Computer Simulation of EPR Scenarios

Postby gill1109 » Sat Feb 15, 2014 4:10 pm

minkwe wrote:Richard,
In one of your papers, you derived an upper limit above which you claim the CHSH cannot be violated using any known loophole. Please could you enlighten us as to what that number is? Is it 75% or 83% or some other number?
Thanks.

There are different numbers depending on what loopholes are being exploited. According to Larsson (1998, 1999) and Larsson and Gill (2004) the threshold is 71% for a clocked experiment (pairs determined by externally determined coincidence windows), and 88% for an unclocked experiment (pairs determined by coincidence window determined by their own measurement window). And this threshold is the minimum, over all setting pairs, of the chance that Alice sees a particle given that Bob does, and vice versa (altogether 8 combinations). References are in my paper, arXiv:1207.5103.

And you have to know what we mean by this: we mean that, operating above this threshold, a local hidden variables model can't get the CHSH quantity equal to 2 sqrt 2.

The chance that you violate the CHSH inequality "... <= 2" can easily be arranged to be 50%, with a local realist model and no detection (or other) loophole. The point is that QM can, on average, get 2 sqrt 2, substantially more than 2.
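To illustrate that 50% remark with a toy construction of my own (not from any paper): a local realist model whose CHSH expectation is exactly 2, so that the finite-sample estimate of S lands above 2 on about half the runs.

Code:
    set.seed(2014)
    N <- 1e5                          # pairs per setting combination
    q <- (1 + 1/sqrt(2))/2            # P(local noise leaves an outcome alone)

    corr <- function(sgn) {           # one setting pair; sgn = -1 flips Bob
      lam <- sample(c(-1, 1), N, replace = TRUE)         # shared hidden variable
      A <- lam * sample(c(1, -1), N, TRUE, c(q, 1 - q))  # noisy local readout
      B <- sgn * lam * sample(c(1, -1), N, TRUE, c(q, 1 - q))
      mean(A * B)                     # expected value: sgn * (2q-1)^2 = sgn/2
    }

    S <- corr(1) + corr(1) + corr(1) - corr(-1)
    S  # expectation exactly 2; exceeds 2 on about half the runs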
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Computer Simulation of EPR Scenarios

Postby Joy Christian » Sat Feb 15, 2014 4:26 pm

gill1109 wrote:...a local hidden variables model can't get the CHSH quantity equal to 2 sqrt 2.


It is beyond my comprehension why people keep repeating this blatant lie. Here is but one example of how a local hidden variable model can and does get the CHSH quantity equal to 2 sqrt 2 (see, especially, eqs. 9.994 to 9.100 on page 232): http://libertesphilosophica.info/blog/w ... hapter.pdf.

And this, of course, is not the only example: http://arxiv.org/abs/1106.0748. What has happened to the scientific objectivity and pursuit of apolitical truth?
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: Computer Simulation of EPR Scenarios

Postby minkwe » Sat Feb 15, 2014 5:05 pm

gill1109 wrote:There are different numbers depending on what loopholes are being exploited. According to Larsson (1998, 1999) and Larsson and Gill (2004) the threshold is 71% for a clocked experiment (pairs determined by externally determined coincidence windows), and 88% for an unclocked experiment (pairs determined by coincidence window determined by their own measurement window).


Please could you clarify the difference between "pairs determined by externally determined coincidence windows", and "pairs determined by coincidence window determined by their own measurement window"?

And the percentages 71%, 88% are not the detection efficiency but rather the coincidence efficiency; is that a correct reading of your statement? Which means you could still have less than those numbers even if your detectors are 99.9% efficient, if you do not know how to match the pairs properly?
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: Computer Simulation of EPR Scenarios

Postby gill1109 » Sat Feb 15, 2014 5:25 pm

minkwe wrote:Please could you clarify the difference between "pairs determined by externally determined coincidence windows", and "pairs determined by coincidence window determined by their own measurement window"?

And the percentages 71%, 88% are not the detection efficiency but rather the coincidence efficiency; is that a correct reading of your statement? Which means you could still have less than those numbers even if your detectors are 99.9% efficient, if you do not know how to match the pairs properly?


The percentages I quoted are not the efficiencies of the detectors. They are the probabilities that one particle is detected given that the other particle is detected.

About timing: some experiments use pulsed lasers or have some otherwise pre-determined or externally determined timing schedule. There are fixed time slots. In each time slot there is one setting in force in each wing of the experiment. During that time interval there is at most one event. The outcome per time slot, in each wing, is -1, 0 or +1. See also Bell's "Bertlmann's socks" on "event-ready detectors" for a variant on this kind of scheme.

However in most experiments, the settings are being varied rapidly and randomly, and from time to time particles are registered in the two wings of the experiment. We note down the time of each event. Two events, one in each wing of the experiment, are chosen by the experimenter to represent a pair if they occur within a certain small time interval of one another. This is called the coincidence window. The width of the window is fixed in advance, but the locations of the windows which give us pairs are determined by the events themselves.

One can somewhat more easily violate CHSH by a large amount in event-based local-realistic simulations of experiments of the second kind than in those of the first kind.
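In code, one simple version of the second, event-determined pairing rule could look like this. This is a greedy nearest-match sketch of my own; real experiments differ in the details.

Code:
    # tA, tB: sorted detection times at the two stations; w: window width.
    # Pair each event at A with the next unused event at B within w.
    pair_by_window <- function(tA, tB, w) {
      iA <- integer(0); iB <- integer(0); j <- 1
      for (i in seq_along(tA)) {
        while (j <= length(tB) && tB[j] < tA[i] - w) j <- j + 1  # skip early B events
        if (j <= length(tB) && abs(tB[j] - tA[i]) <= w) {
          iA <- c(iA, i); iB <- c(iB, j)   # record the pair
          j <- j + 1                       # consume the B event
        }
      }
      list(iA = iA, iB = iB)               # indices of the paired events
    }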
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Computer Simulation of EPR Scenarios

Postby minkwe » Sat Feb 15, 2014 8:13 pm

gill1109 wrote:The percentages I quoted are not the efficiencies of the detectors. They are the probabilities that one particle is detected given that the other particle is detected.

My point was that it is not at all about detection but about data matching at the end. If you throw away detected particles at the end because you could not reliably match them, then even though you may know that the particle's sibling was not matched, you may not know for sure that it was not detected, rather than simply not considered in your analysis. So really it is not about detection but about consideration in the calculation of the correlations. Isn't that the case?

About timing: some experiments use pulsed lasers or have some otherwise pre-determined or externally determined timing schedule. There are fixed time slots. In each time slot there is one setting in force in each wing of the experiment. During that time interval there is at most one event. The outcome per time slot, in each wing, is -1, 0 or +1. See also Bell's "Bertlmann's socks" on "event-ready detectors" for a variant on this kind of scheme.

By "event", do you mean emission event, or detection event. They are not necessarily the same, unless you are making some hidden assumptions about the process. Also, these pre-determined timing schedule, has that been done in any experiment or is it merely gedanken? Has any experiment been done in which we know exactly when a single particle is emitted, or by pulsed lasers do you mean we know when a pulse of particles is measured. Significant difference don't you think?

However in most experiments, the settings are being varied rapidly and randomly, and from time to time particles are registered in the two wings of the experiment. We note down the time of each event. Two events, one in each wing of the experiment, are chosen by the experimenter to represent a pair if they occur within a certain small time interval of one another. This is called the coincidence window. The width of the window is fixed in advance, but the locations of the windows which give us pairs are determined by the events themselves. One can somewhat more easily violate CHSH by a large amount in event-based local-realistic simulations of experiments of the second kind than in those of the first kind.

Doesn't this also make an assumption about the time of arrival of the particles? What reason do we have to assume that the time differences will be fixed?
The reason for my questioning has to do with the issue of what we think loopholes mean. Why can't it be that nature just works like that, and not all particles can be detected at all angles (detection loophole), and time differences are not fixed (coincidence time loophole)?
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: Computer Simulation of EPR Scenarios

Postby minkwe » Sat Feb 15, 2014 9:11 pm

Richard,
Let's say I write a simulation with a source producing particle pairs and two stations recording output files. You can pick any angle setting or range of angle settings to randomly switch between at the stations. You can save and restore your random number seeds as much as you like for the stations used in picking the settings, but you can't control the hidden randomness in the particles, the stations or the source. Each station produces an output file with time-stamped results: on each line we have the time of measurement, the angle setting and the outcome. One such file for each station. No other information is provided. Will that be a reasonable representation of a feasible EPR-type experiment or not? What is the loophole in such a simulation, in your opinion?
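To make the file format concrete, here is a hypothetical sketch of one station's loop (in R, to keep with the theme of this thread; run_station and outcome_fn are names I am inventing, and the hidden randomness you cannot control lives inside outcome_fn):

Code:
    # Each station writes one line per measurement: time, setting, outcome.
    run_station <- function(n, settings, outcome_fn, file) {
      t <- cumsum(rexp(n))                      # time-stamps of the measurements
      s <- sample(settings, n, replace = TRUE)  # randomly switched angle setting
      o <- outcome_fn(s, t)                     # +1 / -1, or 0 for a no-show
      write.table(data.frame(time = t, setting = s, outcome = o),
                  file, row.names = FALSE, quote = FALSE)
    }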

Now let us say that our source can generate two files representing the set of particle pairs sent to each arm, and you can then run your station to read one of the files, apply its randomly selected settings, and then write the results to the output file. Then we decide to do the following test:

Case A: One set of source particle pairs ONLY (i.e., the same two files will be used for all tests).
- We run Alice's station to measure the A column of our spreadsheet, then repeat the measurement on the same set to obtain our A' column.
- We run Bob's station to measure the B column of the spreadsheet and then repeat the measurement on the same set to obtain the B' column.
- We then do our analysis, which must include a procedure to match the data into pairs and then calculate the correlators.
Do you believe that it is possible to violate the CHSH in such a case, even if your answer to the loophole question above applies?

Case B: In addition to the set of pairs in Case A, we have 3 other sets of pairs: a total of 8 files, which include the first two files.
- We repeat the measurements as in Case A, except that each time we measure from a different set of particles for that arm of the experiment (no set used more than once).
- We use exactly the same matching/analysis procedure used in Case A.
Do you believe that it is possible to violate the CHSH, even if your answer to the loophole question above applies?
Do you expect to see any differences between the results of Case A and Case B? If so, what does that say about your claim that results from disjoint sets should be equivalent to those from a single set if we average over a large number of runs?
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: Computer Simulation of EPR Scenarios

Postby gill1109 » Sun Feb 16, 2014 6:13 am

Dear Minkwe

That's two long postings both with a long list of questions!

Here are the short answers to the questions with short answers:

By "event" I meant "detection event".

I think of two broad classes of experiments. In one of them, "emission times" are somehow controlled and/or observed. They are well spread out. There is an emission; then we wait for a detection at each station, within some reasonable length of time. This is what I would call a "clocked" experiment. Experiments have been done with pulsed lasers, which is some kind of surrogate for a clocked experiment. Experiments have been done on ions with what are called "event-ready detectors". As John Bell explained in "Bertlmann's socks", one would like to have *three* particles emitted at the source. Two of them fly away to distant detectors. The third is registered by a detector close by the source. One calculates the correlations between Alice and Bob's outcomes, restricted to those occasions when Cuthbert did register an event.

In the other class of experiments, much more common in practice, there is neither control nor registration of emissions. We know (or rather, guess) that there has been an emission, when we get a detection at Alice or Bob's site. Especially, when we get detections within a short time interval of one another at both Alice and Bob's site.

Now actually the second class of experiments can be correctly analysed in the same way as the first class, simply by superimposing on the data a fixed lattice of short time intervals. Two events at the two stations are taken to be a pair when they both occur in the *same* interval and there are no other events in the same interval. If there is more than one event at one station in one interval, the interval is taken to be a non-event.

This way we will accidentally break some of the true pairs when they happen to have arrival times just before and just after the boundary between two time intervals. On the other hand we rule out the "coincidence loophole", so we only need a lower coincidence rate in order to conclude that no local realistic model fits the data. But we have also lowered the number of paired events, so our error bars get wider still.
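In code, the fixed-lattice rule might look like this (a sketch, with variable names of my own):

Code:
    # tA, tB: detection times at the two stations; w: lattice interval width.
    # Keep only lattice intervals containing exactly one event on each side.
    pair_by_lattice <- function(tA, tB, w) {
      binA <- floor(tA / w); binB <- floor(tB / w)
      okA  <- names(which(table(binA) == 1))
      okB  <- names(which(table(binB) == 1))
      common <- intersect(okA, okB)
      list(iA = match(common, as.character(binA)),   # indices of paired events
           iB = match(common, as.character(binB)))
    }

    # usage, with outcome vectors oA, oB aligned with tA, tB:
    # p <- pair_by_lattice(tA, tB, w)
    # mean(oA[p$iA] * oB[p$iB])   # correlation over the surviving pairs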
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Computer Simulation of EPR Scenarios

Postby gill1109 » Sun Feb 16, 2014 6:31 am

minkwe wrote:What reason do we have to assume that the time differences will be fixed?
The reason for my questioning has to do with the issue of what we think loopholes mean. Why can't it be that nature just works like that, and not all particles can be detected at all angles (detection loophole), and time differences are not fixed (coincidence time loophole)?


Loopholes are "escape clauses". Experimentalists do these experiments in the hope of seeing a big violation of CHSH and with the aim of therefore being able to reject local realism. The underlying math/logic is that a violation of CHSH implies that we must either reject locality (local relativistic causality), or realism (counterfactual definiteness), or freedom (no-conspiracy). However this logical implication only holds within a certain basic framework and that framework in particular explicitly assumes binary outcomes and implicitly assumes a clocked or event-ready experiment.

So if an experiment does not adhere to the basic framework, we need to do some more work and/or make supplementary assumptions.

The detection loophole arises when not all particles are measured. The outcome is not binary. We can make it binary (that is what Clauser-Horne does) or we can use a ternary outcome generalized Bell inequality (chained CHSH inequality) or we can use some other technique, or we can make an assumption: the "fair sampling assumption". We simply assume the problem away.

The locality loophole arises when the particles are measured so close to one another that each one could in principle easily "sense" how the other one is being measured.

The freedom (conspiracy) loophole arises when the measurement settings are fixed in advance, or are kept constant for a long time, so that each particle could easily "guess" how the other is going to be measured.

The whole theory has been developed with a clocked experiment in mind, but hardly any experiments are clocked, pulsed, or use event-ready detectors. It was only a few years ago that Larsson and Gill for the first time showed that this was potentially serious. The "coincidence loophole" which arises when you allow particles to determine by their arrival times relative to one another, whether or not they are a pair, is a whole lot more disastrous than the detection loophole.

In the most recent experiments, one is using the Clauser-Horne inequality (this is basically just CHSH with the outcomes "0" and "-1" merged), and one is taking explicit account of the coincidence loophole, by imposing a fixed lattice of time intervals on top of the data records of times and types of events and setting values. The empirical results were successful (violation of the bound set by local realism). Unfortunately the most recent experiments (last year, both by Geneva-Boulder and by Vienna) did not yet have the good separation between the measurement stations and the rapid random switching of settings.

But the experimentalists could justly claim that, for the first time, all the known loopholes have been closed in experiments on the polarization of photons, albeit not yet simultaneously (i.e., in one and the same experiment). Both groups claimed to be first. Actually, Vienna was first, but they needed to modify their data analysis in the light of the coincidence loophole, and they only did that after Colorado was in.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden

Re: Computer Simulation of EPR Scenarios

Postby minkwe » Sun Feb 16, 2014 9:10 am

Thanks Richard,
Though I think you missed a crucial question. You say the coincidence loophole is potentially serious. I'm trying to get you to tell me whether you believe it is possible in principle to avoid it. That is why I asked you why you would assume that the arrival time differences should be fixed within a narrow window. What physical justification do you have for assuming that? And that is why I also suggested that the 88% threshold is not a threshold for detection but one for consideration. Clearly this is not an experimental limitation. Once you combine these two issues, it starts to appear that it might not be possible to avoid them in any experiment.

What about my description of the simulation? Is that a fair test?
minkwe
 
Posts: 1441
Joined: Sat Feb 08, 2014 10:22 am

Re: Computer Simulation of EPR Scenarios

Postby gill1109 » Sun Feb 16, 2014 12:31 pm

minkwe wrote:Thanks Richard,
Though I think you missed a crucial question. You say the coincidence loophole is potentially serious. I'm trying to get you to tell me whether you believe it is possible in principle to avoid it. That is why I asked you why you would assume that the arrival time differences should be fixed within a narrow window. What physical justification do you have for assuming that? And that is why I also suggested that the 88% threshold is not a threshold for detection but one for consideration. Clearly this is not an experimental limitation. Once you combine these two issues, it starts to appear that it might not be possible to avoid them in any experiment.

What about my description of the simulation? Is that a fair test?


Minkwe, regarding your first question: of course the coincidence loophole can be "erased". One simply imposes an externally defined, fixed lattice of coincidence windows onto the experiment. The experiment is the same; the analysis of the data is different. Secondly, it is up to the quantum optics physicist to tell us what is a reasonable window length. Same experiment, same experimental data, different windows -> different results. The shorter the window, the harder it is for two events to "coincide", but if they do, we can be more confident that they belonged to the same emission. As you decrease the window the observed correlation can be expected to increase, but at the same time it is based on fewer events, so its statistical variation gets larger. Jan-Åke Larsson wrote beautiful Python programs allowing the user to control the width of the window with a slider and directly see the effect on the correlations (Weihs' data).

Using particle-detection-time determined windows probably gives you better correlations (you get more of the good pairs) and smaller standard errors (more pairs), but on the other hand the opportunities for local realism to generate the same correlations are greater, so the results are less convincing.
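In the spirit of that demo, though on toy data rather than Weihs' data, and reusing the fixed-lattice pairing sketch from my earlier post for simplicity, one can sweep the window width and watch the trade-off:

Code:
    # widths: candidate window widths; tA, tB: detection times;
    # oA, oB: outcome vectors aligned with tA, tB.
    sweep_windows <- function(tA, oA, tB, oB, widths) {
      t(sapply(widths, function(w) {
        p <- pair_by_lattice(tA, tB, w)   # pairing sketch from my earlier post
        c(width = w,
          pairs = length(p$iA),           # fewer pairs as the window shrinks
          corr  = mean(oA[p$iA] * oB[p$iB]))
      }))
    }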

The threshold of 88% is a mathematical/logical threshold. Once you are above this threshold, and if you are still seeing the cosine curve, then your experiment can't be explained in a local realist way.

For your second question, please give me some more time.
gill1109
Mathematical Statistician
 
Posts: 2812
Joined: Tue Feb 04, 2014 10:39 pm
Location: Leiden
