Numerical Validation of Christian’s Local-realistic Model

Foundations of physics and/or philosophy of physics, and in particular, posts on unresolved or controversial issues

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Joy Christian » Thu May 28, 2015 12:30 am

Schmelzer wrote:But this state preparation procedure is presumed to be successfully finished before Alice and Bob make their decisions about what to measure. Thus Nature is, in this theory, also obliged to prepare the state without knowing a or b; otherwise we simply have the superdeterminism loophole.

Wrong again!

Superdeterminism and other loopholes have nothing to do with this. Physical space respects the topology of S^3, not that of R^3 as usually assumed. That is all there is to the correlation -a.b. All that Nature is "obliged" to know is the topology of S^3. The rest follows, as explained in this paper: http://arxiv.org/abs/1405.2355.
Joy Christian
Research Physicist
 
Posts: 2793
Joined: Wed Feb 05, 2014 4:49 am
Location: Oxford, United Kingdom

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Schmelzer » Thu May 28, 2015 2:29 am

Joy Christian wrote:
Schmelzer wrote:But this state preparation procedure is presumed to be successfully finished before Alice and Bob make their decisions about what to measure. Thus Nature is, in this theory, also obliged to prepare the state without knowing a or b; otherwise we simply have the superdeterminism loophole.

Superdeterminism and other loopholes have nothing to do with this. Physical space respects the topology of S^3, not that of R^3 as usually assumed. That is all there is to the correlation -a.b. All that Nature is "obliged" to know is the topology of S^3. The rest follows, as explained in this paper: http://arxiv.org/abs/1405.2355.

So why, in this case, do the above-mentioned simulations use knowledge of a and b to identify which states can exist?

I would recommend that you write a variant which makes clear which states exist without using information about the experimenters' choices a and b, which are presumed to be unknown (modulo superdeterminism) at the time the states are prepared. This would avoid the need for such explanations.

Nobody from the "Bell camp" has any doubt that if the model is allowed to use information about a and b to decide whether "the state exists", then the Bell inequalities (BI) can be violated.
Schmelzer
 
Posts: 123
Joined: Mon May 25, 2015 2:44 am

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Joy Christian » Thu May 28, 2015 2:58 am

Schmelzer wrote:So why, in this case, do the above-mentioned simulations use knowledge of a and b to identify which states can exist?

The simulation does not use the knowledge of the specific a and b to be freely selected by Alice and Bob at the time of their measurements. The simulation simply imposes the global topology of S^3 on all vectors and prepares the initial state (v, g) accordingly. The topology of S^3 constrains all vectors --- all a, all b, and all v.
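For concreteness, a point of S^3 can be represented as a unit quaternion, and a standard way to sample S^3 uniformly is to normalize 4-dimensional Gaussian vectors. The sketch below is only a generic illustration of what a constraint "to S^3" looks like numerically; it is not the simulation's own preparation of the state (v, g):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_s3(n):
    # Uniform points on S^3, represented as unit quaternions:
    # normalize 4D standard-normal vectors onto the unit 3-sphere.
    q = rng.standard_normal((n, 4))
    return q / np.linalg.norm(q, axis=1, keepdims=True)

pts = sample_s3(10000)
# Every sampled point satisfies the S^3 constraint |q| = 1.
print(np.allclose(np.linalg.norm(pts, axis=1), 1.0))  # True
```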

Schmelzer wrote:I would recommend that you write a variant which makes clear which states exist without using information about the experimenters' choices a and b, which are presumed to be unknown (modulo superdeterminism) at the time the states are prepared. This would avoid the need for such explanations.

This is a good recommendation, but the existing simulation already accomplishes this. To be sure, a better simulation is worth considering, but this one is already good enough. As noted above, it produces the initial state (v, g) without using any information about the specific a and b to be freely selected by the experimenters.

I am not a programmer. I know very little about code and programming. Fortunately, there are several knowledgeable people who are working to improve the simulation.
Joy Christian
Research Physicist
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Schmelzer » Thu May 28, 2015 4:59 am

Joy Christian wrote:The simulation does not use the knowledge of the specific a and b to be freely selected by Alice and Bob at the time of their measurements. The simulation simply imposes the global topology of S^3 on all vectors and prepares the initial state (v, g) accordingly. The topology of S^3 constrains all vectors --- all a, all b, and all v.

In this case, it would be quite trivial to modify the program in such a way that it looks nice from this point of view.

Choose, together with a and b, two other completely independent directions c and d. Then run the test that currently uses a and b with c and d instead. If it is as you describe, and all of this is simply the preparation of the initial state (v, g) without knowledge of (a, b), then using arbitrary different directions (c, d) in place of (a, b) should be fine.

Joy Christian wrote:
Schmelzer wrote:I would recommend that you write a variant which makes clear which states exist without using information about the experimenters' choices a and b, which are presumed to be unknown (modulo superdeterminism) at the time the states are prepared. This would avoid the need for such explanations.

This is a good recommendation, but the existing simulation already accomplishes this. To be sure, a better simulation is worth considering, but this one is already good enough. As noted above, it produces the initial state (v, g) without using any information about the specific a and b to be freely selected by the experimenters.

Sorry, but the actual program does use the information about a and b. Replace a, b with random c, d in the preparation phase and this objection disappears. Then compute the results for Alice and Bob, using different random numbers to create their settings. Given that the preparation has succeeded for some c, d (probably different ones), and in combination with your claim that the states are produced without using information about a and b, this should be unproblematic: for every a, b one should then obtain a value of +1 or -1, but never 0.

Joy Christian wrote:I am not a programmer. I know very little about code and programming. Fortunately, there are several knowledgeable people who are working to improve the simulation.

The problem is not programming; modifying this code would be simple if your claim were true. In any case, I think you will find people who can solve this little problem.

I would say (without having used this particular programming language, so there may be trivial syntax errors) that one would have to do something like the following:
-----------------
Do[
 aliceAngle = RandomReal[{0, 2 π}];
 aliceDeg[[i]] = aliceAngle/Degree;
 bobAngle = RandomReal[{0, 2 π}];
 bobDeg[[i]] = bobAngle/Degree;
 aliceDet[[i]] = test[aliceAngle, eLeft, λ];
 bobDet[[i]] = test[bobAngle, eRight, λ],
 {i, trials}]
------------------
Copy this part and replace all occurrences of "alice" and "bob" with "charly" and "diana" (or similar).
------------------
Do[
 θ = Round[aliceDeg[[i]] - bobDeg[[i]]];
 aliceD = aliceDet[[i]]; bobD = bobDet[[i]];
 charlyD = charlyDet[[i]]; dianaD = dianaDet[[i]];
 If[aliceD == 1 && bobD == 1, nPP[[θ]]++];
 If[aliceD == 1 && bobD == -1, nPN[[θ]]++];
 If[aliceD == -1 && bobD == 1, nNP[[θ]]++];
 If[aliceD == -1 && bobD == -1, nNN[[θ]]++];
 If[(aliceD == 0 || bobD == 0) && ! (charlyD == 0 || dianaD == 0), error[[θ]]++],
 {i, trials}]
(Modulo the exact brackets and operators of this language, of course.) If your claim is correct, using charlyD and dianaD would be sufficient to identify whether the state is valid; charlyD and dianaD being nonzero would indicate a valid state. If aliceD or bobD nonetheless comes out zero, your claim is invalid: the test has used the specific information about a and b to reject the state, and not some general property of the state.

Looking at the formulas, I would suspect that this modified program will give a lot of errors, but try it.
Schmelzer
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Joy Christian » Thu May 28, 2015 6:43 am

I will let Fred or Ben (or someone else) work on this first. In the meantime, here is a simulation which is far less ambiguous: http://rpubs.com/jjc/16415.
Joy Christian
Research Physicist
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby FrediFizzx » Thu May 28, 2015 12:14 pm

What Ilja is suggesting is just like what Albert Jan did with the separation of the loops, which Michel got to work in Python. I will try changing a and b in the first "good" loops to c and d to see what happens.
FrediFizzx
Independent Physics Researcher
 
Posts: 2905
Joined: Tue Mar 19, 2013 7:12 pm
Location: N. California, USA

Re: Numerical Validation of Christian’s Local-realistic Model

Postby FrediFizzx » Thu May 28, 2015 12:35 pm

Yep, it works. Here is the Python code by Michel changing a and b in the first set of loops to c and d.

Code:
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # needed for projection='3d' on older matplotlib
from matplotlib import cm
import numpy

def inner1d(a, b):
    # Row-wise inner product. numpy.core.umath_tests.inner1d has been removed
    # from recent NumPy releases, so compute it directly instead.
    return numpy.sum(a * b, axis=-1)

def random_vec3d(lo=0, hi=2*numpy.pi, size=1):
    v = numpy.zeros((size,3))
    theta = numpy.random.uniform(lo, hi, size=size)
    v[:,2] = numpy.random.uniform(-1, 1, size=size) # z
    cs = numpy.sqrt(1-v[:,2]**2)
    v[:,1] = cs*numpy.sin(theta) # y
    v[:,0] = cs*numpy.cos(theta) # x
    return v

M = 100000  # number of trials; must be an integer for the array shapes below
N = 33

angles = numpy.linspace(0, 2*numpy.pi, N)
corrs = numpy.zeros((N, N))
u = random_vec3d(size=M)
s = numpy.random.uniform(0, numpy.pi, size=M)
p = (-1 + (2/(numpy.sqrt(1 + (3 * s/numpy.pi)))))
v = {}
for alpha in angles:
    c = numpy.array([numpy.cos(alpha), numpy.sin(alpha), 0.0])
    uc = inner1d(u, c)
    for beta in angles:
        d = numpy.array([numpy.cos(beta), numpy.sin(beta), 0.0])
        ud = inner1d(u, d)

        good = (numpy.abs(uc) > p) & (numpy.abs(ud) > p)
        v[(alpha,beta)] = u[good]

for i, alpha in enumerate(angles):
    a = numpy.array([numpy.cos(alpha), numpy.sin(alpha), 0])
    for j, beta in enumerate(angles):
        b = numpy.array([numpy.cos(beta), numpy.sin(beta), 0])

        _v = v[(alpha,beta)]
        va = numpy.sign(inner1d(_v, a))
        vb = numpy.sign(-inner1d(_v, b))
        corrs[i,j] = (va*vb).mean()

X, Y = numpy.meshgrid(numpy.degrees(angles), numpy.degrees(angles))
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, corrs, rstride=1, cstride=1, cmap=cm.coolwarm)
plt.show()

And here is the result. An even sexier surface plot. :D

[Image: 3D surface plot of the correlations]
FrediFizzx
Independent Physics Researcher
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Schmelzer » Thu May 28, 2015 1:19 pm

Joy Christian wrote:I will let Fred or Ben (or someone else) work on this first. In the meantime, here is a simulation which is far less ambiguous: http://rpubs.com/jjc/16415.


I do not understand (17) of http://arxiv.org/pdf/0806.3078v2.pdf. As I understand it, sign is a function which is always +1 or -1 (OK, possibly with the exception of 0, where sign may be defined as 0). But "∼ ±1"?

I also do not understand the difference between (1) and (2), which lead to (3), on the one hand, and (16), which is supposed to lead to -a.b, on the other.
Schmelzer
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Joy Christian » Thu May 28, 2015 1:46 pm

Schmelzer wrote:I do not understand (17) of http://arxiv.org/pdf/0806.3078v2.pdf. As I understand it, sign is a function which is always +1 or -1 (OK, possibly with the exception of 0, where sign may be defined as 0). But "∼ ±1"?

I also do not understand the difference between (1) and (2), which lead to (3), on the one hand, and (16), which is supposed to lead to -a.b, on the other.

There is a more recent paper that may answer some of your questions. Please jump to the last appendix of this paper: http://arxiv.org/abs/1501.03393.
Joy Christian
Research Physicist
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Schmelzer » Thu May 28, 2015 1:49 pm

FrediFizzx wrote:Yep, it works. Here is the Python code by Michel changing a and b in the first set of loops to c and d.


Fine; unfortunately I am even less firm in Python, so I can hardly comment. But wait:

I see "random" only in connection with theta and v, not in connection with the angles. Since the angles do not appear to be independent random sequences, simply renaming them changes nothing.
Schmelzer
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Schmelzer » Thu May 28, 2015 4:27 pm

Joy Christian wrote:I will let Fred or Ben (or someone else) work on this first. In the meantime, here is a simulation which is far less ambiguous: http://rpubs.com/jjc/16415.


I don't know who wrote this code, but I would suggest that you not trust him. The code contains clear attempts to fake properties it does not have.

for (i in 1:K) {
  alpha <- angles[i]
  a <- c(cos(alpha), sin(alpha), 0)  ## Measurement direction 'a'

  for (j in 1:K) {
    beta <- angles[j]
    b <- c(cos(beta), sin(beta), 0)  ## Measurement direction 'b'

    ua <- colSums(u * a)  ## Inner products of 'u' with 'a'
    ub <- colSums(u * b)  ## Inner products of 'u' with 'b'

    corrs[i] <- sum(sign(-ua) * sign(ub))/N

    # corrs[i] <- sum(sign(-ua))/N
  }
}

This looks like two loops, varying over alpha and beta, followed by computation of the correlation. But the loop over beta is a complete fake. The correlation for alpha and beta is computed and written into corrs[i], only to be overwritten in the next step. So if I wanted to economize computer time without changing the result, I could rewrite the code in the following way:

for (i in 1:K) {
  alpha <- angles[i]
  a <- c(cos(alpha), sin(alpha), 0)  ## Measurement direction 'a'
  ua <- colSums(u * a)  ## Inner products of 'u' with 'a'

  ## no loop necessary: for (j in 1:K) {
  beta <- angles[K]  ## = 360 degrees
  ## can be simplified: b <- c(cos(beta), sin(beta), 0)
  b <- c(1, 0, 0)
  u0 <- colSums(u * b)  ## Inner products of 'u' with 'b'; the name u0 is more honest than ub
  corrs[i] <- sum(sign(-ua) * sign(u0))/N
  # corrs[i] <- sum(sign(-ua))/N
}
Note the commented-out line, which is clearly essentially the same as what remains after my simplification. Thus the simplified code is, essentially, the original, and the unnecessary code was added later to fake a computation for arbitrary alpha and beta.

The point is that if beta is known, as it is in this simulation (beta = 360), then creating a "local" realistic model is no longer a problem.

The second part of the code is the same trick with a faked loop over alpha.
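The overwrite can be seen in a few lines. This is a hypothetical stand-in (not the R code itself), with the correlation replaced by the value of beta so that the surviving entry is obvious: corrs is indexed only by i, so each pass of the inner loop overwrites the previous one.

```python
import numpy as np

K = 5
angles = np.linspace(0, 360, K)   # 0, 90, 180, 270, 360
corrs = np.zeros(K)

for i in range(K):
    for j in range(K):
        # stand-in for "correlation at (alpha_i, beta_j)": note the index is i only
        corrs[i] = angles[j]

# Only the value for the last beta (360) survives in every slot.
print(np.all(corrs == angles[-1]))  # True
```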
Schmelzer
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby FrediFizzx » Thu May 28, 2015 10:27 pm

Schmelzer wrote:
FrediFizzx wrote:Yep, it works. Here is the Python code by Michel changing a and b in the first set of loops to c and d.


Fine; unfortunately I am even less firm in Python, so I can hardly comment. But wait:

I see "random" only in connection with theta and v, not in connection with the angles. Since the angles do not appear to be independent random sequences, simply renaming them changes nothing.


I'm not sure what you mean by "theta" here. But you are right; changing the names doesn't do the trick. However, the way that simulation is set up, it doesn't work to use a different angle set for the second set of loops. We will have to try something different.
FrediFizzx
Independent Physics Researcher
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Joy Christian » Thu May 28, 2015 10:35 pm

Schmelzer wrote:The point is that if beta is known, as it is in this simulation (beta = 360), then creating a "local" realistic model is no longer a problem.

beta is known to whom? It is certainly not known to Alice what angle Bob has chosen. As far as Alice is concerned, Bob may not even exist.

See also discussion in this thread: viewtopic.php?f=6&t=69#p3225.
Joy Christian
Research Physicist
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Schmelzer » Fri May 29, 2015 4:38 am

Joy Christian wrote:
Schmelzer wrote:The point is that if beta is known, as it is in this simulation (beta = 360), then creating a "local" realistic model is no longer a problem.

beta is known to whom? It is certainly not known to Alice what angle Bob has chosen.


beta is known to the programmer, who has designed the program in such a way that only the last value, beta = 360, survives the computation: for all other values of beta the result is computed (giving the nice, correct-looking formula), but then simply overwritten in the next loop iteration and ignored. In the second part he knows, by the same trick, that only the result for alpha = 360 remains.

This knowledge would allow the writer to obtain the two nice pictures by a careful special choice of the point distribution: he could design a nice result for beta = 360, but probably not for all other values of beta. Whether the chosen point distribution is a special choice for beta = 360 can easily be tested by running the same program with a different end value for the loop. If, say, instead of
for (j in 1:K) { ... }
one uses
for (j in 1:125) { ... } # or anything else < 360
then the result which remains would be the value for beta = 125 degrees.
Schmelzer
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Q-reeus » Sat May 30, 2015 9:22 pm

Schmelzer wrote:beta is known to the programmer, who has designed the program in such a way that only the last value, beta = 360, survives the computation: for all other values of beta the result is computed (giving the nice, correct-looking formula), but then simply overwritten in the next loop iteration and ignored. In the second part he knows, by the same trick, that only the result for alpha = 360 remains....

A long pause follows. Well, Joy, Fred: is there or is there not agreement that Schmelzer's finding is valid, and thus fatal to the current simulations?
Q-reeus
 
Posts: 314
Joined: Sun Jun 08, 2014 12:18 am

Re: Numerical Validation of Christian’s Local-realistic Model

Postby FrediFizzx » Sat May 30, 2015 9:45 pm

Not fatal at all; just a gap in Ilja's understanding of the complete-states concept. It's not easy to comprehend. We will try to explain it better.
FrediFizzx
Independent Physics Researcher
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Q-reeus » Sat May 30, 2015 10:01 pm

FrediFizzx wrote:Not fatal at all; just a gap in Ilja's understanding of the complete-states concept. It's not easy to comprehend. We will try to explain it better.

Great; I very much want Joy's endeavor to be found true. It would mean avoiding all the nasty 'magical' implications that, imo, standard QM interpretations carry, as per my first post here:
http://www.sciphysicsforums.com/spfbb1/viewtopic.php?f=6&t=70.
Notwithstanding that, I continue to have grave doubts about an essential linkage in Joy's 7-sphere picture to a postulated intrinsic spacetime torsion, which evidently manifests only in certain QM entangled/'entangled' states, yet afaik nowhere in any classical physics scenario. The final arbiter will, of course, be the results of experiments.
Q-reeus
 

Re: Numerical Validation of Christian’s Local-realistic Model

Postby Ben6993 » Thu Jun 04, 2015 4:48 am

I have read that there is another cubeland criticism of Fred's GA program: the correlation routine needs the sign of the trivector to be inputted within the correlation routine. That does not seem too difficult a criticism to answer, because the correlation is computed within a simulation of a mathematical model (though not a simulation of a real, particle-at-a-time experiment), and the hidden variables and the trivector sign are needed for each term separately in the summation. The criticism uses this supposed lapse in the calculation of a formula to suggest that the program ought to be recast as a simulation of a real experiment instead. The implication is that the GA program trims the data horizontally (or continuously) while the old R programs trimmed the data vertically (or discretely), and it is somewhat easier to point a finger at discrete trimming in the simulation of a real experiment.

Simple random data are points in a 3D cube. Joy's model needs points on a 3D sphere, (somehow) allowing for the double cover. The algorithm in R (and GA?) uses points on a sphere with a single cover. So I can see that there may need to be some valid trimming of the points on a single-cover sphere to get the "good" indexed data. Has there been any analysis of the positions of the 'good' or valid data pairs? Or does this fall back to patterns on the single-covered chaotic balls from a different model? Is it possible to pre-select valid pairs of data and then restrict the random choice of pairs to them, rather than use a "good" index calculated on the hoof, i.e. so as to have a data set or algorithm called, say, ValidPairsOfPointsOnASphere?

A different point concerns symmetry.
The two possibilities of pair production are:
either electron1 & positron2 where λ = -1
or electron3 & positron4 where λ = +1

I have asked this before and have been told that I was wrong, but if, say, the model were:
either electron1 (λ = -1) & positron2 (λ = +1)
or electron3 (λ = +1) & positron4 (λ = -1)
then the lambda would distinguish matter from antimatter and simultaneously determine particle chirality.
Also, there could be matter and antimatter in the micro pairs, but the macro observer and apparatus would be in the world dominated by matter and would see the outcomes from the perspective of matter. (And I could even more readily believe that the electron and positron were in mirror-image spaces, given their time-reversed paths in Feynman diagrams.) The macro choice of εijk sign would already have been made.

What I have been struggling to see in a common-sense way is how Alice and Bob can have any faith at all in their measurements if macro bodies inhabit a double cover. If Bob had risen out of bed on the wrong side that morning, would all his detector outputs be reversed? Well, it is the detectors that count, not Bob's personal movements, but the issue is still the same. And, again a question I have asked before: why are the detectors set at angles which do not reflect the double cover of space, i.e. why are the angles not set from 0 to 4π? If space has a double cover, why don't we set the detectors accordingly? (Or are they set thus?)
Ben6993
 
Posts: 287
Joined: Sun Feb 09, 2014 12:53 pm
