Exploratory Analysis of Brexit Vote
On June 23, 2016, a referendum on whether the UK should exit the European Union was held. The UK Electoral Commission announced the final results at about 07:20 London time (06:20 UTC) on the 24th.
A group of psi researchers at the combined meeting of the SSE and PA in Boulder, CO learned that the result was likely a vote to leave, and proposed that it would be an appropriate global event to assess. We agreed that the time period to analyse would begin at 9 pm Mountain Time and continue for 8 hours. This is 03:00 to 11:00 UTC on the 24th. The first figure below shows this period, and the second shows the full 24-hour UTC day, with the time of the announcement of final results (07:20 London time) marked. The preplanned analysis period can be read from the UTC time scale.
Specific Hypothesis and Results
The GCP formal series is finished (at 500 events) and this event is not part of that dataset, but the 8-hour period analysis can be regarded as a proper test of the GCP hypothesis. The result is a Chisquare of 29029 on 28800 df, for p = 0.167 and Z = 0.955. For comparison, the average Z-score across the formal dataset is 0.33.
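As a rough check on the arithmetic, a chisquare on many degrees of freedom is well approximated by a normal distribution with mean df and variance 2·df, so the reported Z can be recovered from the chisquare and df. The sketch below is illustrative only (the function names are invented for this example, and the reported p = 0.167 presumably comes from the exact chisquare distribution, so the normal approximation lands close but not identical):

```python
import math

def chisquare_to_z(chisq, df):
    """Normal approximation: Z = (chisq - df) / sqrt(2 * df)."""
    return (chisq - df) / math.sqrt(2 * df)

def z_to_p(z):
    """One-tailed p-value for a standard normal deviate."""
    return 0.5 * math.erfc(z / math.sqrt(2))

z = chisquare_to_z(29029, 28800)
p = z_to_p(z)
# Gives Z close to the reported 0.955; p comes out near 0.17,
# slightly above the reported 0.167 from the exact distribution.
print(f"Z = {z:.3f}, p = {p:.3f}")
```

The approximation is adequate here because 28800 df makes the chisquare distribution very nearly normal.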
The following graph is a visual display of the statistical result. It shows the second-by-second accumulation of small deviations of the data from what’s expected. Our prediction is that deviations will tend to be positive, and if this is so, the jagged line will tend to go upward. If the endpoint is positive, this is evidence for the general hypothesis and adds to the bottom line. If the endpoint is outside the smooth curve showing 0.05 probability, the deviation is nominally significant. If the trend of the cumulative deviation is downward, this is evidence against the hypothesis, and is subtracted from the bottom line. For more detail on how to interpret the results, see The Science and related pages, as well as the standard caveat below.
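The construction of such a cumulative deviation trace can be sketched as follows. This is an illustrative simulation under the null hypothesis, not the GCP's actual pipeline: it assumes per-second chisquare scores with 1 df (expectation 1, variance 2) and uses a normal approximation for the one-tailed 0.05 envelope.

```python
import math
import random

random.seed(42)

seconds = 8 * 3600  # the 8-hour analysis window
# Null data: each second contributes a 1-df chisquare score.
scores = [random.gauss(0, 1) ** 2 for _ in range(seconds)]

# Accumulate the second-by-second deviations from expectation.
cumdev = []
total = 0.0
for s in scores:
    total += s - 1.0  # expectation of a 1-df chisquare is 1
    cumdev.append(total)

def envelope(t, z=1.645):
    """Approximate one-tailed p = 0.05 bound after t seconds.

    The variance of a 1-df chisquare is 2, so under the null the
    cumulative deviation has standard deviation sqrt(2 * t).
    """
    return z * math.sqrt(2 * t)

print(f"endpoint = {cumdev[-1]:.1f}, 0.05 envelope = {envelope(seconds):.1f}")
```

In this null simulation the jagged trace should wander near zero and stay inside the envelope most of the time; the prediction for real event data is an upward trend, with an endpoint beyond the envelope counting as nominally significant.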
It is important to keep in mind that we have only a tiny statistical effect, so that it is always hard to distinguish signal from noise. This means that every success might be largely driven by chance, and every null might include a real signal overwhelmed by noise. In the long run, a real effect can be identified only by patiently accumulating replications of similar analyses.