Thursday, August 31, 2017

Gravitational waves and African thunderstorm resonances / Gravitational-wave signals and correlated noise from unexpectedly strong Schumann resonance transients

Raphaël Enthoven, Jacques Perry-Salkow

(The validation of) A great discovery requires a genuinely independent analysis of data

To date, the LIGO collaboration has detected three gravitational wave (GW) events appearing in both its Hanford and Livingston detectors. In this article we reexamine the LIGO data with regard to correlations between the two detectors. With special focus on GW150914, we report correlations in the detector noise which, at the time of the event, happen to be maximized for the same time lag as that found for the event itself. Specifically, we analyze correlations in the calibration lines in the vicinity of 35 Hz as well as the residual noise in the data after subtraction of the best-fit theoretical templates. The residual noise for the other two events, GW151226 and GW170104, exhibits similar behavior. A clear distinction between signal and noise therefore remains to be established in order to determine the contribution of gravitational waves to the detected signals.

(Submitted on 13 Jun 2017 (v1), last revised 9 Aug 2017 (this version, v2))

A debate about how to sift the astrophysical wheat from the terrestrial chaff

Recent claims in a preprint by Creswell et al. of puzzling correlations in LIGO data have broadened interest in understanding the publicly available LIGO data around the times of the detected gravitational-wave events. We see that the features presented in Creswell et al. arose from misunderstandings of public data products. The LIGO Scientific Collaboration and Virgo Collaboration (LVC) have full confidence in our published results, and we are preparing a paper in which we will provide more details about LIGO detector noise properties and the data analysis techniques used by the LVC to detect gravitational-wave signals and infer their waveforms.

News from LIGO Scientific Collaboration
undated (between 7 July and 1 August 2017)
In our view, if we are to conclude reliably that this signal is due to a genuine astrophysical event, apart from chance-correlations, there should be no correlation between the "residual" time records from LIGO's two detectors in Hanford and Livingston. The residual records are defined as the difference between the cleaned records and the best GW template found by LIGO. Residual records should thus be dominated by noise, and they should show no correlations between Hanford and Livingston. Our investigation revealed that these residuals are, in fact, strongly correlated. Moreover, the time delay for these correlations coincides with the 6.9 ms time delay found for the putative GW signal itself...
During a two-week period at the beginning of August, we had a number of "unofficial" seminars and informal discussions with colleagues participating in the LIGO collaboration... Given the media hype surrounding our recent publication, these meetings began with some measure of scepticism on both sides. The atmosphere improved dramatically as our meetings progressed. 
The focus of these meetings was on the detailed presentation and lively critical discussion of the data analysis methods adopted by the two groups. While there was unofficial agreement on a number of important topics - such as the desirability of better public access to LIGO data and codes - we emphasize that no consensus view emerged on fundamental issues related to data analysis and interpretation.
In view of unsubstantiated claims of errors in our calculations, we appreciated the opportunity to go through our respective codes together - line by line when necessary - until agreement was reached. This check did not lead to revisions in the results of calculations reported in versions 1 and 2 of arXiv:1706.04191 or in the version of our paper published in JCAP. It did result in changes to the codes used by our visitors.
There are a number of in-principle issues on which we disagree with LIGO's approach. Given the importance of LIGO's claims, we believe that it is essential to establish the correlation between Hanford and Livingston signals and to determine the shape of these signals without employing templates. Before such comparisons can be made, the quality of data cleaning (which necessarily includes the removal of non-Gaussian and non-stationary instrumental "foreground" effects) must be demonstrated by showing that the residuals consist only of uncorrelated Gaussian noise. We believe that suitable cleaning is a mandatory prerequisite for any meaningful comparisons with specific astrophysical models of GW events. This is why we are concerned, for example, about the pronounced "phase lock" in the LIGO data.
James Creswell, Sebastian von Hausegger, Andrew D. Jackson, Hao Liu, Pavel Naselsky
August 21, 2017
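The lag scan at the heart of the quoted debate can be sketched on synthetic data: slide one "residual" record against the other and record the correlation coefficient at each lag. Everything below (sampling rate, burst shape, noise level, the 7 ms shift) is an illustrative assumption, not LIGO data or the authors' actual pipeline.

```python
import numpy as np

def cross_correlation_vs_lag(h, l, fs, max_lag_s=0.01):
    """Normalized cross-correlation of two records as a function of time lag."""
    max_shift = int(max_lag_s * fs)
    lags = np.arange(-max_shift, max_shift + 1)
    cc = []
    for k in lags:
        if k >= 0:
            a, b = h[k:], l[:len(l) - k]
        else:
            a, b = h[:len(h) + k], l[-k:]
        cc.append(np.corrcoef(a, b)[0, 1])
    return lags / fs, np.array(cc)

# Toy demonstration: a common transient buried in independent noise,
# delayed by 7 ms in the second "detector".
fs = 4096
rng = np.random.default_rng(0)
t = np.arange(fs) / fs
burst = np.exp(-((t - 0.5) / 0.01) ** 2) * np.sin(2 * np.pi * 100 * t)
shift = int(0.007 * fs)  # 7 ms delay
h = burst + 0.1 * rng.standard_normal(fs)
l = np.roll(burst, shift) + 0.1 * rng.standard_normal(fs)

lag, cc = cross_correlation_vs_lag(h, l, fs)
best = lag[np.argmax(cc)]  # lag (in seconds) that maximizes the correlation
```

With a genuinely common transient the correlation peaks at the 7 ms offset; the dispute is over whether LIGO's *residuals*, after template subtraction, should show any such peak at all.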

Disentangling the man-made detectors from the Earth-shaped one

As the LIGO detectors are extremely sensitive instruments, they are prone to many sources of noise that need to be identified and removed from the data. An impressive amount of effort was undertaken by the LIGO collaboration to ensure that the GW150914 signal was really the first detection of gravitational waves, with all transient noise backgrounds under good control [4, 5, 6].

It was claimed, however, in a recent publication [7] that the residual noise of the GW150914 event in LIGO's two widely separated detectors exhibits correlations that are maximized for the same 7 ms time lag as that found for the gravitational-wave signal itself. Questions about the integrity and reliability of the gravitational-wave detection were thus raised and informally discussed [8, 9]. At present it is not quite clear whether there is something unexplained in LIGO noise that may be of genuine interest. It was argued that even assuming the claims of [7] about correlated noise are true, this would not affect the 5-sigma confidence associated with GW150914 [8]. Nevertheless, in this case it would be interesting to find out the origin of this correlated noise.
Correlated magnetic fields from Schumann resonances constitute a well-known potential source of correlated noise in gravitational-wave detectors [11, 12, 13]... Schumann resonances are global electromagnetic resonances in the Earth-ionosphere cavity [14, 15]. Electromagnetic waves in the extremely low frequency (ELF) range (3 Hz to 3 kHz) are mostly confined in this spherical cavity, and their propagation is characterized by very low attenuation, which in the 5 Hz to 60 Hz frequency range is of the order of 0.5–1 dB/Mm. Schumann resonances are eigenfrequencies of the Earth-ionosphere cavity. They are constantly excited by lightning discharges around the globe. While individual lightning signals below 100 Hz are very weak, thanks to the very low attenuation the related ELF electromagnetic waves can propagate a number of times around the globe, interfere constructively at wavelengths comparable with the Earth's circumference, and create standing waves in the cavity.

Note that there exists some day-night variation of the resonance frequencies, and some catastrophic events, like a nuclear explosion, simultaneously lower all the resonance frequencies by about 0.5 Hz due to lowering of the effective ionosphere height [16]. Interestingly, a frequency decrease of comparable magnitude in the first Schumann resonance, caused by an extremely intense cosmic gamma-ray flare, was reported in [17]. Usually eight distinct Schumann resonances are reliably detected in the frequency range from 7 Hz to 52 Hz. However, five more have been detected thanks to particularly intense lightning discharges, extending the frequency range up to 90 Hz [18].
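The eigenfrequency structure of the cavity can be sketched numerically. The ideal lossless-cavity formula f_n = c/(2πa)·√(n(n+1)) (with a the Earth's radius) is textbook material; the rescaling that anchors it to the observed 7.83 Hz first mode is a rough empirical fit of my own, not a calculation from the paper quoted above.

```python
import math

C = 299_792_458.0  # speed of light, m/s
A = 6.371e6        # mean Earth radius, m

def schumann_ideal(n):
    """Eigenfrequency of mode n for an ideal (lossless) Earth-ionosphere cavity."""
    return C / (2 * math.pi * A) * math.sqrt(n * (n + 1))

def schumann_empirical(n):
    """Same mode structure rescaled so that n = 1 matches the observed 7.83 Hz.

    The real cavity is lossy, which lowers all frequencies below the ideal values.
    """
    return 7.83 * math.sqrt(n * (n + 1) / 2)

for n in range(1, 9):
    print(f"n={n}: ideal {schumann_ideal(n):5.1f} Hz, rescaled {schumann_empirical(n):5.1f} Hz")
```

The ideal first mode comes out near 10.6 Hz versus the observed 7.83 Hz, and the rescaled eighth mode lands below 52 Hz, consistent with the "7 Hz to 52 Hz" range quoted above.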

... For short-duration gravitational-wave transients, like the three gravitational-wave signals observed by LIGO, Schumann resonances are not considered significant noise sources, because the magnetic field amplitudes induced by even strong remote lightning strikes are usually of the order of a picotesla, too small to produce strong signals in the LIGO gravitational-wave channel [4].

Interestingly enough, the Schumann resonances make the Earth a natural gravitational-wave detector, albeit not a very sensitive one [20]. As the Earth is positively charged with respect to the ionosphere, a static electric field, the so-called fair-weather field, is present in the Earth-ionosphere cavity. In the presence of this background electric field, an infalling gravitational wave of suitable frequency resonantly excites the Schumann eigenmodes, most effectively the second Schumann resonance [20]. Unfortunately, it is not practical to turn the Earth into a gravitational-wave detector. Because of the weakness of the fair-weather field (about 100 V/m) and the low value of the quality factor (from 2 to 6) of the Earth-ionosphere resonant cavity, the sensitivity of such a detector would be many orders of magnitude smaller than that of modern gravitational-wave detectors.

However, a recent study of short-duration magnetic field transients that were coincident in low-noise magnetometers in Poland and Colorado revealed about 2.3 coincident events per day in which the amplitudes of the pulses exceeded 200 pT, strong enough to induce a gravitational-wave-like signal in the LIGO gravitational-wave channel of the same amplitude as in the GW150914 event [21]...

The main sources of Schumann ELF waves are negative cloud-to-ground lightning discharges, with a typical charge moment change of about 6 C·km. On Earth, storm cells, mostly in the tropics, generate about 50 such discharges per second.

The so-called Q-bursts are stronger, positive cloud-to-ground atmospheric discharges with charge moment changes of the order of 1000 C·km. ELF pulses excited by Q-bursts propagate around the world. At very large distances only the low-frequency components of the ELF pulse remain clearly visible, because the higher-frequency components experience more attenuation than the lower-frequency ones...

In [22] Earth's lightning hotspots are revealed in detail using 16 years of space-based Lightning Imaging Sensor observations. Information about the locations of these lightning hotspots allows us to calculate the time lags between the arrivals of ELF transients from these locations at the LIGO-Livingston (latitude 30.563°, longitude −90.774°) and LIGO-Hanford (latitude 46.455°, longitude −119.408°) gravitational-wave detectors...

We have taken Earth's lightning hotspots from [22] with lightning flash rate densities greater than about 100 fl km⁻² yr⁻¹ and calculated the expected time lags between ELF transient arrivals from these locations at the LIGO detectors... Note that the observed group velocity of short ELF field transients depends on the upper frequency limit of the receiver [21]. For the magnetometers used in [21] this frequency limit was 300 Hz, corresponding to the quoted group velocity of about 0.88c. For the LIGO detectors, the coupling of the magnetic field to differential arm motion decreases by an order of magnitude at 30 Hz compared to 10 Hz [4]. Thus for the LIGO detectors, as ELF transient receivers, the more appropriate upper frequency limit is about 30 Hz, not 300 Hz. According to (2), low frequencies propagate with smaller velocities, 0.75c–0.8c. Therefore the time lags inferred in Table 1 might be underestimated by about 15%...
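This time-lag calculation reduces to great-circle distances divided by the ELF group velocity. A minimal sketch, using the detector coordinates quoted above and a 0.8c group velocity; the two hotspot locations are illustrative coordinates chosen by me for the Congo basin and Lake Maracaibo, not values taken from [22].

```python
import math

C = 299_792_458.0  # speed of light, m/s
R_EARTH = 6.371e6  # mean Earth radius, m

HANFORD    = (46.455, -119.408)  # (lat, lon) in degrees, from the text
LIVINGSTON = (30.563,  -90.774)

def great_circle(p, q):
    """Great-circle distance between two (lat, lon) points, in metres (haversine)."""
    la1, lo1 = map(math.radians, p)
    la2, lo2 = map(math.radians, q)
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * R_EARTH * math.asin(math.sqrt(h))

def elf_time_lag(source, v_group=0.8 * C):
    """Arrival-time difference (Hanford minus Livingston) of an ELF transient, in s."""
    return (great_circle(source, HANFORD) - great_circle(source, LIVINGSTON)) / v_group

# Hypothetical hotspot locations (illustrative, not from [22]):
congo_basin = (-1.0, 23.0)        # Africa
lake_maracaibo = (9.75, -71.65)   # South America

print(f"Africa:   {elf_time_lag(congo_basin) * 1e3:.1f} ms")
print(f"Americas: {elf_time_lag(lake_maracaibo) * 1e3:.1f} ms")
```

With these inputs the African lag falls in the 5–7 ms window and the American lag near 11–13 ms, matching the regularities described in the next paragraph.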

If strong lightning discharges and Q-bursts indeed contribute to the correlated noise in the LIGO detectors, then the distribution of lightning hotspots around the globe can lead to some regularities in this correlated noise. Namely, extremely low frequency transients due to lightning in Africa will be characterized by 5–7 ms time lags between the LIGO-Hanford and LIGO-Livingston detectors. Asian lightning leads to time lags of about the same magnitude but the opposite sign. Lightning in North and South America should lead to positive time lags of about 11–13 ms, greater than the light propagation time between the LIGO-Hanford and LIGO-Livingston detectors.

(Submitted on 27 Jul 2017)

Wednesday, February 22, 2017

{Bohmian mechanics, is} [subtle, malicious] (?)

Here is my post, consisting as usual of quotes from scientific articles fully available online, with selected parts underlined (or emphasized in bold) in order to sketch a draft response to the question in its title. This time, I was mostly inspired by reading this post at another blog named Elliptic Composability.

Inconclusive Bohmian positions in the macroscopic way ...
Bohmian mechanics differs deeply from standard quantum mechanics. In particular, in Bohmian mechanics particles, here called Bohmian particles, follow continuous trajectories; hence in Bohmian mechanics there is a natural concept of time-correlation for particles’ positions. This led M. Correggi and G. Morchio [1] and more recently Kiukas and Werner [2] to conclude that Bohmian mechanics “can’t violate any Bell inequality”, hence is disproved by experiments. However, the Bohmian community maintains its claim that Bohmian mechanics makes the same predictions as standard quantum mechanics (at least as long as only position measurements are considered, arguing that, at the end of the day, all measurements result in position measurement, e.g. pointer’s positions).  
Here we clarify this debate. First, we recall why two-time position correlation is in tension with Bell inequality violation. Next, we show that this is actually not at odds with standard quantum mechanics because of some subtleties. For this purpose we do not aim for full generality, but illustrate our point with an explicit and rather simple example based on a two-particle interferometer, partly already demonstrated experimentally and certainly entirely feasible (with photons, but also feasible, at the cost of additional technical complications, with massive particles). The subtleties are illustrated by explicitly coupling the particles to macroscopic systems, called pointers, that measure the particles' positions. Finally, we raise questions about Bohmian positions, about macroscopic systems, and about the large difference in the appreciation of Bohmian mechanics between the philosophers' and physicists' communities...
Part of the attraction of Bohmian mechanics then lies in the assumption that • Assumption H: position measurements merely reveal in which (spatially separated and non-overlapping) mode the Bohmian particle actually is.
A Bohmian particle and its pilot wave arrive at a Beam-Splitter (BS) from the left in mode "in". The pilot wave emerges in both modes 1 and 2, like the quantum state in standard quantum theory. However, the Bohmian particle emerges either in mode 1 or in mode 2, depending on its precise initial position. As Bohmian trajectories can't cross each other, if the initial position is in the lower half of mode "in", then the Bohmian particle exits the BS in mode 1, else in mode 2.

Two Bohmian particles spread over 4 modes. The quantum state is entangled... hence the two particles are either in modes 1 and 4, or in modes 2 and 3. Alice applies a phase x on mode 1 and Bob a phase y on mode 4. Accordingly, after the two beam-splitters the correlations between the detectors allow Alice and Bob to violate a Bell inequality... Alice's first "measurement", with phase x, can be undone because in Bohmian mechanics there is no collapse of the wavefunction. Hence, after having applied the phase −x after her second beam-splitter, Alice can perform a second "measurement" with phase x′.

... There is no doubt that according to Bohmian mechanics there is a well-defined joint probability distribution for Alice's particle at two times and Bob's particle: P(r_A, r′_A, r_B | x, x′, y), where r_A denotes Alice's particle after the first beam-splitter and r′_A after the third beam-splitter of {the last figure above}... But here comes the puzzle. According to Assumption H, if r_A ∈ "1", then any position measurement performed by Alice between the first and second beam-splitter would necessarily result in a = 1. Similarly, r_A ∈ "2" implies a = 2. And so on: Alice's position measurement after the third beam-splitter is determined by r′_A, and Bob's measurement by r_B. Hence, it seems that one obtains a joint probability distribution for both of Alice's measurement results and for Bob's: P(a, a′, b | x, x′, y).
But such a joint probability distribution implies that Alice doesn’t have to make any choice (she merely makes both choices, one after the other), and in such a situation there can’t be any Bell inequality violation.
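The claim that a joint probability distribution for all outcomes forbids any Bell violation can be checked by brute force. The sketch below uses the standard CHSH setting (two settings per side, a slight simplification of the scenario in the text): if outcomes for all settings coexist in one joint distribution, each run is described by a deterministic ±1 assignment, and the CHSH combination never exceeds the local bound of 2.

```python
from itertools import product

# If a joint distribution P(a, a', b, b') exists, every run corresponds to a
# deterministic assignment of outcomes (a, a', b, b') in {-1, +1}^4.
# The CHSH combination S = ab + ab' + a'b - a'b' then satisfies |S| <= 2,
# so no Bell (CHSH) violation is possible.
best = max(abs(a * b + a * b2 + a2 * b - a2 * b2)
           for a, a2, b, b2 in product((-1, 1), repeat=4))

print(best)  # the local-realist bound; quantum mechanics reaches 2*sqrt(2)
```

Algebraically, S = a(b + b′) + a′(b − b′): one parenthesis is always 0 and the other ±2, which is exactly why the enumeration tops out at 2.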
... Let's have a closer look at the probability distribution that lies at the bottom of our puzzle: P(r_A, r′_A, r_B | x, x′, y)... now comes the catch... the Bohmian particles' positions are assumed to be "hidden"... they have to be hidden in order to avoid signalling in Bohmian mechanics. ... it implies that Bohmian particles are postulated to exist "only" to immediately add that they are ultimately not fully accessible... Consequently, defining a joint probability for the measurement outcomes a, a′ and b in the natural way:
P(a, a′, b | x, x′, y) ≡ P(r_A ∈ "a", r′_A ∈ "a′", r_B ∈ "b" | x, x′, y)   (10)
can be done mathematically, but can't have a physical meaning, as P(a, a′, b | x, x′, y) would be signalling.
In summary, it is the identification (10) that confused the authors of [1, 2] and led them to wrongly conclude that Bohmian mechanics can't predict violations of Bell inequalities in experiments involving only position measurements. Note that the identification (10) follows from Assumption H; hence Assumption H is wrong. Every introduction to Bohmian mechanics should emphasize this. Indeed, Assumption H is very natural and appealing, but wrong and confusing.

To elaborate on this, let's add an explicit position measurement after the first beam-splitter on Alice's side. The fact is that both according to standard quantum theory and according to Bohmian mechanics, this position measurement perturbs the quantum state (hence the pilot wave) in such a way that the second measurement, labelled x′ in Fig. 4, no longer shares the correlation (9) with the first measurement, see [4, 5]...

From all we have seen so far, one should, first of all, recognize that Bohmian mechanics is deeply consistent and provides a nice and explicit existence proof of a deterministic nonlocal hidden-variable model. Moreover, the ontology of Bohmian mechanics is pretty straightforward: the set of Bohmian positions is the real stuff. This is especially attractive to philosophers. Understandably so. But what about physicists mostly interested in research? What new physics did Bohmian mechanics teach us in the last 60 years? Here, I believe it fair to answer: not enough! Understandably disappointing...
This is unfortunate because it could inspire courageous ideas to test quantum physics. 

Probably surrealistic Bohm Trajectories in the microscopic world?

... we maintain that Bohmian Mechanics is not needed to have the Schrödinger equation "embedded into a physical theory". Standard quantum theory has already clarified the significance of Schrödinger's wave function as a tool used by theoreticians to arrive at probabilistic predictions. It is quite unnecessary, and indeed dangerous, to attribute any additional "real" meaning to the psi-function. The semantic difference between "inconsistent" and "surrealistic" is not the issue. It is the purpose of our paper to show clearly that the interpretation of the Bohm trajectory - as the real retrodicted history of the atom observed on the screen - is implausible, because this trajectory can be macroscopically at variance with the detected, actual way through the interferometer. And yes, we do have a framework to talk about path detection; it is based upon the local interaction of the atom with the photons inside a resonator, described by standard quantum theory with its short-range interactions only. Perhaps it is true that it is "generally conceded that... [a measurement]... requires a... device which is more or less macroscopic," but our paper disproves this notion, because it clearly shows that one degree of freedom per detector is quite sufficient. That is the progress represented by the quantum-optical which-way detectors. And certainly, it is irrelevant for all practical purposes whether "somebody looks" or not; what matters is only that the which-way information is stored somewhere so that the path through the interferometer can be known, in principle.

Nowhere did we claim that BM makes predictions that differ from those of standard quantum mechanics. The whole point of the experimentum crucis is to demonstrate that one cannot attribute reality to the Bohm trajectories, where reality is meant in the phenomenological sense. One must not forget that physics is an experimental science dealing with phenomena. If the trajectories of BM have no relation to the phenomena, in particular to the detected path of the particle, then their reality remains metaphysical, just like the reality of the ether of Maxwellian electrodynamics. Of course, the "very existence" of the Bohm trajectory is a mathematical statement to which nobody objects. We do not deny the possibility that some imaginary parameters possess a "hidden reality" endowed with the assumed power of exerting "gespenstische Fernwirkungen" (spooky actions at a distance, Einstein). But a physical theory should carefully avoid such concepts of no phenomenological consequence.
B.-G. Englert, M. O. Scully, G. Süssmann, and H. Walther
received October 12, 1993  

Friday, December 30, 2016

Hoping and believing are different things for physicists

The monster, the second sister and Cinderella: three Magi announcing the era of direct gravitational wave astronomy

On February 11 the LIGO-Virgo collaboration announced the detection of Gravitational Waves (GW). They were emitted about one billion years ago by a Binary Black Hole (BBH) merger and reached Earth on September 14, 2015. The claim, as it appears in the ‘discovery paper’ [1] and stressed in press releases and seminars, was based on “> 5.1 σ significance.” Ironically, shortly after, on March 7 the American Statistical Association (ASA) came out (independently) with a strong statement warning scientists about interpretation and misuse of p-values [2]...
In June we finally learned [4] that another 'one and a half' gravitational waves from Binary Black Hole mergers were also observed in 2015, where by the 'half' I refer to the October 12 event, strongly believed by the collaboration to be a gravitational wave, although having only 1.7 σ significance and therefore classified just as an LVT (LIGO-Virgo Trigger) instead of a GW. However, another figure of merit has been provided by the collaboration for each event, a number based on probability theory that tells how much we must modify the relative beliefs of two alternative hypotheses in the light of the experimental information. This number, to my knowledge never even mentioned in press releases or seminars to large audiences, is the Bayes factor (BF), whose meaning is easily explained: if you considered a priori two alternative hypotheses equally likely, a BF of 100 changes your odds to 100 to 1; if instead you considered one hypothesis rather unlikely, let us say your odds were 1 to 100, a BF of 10⁴ turns them the other way around, that is 100 to 1. You will be amazed to learn that even the "1.7 sigma" LVT151012 has a BF of the order of ≈ 10¹⁰, considered very strong evidence in favor of the hypothesis "Binary Black Hole merger" against the alternative hypothesis "Noise". (Alan Turing would have called the evidence provided by such a huge 'Bayes factor,' or what I. J. Good would have preferred to call the "Bayes-Turing factor" [5], 100 deciban, well above the 17 deciban threshold considered by the team at Bletchley Park during World War II to be reasonably confident of having cracked the daily Enigma key [7].)...
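The odds-updating rule and Turing's deciban unit described above amount to one-line formulas; a minimal sketch with the numbers from the text:

```python
import math

def update_odds(prior_odds, bayes_factor):
    """Posterior odds = prior odds x Bayes factor."""
    return prior_odds * bayes_factor

def deciban(bayes_factor):
    """Evidence in decibans (Turing's unit): 10 * log10(BF)."""
    return 10 * math.log10(bayes_factor)

# The two examples from the text:
print(update_odds(1.0, 100))      # even prior odds, BF = 100 -> odds 100 to 1
print(update_odds(1 / 100, 1e4))  # prior odds 1:100, BF = 10^4 -> odds 100 to 1

# LVT151012: BF ~ 10^10 corresponds to 100 deciban, far above the ~17 deciban
# threshold used at Bletchley Park.
print(deciban(1e10))
```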
Figure 3: The Monster (GW150914), Cinderella (LVT151012) and the third sister (GW151226), visiting us in 2015 (Fig. 1 of [4] – see text for the reason for the names). The published 'significance' of the three events (Table 1 of [4]) is, in order, "> 5.3 σ", "1.7 σ" and "> 5.3 σ", corresponding to the following p-values: 7.5 × 10⁻⁸, 0.045, 7.5 × 10⁻⁸. The logs of the Bayes factors are instead (Table 4 of [4]) approximately 289, 23 and 60, corresponding to Bayes factors of about 3 × 10¹²⁵, 10¹⁰ and 10²⁶.

... even if at first sight it does not look dissimilar from GW151226 (but remember that the waves in Fig. 3 do not show raw data!), the October 12 event, hereafter referred to as Cinderella, is not ranked as a GW but, more modestly, as an LVT, for LIGO-Virgo Trigger. The reason for the downgrade is that 'she' cannot wear a "> 5σ dress" to go with the 'sisters' to the 'sumptuous ball of the Establishment.' In fact, Chance has assigned 'her' only a poor, unpresentable 1.7 σ ranking, usually considered in the Particle Physics community not even worth a mention in a parallel session of a minor conference by an undergraduate student. But, despite the modest 'statistical significance', experts are highly confident, because of physics reasons* (and of their understanding of background), that this is also a gravitational wave radiated by a BBH merger, much more than the 87% quoted in [4]. [Detecting something that has good reason to exist, because of our understanding of the Physical World (related to a network of other experimental facts and theories connecting them!), is quite different from just observing an unexpected bump, possibly due to background, even if with small probability, as already commented in footnote 15. And remember that whatever we observe in real life, if seen with high enough resolution in the N-dimensional phase space, had very small probability to occur! (Imagine, as a simplified example, the pixel content of any picture you take walking down the road, in which N is equal to five, i.e. two plus the RGB code of each pixel.)]
Giulio D'Agostini (Submitted on 6 Sep 2016)

Will the first 5-sigma claim from LHC Run2 be a fluke?
In the meanwhile it seems that particle physicists are slow in learning the lesson, and the number of graves in the Cemetery of Physics ... has increased ..., the last funeral having recently been celebrated in Chicago on August 5, with the following obituary for the dear departed: "The intriguing hint of a possible resonance at 750 GeV decaying into photon pairs, which caused considerable interest from the 2015 data, has not reappeared in the much larger 2016 data set and thus appears to be a statistical fluctuation" [57]. And de Rujula's dictum gets corroborated. [If you disbelieve every result presented as having a 3 sigma, or 'equivalently' a 99.7% chance of being correct, you will turn out to be right 99.7% of the time. ('Equivalently' within quote marks is de Rujula's original, because he knows very well that there is no equivalence at all.)] Someone would argue that this incident happened because the sigmas were only about three and not five. But it is not a question of sigmas but of Physics, as can be understood by those who in 2012 incorrectly turned the 5σ into a 99.99994% "discovery probability" for the Higgs [58], while in 2016 they are sceptical in front of a 6σ claim ("if I have to bet, my money is on the fact that the result will not survive the verifications" [59]): the famous "du sublime au ridicule, il n'y a qu'un pas" ("from the sublime to the ridiculous there is but one step") seems really appropriate! ...
Seriously, the question is indeed that, now that the predictions of New Physics around what should have been a natural scale have substantially all failed, the only 'sure' scale I can see seems to be Planck's scale. I really hope that the LHC will surprise us, but hoping and believing are different things. And, since I have the impression that there are too many nervous people around, both among experimentalists and theorists, and because the number of possible histograms to look at is quite large, after the easy bets of the past years (against the CDF peak and against superluminal neutrinos in 2011; in favor of the Higgs boson in 2011; against the 750 GeV di-photon in 2015; not to mention the bet against Supersymmetry, going on since it failed to predict new phenomenology below the Z0 – or the W? – mass at LEP, thus inducing me more than twenty years ago to give away all the SUSY Monte Carlo generators I had developed in order to optimize the performances of the HERA detectors), I can serenely bet, as I have kept saying since July 2012, that the first 5-sigma claim from the LHC will be a fluke. (I have instead little to comment on the sociology of the Particle Physics theory community and on the validity of 'objective' criteria to rank scientific value and productivity, the situation being self-evident from the hundreds of references in a review paper which even had on the front page a fake PDG entry for the particle [60], and other amenities you can find on the web, like [61].)

Bayesian anatomy of the 750 GeV fluke 
The statistical anomalies at about 750 GeV in ATLAS [1, 2] and CMS [3, 4] searches for a diphoton resonance (denoted in this text as F {for digamma}) at √s = 13 TeV with about 3/fb caused considerable activity (see e.g., Ref. [5, 6, 7]). The experiments reported local significances of 3.9σ and 3.4σ, respectively, and global significances, which incorporate a look-elsewhere effect (LEE, see e.g., Ref. [8, 9]) in the production cross section, mass and width of the F, of 2.1σ and 1.6σ, respectively. There was concern, however, that an overall LEE, accounting for the numerous hypothesis tests of the SM at the LHC, cannot be incorporated, and that the plausibility of the F was difficult to gauge.
Whilst ultimately the F was disfavoured by searches with about 15/fb [10, 11], we directly calculate the relative plausibility of the SM versus the SM plus F in light of the ATLAS data available during the excitement, matching, wherever possible, the parameter ranges and parameterisations of the frequentist analyses. The relative plausibility sidesteps technicalities about the LEE and the frequentist formalism required to interpret significances. We calculate the Bayes factor (see e.g., Ref. [12]) in light of the ATLAS data.
Our main result is that we find that, at its peak, the Bayes factor was about 7.7 in favour of the F. In other words, in light of the ATLAS 13 TeV 3.2/fb and 8 TeV 20.3/fb diphoton searches, the relative plausibility of the F versus the SM alone increased by about eight. This was "substantial" on Jeffreys' scale [13], lying between "not worth more than a bare mention" and "strong evidence." For completeness, we calculated that this preference was reversed by the ATLAS 13 TeV 15.4/fb search [11], resulting in a Bayes factor of about 0.7. Nevertheless, the interest in F models in the interim was, to some degree, supported by Bayesian and frequentist analyses. Unfortunately, CMS performed searches in numerous event categories, resulting in a proliferation of background nuisance parameters and making replication difficult without cutting corners or considerable computing power.
Andrew Fowlie  (Submitted on 22 Jul 2016 (v1), last revised 6 Dec 2016 (this version, v2))

Taking the 2016 story of the three black hole merger Magi with a grain of salt?
The analysis of GW150914 shows that the initial black hole masses are 36 M☉ and 29 M☉ [1], which are heavier than the previously known stellar-mass black holes [2]. In the newly announced black hole merger event, GW151226 [3], the initial black hole masses are about 14 M☉ and 8 M☉, which fall into the known mass range of stellar black holes... This seems to make the picture of binary black hole mergers and gravitational-wave observation more reliable, because the signals of GW150914 and GW151226 are extracted from noise by the same methods [4, 5].
However, we note that the response of a detector to a gravitational wave is a function of frequency. When the time a photon spends circulating in the Fabry-Perot cavities is of the same order as the period of a gravitational wave, the phase difference due to the gravitational wave should be computed as an integral along the path. In fact, this propagation effect on the Michelson detector response was addressed, for example, in [6]. Unfortunately, the propagation effect on the Fabry-Perot detector response has not been considered properly.
In the manuscript, we try to take into account the propagation effect of the gravitational wave and reexamine the LIGO data. We find that when the average time a photon stays in the Fabry-Perot cavities of the two arms is of the same order as the period of a gravitational wave, the phase difference of a photon between the two arms due to the gravitational wave may be cancelled. In the case of GW151226, the average time a photon stays in the detector is longer than the period of the gravitational wave at maximum gravitational radiation. When the propagation effect is taken into account, the claimed signal GW151226 almost disappears.
The green line in the top panel is the response of the detector to the best-fit template for GW151226 provided on the LIGO website [9]. When the propagation effect on the Fabry-Perot detector response is taken into account, the detector response to the gravitational wave in the template takes the form of the blue line. The bottom panel presents the variation of the gravitational-wave frequency in time.
For the LIGO detectors, the length of the Fabry-Perot cavities is L ≈ 4 km. On average, a photon makes 140 round trips in the cavities [1], so it moves back and forth for about 0.0037 s. In that period, a gravitational wave with frequency 268 Hz propagates a distance of one wavelength. The propagation effect should therefore be taken into account in the analysis of GW151226, because the frequency at the peak gravitational strain is about 450 Hz (> 268 Hz). For low-frequency gravitational waves the propagation effect is small, so the signal for GW150914 is not much affected.
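The numbers quoted above are easy to reproduce. A sketch of the arithmetic (c, L and the 140 round trips are taken from the text; the naive averaging factor sin(πfτ)/(πfτ) is only an illustrative model of the claimed cancellation, not the authors' full calculation):

```python
import math

c = 2.998e8          # speed of light, m/s
L = 4.0e3            # Fabry-Perot arm length, m (from the text)
N = 140              # average number of photon round trips (from the text)

tau = 2 * L * N / c  # average photon storage time, s  (~0.0037 s)
f_c = 1.0 / tau      # frequency whose period equals tau (~268 Hz)

def averaging_factor(f):
    """Illustrative sinc-like suppression of a signal at frequency f
    when the gravitational-wave phase is averaged over the storage time tau."""
    x = math.pi * f * tau
    return math.sin(x) / x

print(f"tau = {tau * 1e3:.2f} ms, f_c = {f_c:.0f} Hz")
print(f"suppression factor at 450 Hz: {averaging_factor(450.0):.2f}")
```

At f = f_c the averaging factor vanishes exactly, which is the "one wavelength per storage time" cancellation described above; at 450 Hz the naive factor is already well below one.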
It should be remarked that there is a subtle difference between the effect of a gravitational wave on the light travelling in a detector and the phase variation due to the vibration of the mirrors, which has been used in the calibration of LIGO's detectors [11], even though both the vibration of the mirrors and an incident gravitational wave modify the phase of the light travelling in the cavities. The vibrating mirrors modify the phase of the light only when the photons travel near the mirrors; the phase of the light away from the vibration region is not affected. In contrast, a gravitational wave affects the phase of the light at every point in the cavities. As a result, the average phase variation due to the vibrating mirrors does not vanish even when the round-trip time of a photon in a cavity equals the period of the vibration of the end mirrors, whereas it does vanish in a gravitational-wave background when the round-trip time equals the period of the wave. The propagation effect of the gravitational wave is therefore not included in the calibration, which relies on the vibrating mirrors.
Zhe Chang, Chao-Guang Huang, Zhi-Chao Zhao (Submitted on 6 Dec 2016)

mercredi 9 novembre 2016

[Today the world is trumper than yesterday! Was yesterday's world less deceptive (trompeur) than today's?]

Yesterday's forecast for the 2016 American presidential election

Today's projection after the vote
Beware: the color labels for Trump and Clinton in the following are the opposite of those in the previous graphic!

from (November 9)

Last comment (November 19)
Will Trump's victory make the USA a more obvious plutocracy?
Here are the latest (final?) results:

from (November 19)

dimanche 6 novembre 2016

[There, is] plenty of room for new phases at high pressure [!,?]

No comment

Evidence for a new phase of dense hydrogen above 325 gigapascals
Philip Dalladay-Simpson, Ross T. Howie & Eugene Gregoryanz
Nature 529, 63–67 (07 January 2016)
Almost 80 years ago it was predicted that, under sufficient compression, the H–H bond in molecular hydrogen (H2) would break, forming a new, atomic, metallic, solid state of hydrogen. Reaching this predicted state experimentally has been one of the principal goals in high-pressure research for the past 30 years. Here, using in situ high-pressure Raman spectroscopy, we present evidence that at pressures greater than 325 gigapascals at 300 kelvin, H2 and hydrogen deuteride (HD) transform to a new phase—phase V. This new phase of hydrogen is characterized by substantial weakening of the vibrational Raman activity, a change in pressure dependence of the fundamental vibrational frequency and partial loss of the low-frequency excitations. We map out the domain in pressure–temperature space of the suggested phase V in H2 and HD up to 388 gigapascals at 300 kelvin, and up to 465 kelvin at 350 gigapascals; we do not observe phase V in deuterium (D2). However, we show that the transformation to phase IV′ in D2 occurs above 310 gigapascals and 300 kelvin. These values represent the largest known isotopic shift in pressure, and hence the largest possible pressure difference between the H2 and D2 phases, which implies that the appearance of phase V of D2 must occur above 380 gigapascals. These experimental data provide a glimpse of the physical properties of dense hydrogen above 325 gigapascals and constrain the pressure and temperature conditions at which the new phase exists. We speculate that phase V may be the precursor to the non-molecular (atomic and metallic) state of hydrogen that was predicted 80 years ago.

New low temperature phase in dense hydrogen: The phase diagram to 421 GPa
Ranga Dias, Ori Noked, Isaac F. Silvera
(Submitted on 7 Mar 2016 (v1), last revised 26 May 2016 (this version, v2))
In the quest to make metallic hydrogen at low temperatures, a rich number of new phases has been found, and the highest-pressure ones have somewhat flat phase lines around room temperature. We have studied hydrogen at static pressures up to 421 GPa in a diamond anvil cell, down to liquid helium temperatures, using infrared spectroscopy. We report a new phase at high pressure and T = 5 K. Although we observe strong darkening of the sample in the visible, we have no evidence that this phase is metallic hydrogen.

No "Evidence for a new phase of dense hydrogen above 325 GPa"
Ranga P. Dias, Ori Noked, Isaac F. Silvera
(Submitted on 18 May 2016)
In recent years there has been intense experimental activity to observe solid metallic hydrogen. Wigner and Huntington predicted that under extreme pressures insulating molecular hydrogen would dissociate and transition to atomic metallic hydrogen. Recently Dalladay-Simpson, Howie, and Gregoryanz reported a phase transition to an insulating phase in molecular hydrogen at a pressure of 325 GPa and 300 K. Because of its scientific importance we have scrutinized their experimental evidence to determine if their claim is justified. Based on our analysis, we conclude that they have misinterpreted their data: there is no evidence for a phase transition at 325 GPa.

Nature of the Metallization Transition in Solid Hydrogen
Sam Azadi, N. D. Drummond, W. M. C. Foulkes
(Submitted on 2 Aug 2016)
Determining the metallization pressure of solid hydrogen is one of the great challenges of high-pressure physics. Since 1935, when it was predicted that molecular solid hydrogen would become a metallic atomic crystal at 25 GPa [1], compressed hydrogen has been studied intensively. Additional interest arises from the possible existence of room-temperature superconductivity [2], a metallic liquid ground state [3], and the relevance of solid hydrogen to astrophysics [4, 5].
Early spectroscopic measurements at low temperature suggested the existence of three solid-hydrogen phases [4]. Phase I, which is stable up to 110 GPa, is a molecular solid composed of quantum rotors arranged in a hexagonal close-packed structure. Changes in the low-frequency regions of the Raman and infrared spectra imply the existence of phase II, also known as the broken-symmetry phase, above 110 GPa. The appearance of phase III at 150 GPa is accompanied by a large discontinuity in the Raman spectrum and a strong rise in the spectral weight of molecular vibrons. Phase IV, characterized by the two vibrons in its Raman spectrum, was discovered at 300 K and pressures above 230 GPa [6–8]. Another new phase has been claimed to exist at pressures above 200 GPa and higher temperatures (for example, 480 K at 255 GPa) [9]. This phase is thought to meet phases I and IV at a triple point, near which hydrogen retains its molecular character. The most recent experimental results [10] indicate that H2 and hydrogen deuteride at 300 K and pressures greater than 325 GPa transform to a new phase V, characterized by substantial weakening of the vibrational Raman activity. Other features include a change in the pressure dependence of the fundamental vibrational frequency and the partial loss of the low-frequency excitations.  
Although it is very difficult to reach the hydrostatic pressure of more than 400 GPa at which hydrogen is normally expected to metallize, some experimental results have been interpreted as indicating metallization at room temperature below 300 GPa [6]. However, other experiments show no evidence of the optical conductivity expected of a metal at any temperature up to the highest pressures explored [11]. Experimentally, it remains unclear whether or not the molecular phases III and IV are metallic, although it has been suggested that phase V may be non-molecular (atomic) [10]. Metallization is believed to occur either via the dissociation of hydrogen molecules and a structural transformation to an atomic metallic phase [6, 12], or via band-gap closure within the molecular phase [13, 14]. In this work we investigate the latter possibility using advanced computational electronic structure methods.
Structures of crystalline materials are normally determined by X-ray or neutron diffraction methods. These techniques are very challenging for low-atomic-number elements such as hydrogen [15]. Fortunately optical phonon modes disappear, appear, or experience sudden shifts in frequency when the crystal structure changes. It is therefore possible to identify the transitions between phases using optical methods.

(Submitted on 5 Oct 2016)
We have studied solid hydrogen under pressure at low temperatures. With increasing pressure we observe changes in the sample, going from transparent, to black, to a reflective metal, the latter studied at a pressure of 495 GPa. We have measured the reflectance as a function of wavelength in the visible spectrum, finding values as high as 0.90 for the metallic hydrogen. We have fit the reflectance using a Drude free-electron model to determine the plasma frequency of 30.1 eV at T = 5.5 K, with a corresponding electron carrier density of 6.7×10²³ particles/cm³, consistent with theoretical estimates. The properties are those of a metal. Solid metallic hydrogen has been produced in the laboratory.
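The quoted carrier density can be cross-checked from the quoted plasma frequency alone via the Drude relation n = ε0·me·ωp²/e², assuming the free-electron mass as in a simple Drude fit:

```python
# Physical constants (SI, CODATA values)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e  = 9.1093837015e-31   # electron mass, kg
e    = 1.602176634e-19    # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J*s

E_p = 30.1                          # reported plasma frequency, in energy units (eV)
omega_p = E_p * e / hbar            # angular plasma frequency, rad/s
n = eps0 * m_e * omega_p**2 / e**2  # Drude carrier density, m^-3

print(f"n = {n * 1e-6:.2e} cm^-3")  # close to the quoted 6.7e23 cm^-3
```

The simple free-electron estimate lands within a few percent of the value reported in the abstract, which is the internal consistency one would hope for.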

dimanche 2 octobre 2016

Solar neutrinos: Oscillations or (almost) No-oscillations (?)

Neutrino oscillations disentangled from adiabatic flavor conversion : always mind your terminology!
Next Tuesday the 2016 Nobel Prize in Physics will be announced. That leaves two days to think one more time about the interesting physics of the previous year, learning some lessons from the past:
The Nobel Prize in Physics 2015 was awarded "... for the discovery of neutrino oscillations which show that neutrinos have mass". While Super-Kamiokande (SK), indeed, has discovered oscillations, the Sudbury Neutrino Observatory (SNO) observed the effect of the adiabatic (almost non-oscillatory) flavor conversion of neutrinos in the matter of the Sun. Oscillations are irrelevant for solar neutrinos apart from a small electron neutrino regeneration inside the Earth. Neither oscillations nor adiabatic conversion implies masses uniquely, and further studies were required to show that non-zero neutrino masses are behind the SNO results. The phenomena of oscillations (a phase effect) and adiabatic conversion (the Mikheïev-Smirnov-Wolfenstein (MSW) effect, driven by the change of mixing in matter) are described in a pedagogical way.

In the figure above we show graphic representations of neutrino oscillations and adiabatic conversion, based on an analogy with electron spin precession in a magnetic field. The neutrino polarization vector in flavor space ("spin") moves around the "eigenstate axis" (the magnetic field), whose direction is determined by the mixing angle 2θm. Oscillations are equivalent to precession of the neutrino polarization vector around a fixed axis (Fig. a). The oscillation probability is determined by the projection of the neutrino vector on the z axis: the up direction corresponds to νe, the down direction to νa. Adiabatic conversion is driven by rotation of the cone itself, i.e., a change of direction of the magnetic field (the cone axis) following the change of the mixing angle in matter (Fig. b). Due to adiabaticity the cone opening angle does not change, and the neutrino vector therefore follows the rotation of the axis.
Oscillations do not need mass. Recall that it was the subject of Wolfenstein's classic paper [9] to show that oscillations can proceed for massless neutrinos. This requires, however, the introduction of non-standard interactions of neutrinos which lead to non-diagonal potentials in the flavor basis and therefore produce mixing.
In oscillations we test the dispersion relations, that is, the relations between energy and momentum, and not the masses directly. Oscillations are induced by the difference in dispersion of the neutrino components that compose a mixed state...
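This point is visible in the standard two-flavour vacuum formula: only the mass-squared difference enters, never the absolute masses. A small sketch with illustrative parameter values (assumed here, not taken from the paper):

```python
import math

def survival_probability(E_GeV, L_km, sin2_2theta=0.85, dm2_eV2=7.5e-5):
    """Two-flavour vacuum survival probability P(nu_e -> nu_e).

    Only the mass-squared difference dm2 enters, through the oscillation
    phase; the absolute mass scale drops out, which is why oscillations
    alone cannot pin down the neutrino mass.
    """
    phase = 1.267 * dm2_eV2 * L_km / E_GeV  # standard units: eV^2 * km / GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2
```

At L = 0 the phase vanishes and the survival probability is exactly 1, as it must be; shifting both masses by the same amount leaves every prediction of this function unchanged.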
It is the consistency of the results of many experiments over wide energy ranges and in different environments (vacuum, matter with different density profiles) that makes an explanation of the data without mass almost impossible. In this connection one may wonder which type of experiment/measurement can uniquely identify the true mass. Let us mention three possibilities:
• Kinematical measurements: distortion of the beta decay spectrum near the end point. Notice that a similar effect can be produced if a degenerate sea of neutrinos exists which blocks neutrino emission near the end point.
• Detection of neutrinoless double beta decay, which is a test of the Majorana neutrino mass. Here the complications are related to possible contributions to the decay from new L-violating interactions.
• Cosmology is sensitive to the sum of neutrino masses, and in the future it will be sensitive even to individual masses. Here the problem is the degeneracy between the neutrino mass and cosmological parameters.
In January 1986 at the Moriond workshop, A. Messiah (he gave the talk [16]) asked me: "Why do you call the effect that happens in the Sun resonance oscillations? It has nothing to do with oscillations; I will call it the MSW effect." My reply was: "Yes, I agree; we simply did not know what to call it. I will explain and correct this in my future talks and publications." Messiah's answer was surprising: "No way... now this confusion will stay forever." At the time I could not believe him. I have published a series of papers and delivered review talks and lectures in which I tried to explain things and fix the terminology, etc. All this has been described in detail in the talk at the Nobel symposium [17]; for a recent review see [8].
Ideally, terminology should reflect and follow our understanding of the subject. Deeper understanding may require a change or modification of terminology. At the same time, changing terminology is a very delicate thing and must be done with great care.
In conclusion, the answer to the question in the title of the paper is 
“Solar neutrinos: Almost No-oscillations”.
The SNO experiment has discovered the effect of adiabatic flavor conversion (the MSW effect). Oscillations (the effect of the phase) are irrelevant. The evolution of solar neutrinos can be considered as independent (incoherent) propagation of the produced eigenstates in matter. The flavors of these eigenstates (described by the mixing angle) change as the density changes. At high energies (SNO) the adiabatic conversion is close to the non-oscillatory transition, which corresponds to the production of a single eigenstate. Oscillations with small depth occur in the matter of the Earth.
A. Yu. Smirnov (Submitted on 8 Sep 2016)
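The adiabatic picture in Smirnov's conclusion can be sketched with the standard expression for the mixing angle in matter: deep in the solar core the effective mixing is small, it passes through a maximum at the resonance, and in vacuum it returns to its ordinary value, so the adiabatically surviving eigenstate exits as ν2 and P(νe → νe) → sin²θ12. The mixing value below is illustrative, not taken from the paper:

```python
import math

SIN2_2THETA = 0.85   # illustrative vacuum value of sin^2(2*theta12)

def sin2_2theta_matter(x):
    """Effective mixing sin^2(2*theta_m) in matter.
    x = 2*sqrt(2)*G_F*n_e*E/dm^2 is the dimensionless density parameter."""
    cos2t = math.sqrt(1.0 - SIN2_2THETA)
    return SIN2_2THETA / (SIN2_2THETA + (cos2t - x) ** 2)

# Deep inside the Sun (x >> 1): mixing in matter is small, so nu_e is
# nearly the pure heavy matter eigenstate.
print(f"core (x=10):  sin^2(2theta_m) = {sin2_2theta_matter(10.0):.3f}")
# At the resonance x = cos(2*theta) the mixing in matter is maximal.
print(f"resonance:    sin^2(2theta_m) = {sin2_2theta_matter(math.sqrt(1 - SIN2_2THETA)):.3f}")
# In vacuum (x = 0) the ordinary mixing angle is recovered.
print(f"vacuum (x=0): sin^2(2theta_m) = {sin2_2theta_matter(0.0):.3f}")
```

The adiabatic (non-oscillatory) limit then gives the familiar roughly one-third survival probability for the high-energy solar νe flux measured by SNO.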

lundi 29 août 2016

High (energy physics exploration) by East-west (collaboration on heavy ion collision experiments)

There is more than potential new elementary particles to understand fundamental interactions
This short note describes the long collaborative effort between Arizona and Kraków, showing some of the key strangeness signatures of quark-gluon plasma. It further presents an annotated catalog of foundational questions defining the research frontiers which I believe can be addressed in the foreseeable future in the context of relativistic heavy ion collision experiments. The list includes topics that are specific to the field, and ventures towards the known-to-be-unknown that may have a better chance with ions as compared to elementary interactions.
Some 70 years ago the development of relativistic particle accelerators heralded a new era of laboratory-based systematic exploration and study of elementary particle interactions...  
The outcomes of this long quest are on one hand the standard model (SM) of particle physics, and on another, the discovery of the primordial deconfined quark-gluon plasma (QGP). These two foundational insights arose in the context of our understanding of the models of particle production and more specifically, the in-depth understanding of strong interaction processes. To this point we recall that in the context of SM discovery we track decay products of e.g. the Higgs particle in the dense cloud of newly formed strongly interacting particles. In the context of QGP we need to understand the gas cloud of hadrons into which QGP decays and hadronizes. Hadrons are always all we see at the end. They are the messengers and we must learn to decipher the message.
Jan Rafelski   (Submitted on 25 Aug 2016)

Exotic states of nuclear matter matter too
The year 1964/65 saw the rise of several new ideas which in the following 50 years shaped the discoveries in fundamental subatomic physics: 1. The Hagedorn temperature TH, later recognized as the melting point of hadrons into 2. Quarks as building blocks of hadrons; and, 3. The Higgs particle and field escape from the Goldstone theorem, allowing the understanding of weak interactions, the source of the inertial mass of the elementary particles. The topic of this paper is the Hagedorn temperature TH and the strong-interaction phenomena near TH. I present an overview of 50 years of effort with emphasis on: a) Hot nuclear and hadronic matter; b) Critical behavior near TH; c) Quark-gluon plasma (QGP); d) Relativistic heavy ion (RHI) collisions; e) The hadronization process of QGP; f) Abundant production of strangeness flavor...
A report on ‘Melting Hadrons, Boiling Quarks and TH’ relates strongly to quantum chromodynamics (QCD), the theory of quarks and gluons, the building blocks of hadrons, and its lattice numerical solutions; QCD is the quantum (Q) theory of color-charged (C) quark and gluon dynamics (D); for numerical study the space-time continuum is discretized on a ‘lattice’. Telling the story of how we learned that strong interactions are a gauge theory involving two types of particles, quarks and gluons, and the working of the lattice numerical method would entirely change the contents of this article, and be beyond the expertise of the author. I recommend instead the book by Weinberg [8], which also shows the historical path to QCD... 
Our conviction that we achieved in laboratory experiments the conditions required for melting (we can also say, dissolution) of hadrons into a soup of boiling quarks and gluons became firmer in the past 15-20 years. Now we can ask, what are the ‘applications’ of the quark-gluon plasma physics? Here is a short wish list:  
1) Nucleons dominate the mass of matter by a factor of 1000. The mass of the three 'elementary' quarks found in nucleons is about 50 times smaller than the nucleon mass. Whatever compresses and keeps the quarks within the nucleon volume is thus the source of nearly all of the mass of matter. This clarifies that the Higgs field provides the mass scale for all particles that we view today as elementary; only a small, percent-sized fraction of the mass of matter therefore originates directly in the Higgs field; see Section 7.1 for further discussion. The question What is mass? can be studied by melting hadrons into quarks in RHI collisions.
2) Quarks are kept inside hadrons by the 'vacuum' properties, which abhor the color charge of quarks. This explanation of 1) means that there must be at least two different forms of the modern æther that we call 'vacuum': the world around us, and the holes in it that are called hadrons. The question Can we form arbitrarily big holes filled with almost free quarks and gluons? was and remains the existential issue for the laboratory study of hot matter made of quarks and gluons, the QGP. Aficionados of lattice QCD should take note that the presence of two phases of matter in numerical simulations does not answer this question, as the lattice method studies the entire Universe, showing hadron properties at low temperature and QGP properties at high temperature.
3) We all agree that QGP was the primordial Big-Bang stuff that filled the Universe before ‘normal’ matter formed. Thus any laboratory exploration of the QGP properties solidifies our models of the Big Bang and allows us to ask these questions: What are the properties of the primordial matter content of the Universe? and How does ‘normal’ matter formation in early Universe work?  
4) What is flavor? In elementary particle collisions, we deal with a few, and in most cases only one, pair of newly created 2nd, or 3rd flavor family of particles at a time. A new situation arises in the QGP formed in relativistic heavy ion collisions. QGP includes a large number of particles from the second family: the strange quarks and also, the yet heavier charmed quarks; and from the third family at the LHC we expect an appreciable abundance of bottom quarks. The novel ability to study a large number of these 2nd and 3rd generation particles offers a new opportunity to approach in an experiment the riddle of flavor 
5) In relativistic heavy ion collisions the kinetic energy of the ions feeds the growth of the quark population. These quarks ultimately turn into final-state material particles. This means that we study experimentally the mechanisms leading to the conversion of the colliding-ion kinetic energy into the mass of matter. One can wonder aloud if this sheds some light on the reverse process: Is it possible to convert matter into energy in the laboratory? The last two points show the potential of 'applications' of QGP physics to change both our understanding of, and our place in, the world. For the present we keep these questions in mind. This review will address all the other challenges listed under points 1), 2), and 3) above; however, see also thoughts along comparable foundational lines presented in Subsections 7.3 and 7.4...
(Submitted on 13 Aug 2015 (v1), last revised 16 Sep 2015 (this version, v2))
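Point 1) of Rafelski's wish list is a quick back-of-the-envelope check. With present-day current-quark masses (approximate values, used here only for illustration) the valence quarks supply on the order of a percent of the proton mass, consistent with the "small, percent-sized fraction" in the text:

```python
# Approximate current-quark masses in MeV (illustrative, PDG-style values)
m_u, m_d = 2.2, 4.7
m_proton = 938.3  # proton mass, MeV

m_valence = 2 * m_u + m_d      # uud valence content: ~9.1 MeV
fraction = m_valence / m_proton

print(f"valence quark masses: {m_valence:.1f} MeV "
      f"({100 * fraction:.1f}% of the proton mass)")
```

The remaining ~99% is confinement and QCD binding energy, which is precisely the point being made: the Higgs field sets the quark masses, but almost none of the mass of ordinary matter.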

 Snapshot of two colliding lead ions just after impact (simulation).

At a special seminar on 10 February 2000, spokespersons from the experiments on CERN's Heavy Ion programme presented compelling evidence for the existence of a new state of matter in which quarks, instead of being bound up into more complex particles such as protons and neutrons, are liberated to roam freely.
Theory predicts that this state must have existed at about 10 microseconds after the Big Bang, before the formation of matter as we know it today, but until now it had not been confirmed experimentally. Our understanding of how the universe was created, which was previously unverified theory for any point in time before the formation of ordinary atomic nuclei about three minutes after the Big Bang, has with these results now been experimentally tested back to a point only a few microseconds after the Big Bang. (CERN Bulletin 07/00; 14 February 2000)