Friday, October 24, 2014

On the art of measuring the Hubble constant while looking for our place in the middle of nowhere

The long march toward a "precision cosmology"

The plots below show the time evolution of our knowledge of the Hubble Constant H0, the scaling between radial velocity and distance in kilometers per second per megaparsec, since it was first determined by Lemaître, Robertson and Hubble in the late 1920s. The first major revision to Hubble's value was made in the 1950s, due to the discovery of Population II stars by W. Baade. That was followed by other corrections (for source confusion, etc.) that pretty much dropped the accepted value down to around 100 km/s/Mpc by the early 1960s.

The last plot shows modern (post Hubble Space Telescope) determinations, including results from gravitational lensing and applications of the Sunyaev-Zeldovich effect. Note the very recent convergence to values near 65 ± 10 km/s/Mpc (about 13 miles per second per million light-years)... Currently, the old factor-of-two discrepancy in the determination of the cosmic distance scale has been reduced to a dispersion of the order of 10 km/s/Mpc out of 65-70, or 15-20%. Quite an improvement!
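A quick sanity check of the parenthetical unit conversion above, sketched in Python (the conversion factors are the standard ones, 1 Mpc ≈ 3.2616 million light-years and 1 mile ≈ 1.609 km; the function name is this blogger's illustration, not Huchra's):

```python
# Check: H0 ~ 65 km/s/Mpc expressed in miles per second per million light-years.
KM_PER_MILE = 1.609344       # exact, by definition of the international mile
LY_PER_MPC = 3.2616e6        # light-years in one megaparsec

def kms_mpc_to_miles_per_mly(h0_kms_mpc):
    """Convert H0 from km/s/Mpc to miles/s per million light-years."""
    miles_s_per_mpc = h0_kms_mpc / KM_PER_MILE
    return miles_s_per_mpc / (LY_PER_MPC / 1e6)

print(round(kms_mpc_to_miles_per_mly(65.0), 1))  # -> 12.4, i.e. "about 13"
```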
One major additional change in the debate since the end of the 20th century has been the discovery of the accelerating universe (cf. Perlmutter et al. 1998 and Riess et al. 1998) and the development of "Concordance" Cosmology. In the early 1990's, one of the strongest arguments for a low (~50 km/s/Mpc) value of the Hubble Constant was the need to derive an expansion age of the universe older than the oldest stars, those found in globular star clusters. The best GC ages in 1990 were in the range 16-18 Gyr.

The expansion age of the Universe depends primarily on the Hubble constant but also on the values of various other cosmological parameters, most notably then the ratio of the mean mass density to the closure density, ΩM. For an "empty" universe, the age is just 1/H0, or 9.7 Gyr for H0 = 100 km/s/Mpc and 19.4 Gyr for 50 km/s/Mpc. For a universe with ΩM = 1.000, the theorists' favorite because that is what is predicted by inflation, the age is 2/3 of that for the empty universe. So if the Hubble Constant was 70 km/s/Mpc, the age of an empty universe was 13.5 Gyr, less than the GC ages, and if ΩM was 1.000 as favored by the theorists, the expansion age would only be 9 Gyr, much much less than the GC ages. Conversely, if H0 was 50 km/s/Mpc and ΩM was the observers' favorite value of 0.25, the age came out just about right. Note that this still ruled out ΩM = 1.000 though, inspiring at least one theorist to proclaim that H0 must be 35!

The discovery of acceleration enabled the removal of much of this major remaining discrepancy in timescales, that between the expansion age of the Universe and the ages of the oldest stars, those in globular clusters. The introduction of a cosmological constant, Λ, one of the most probable causes of the acceleration, changes the computation of the Universe's expansion age: a positive ΩΛ increases the age. The Concordance model has H0 = 72 km/s/Mpc and a total Ω = 1.00, made up of ΩΛ = 0.73 and ΩM = 0.27.
Those values yield an age for the Universe of ~13.7 Gyr. This alone would not have solved the timescale problem, but a revision of the subdwarf distance scale, based on significantly improved parallaxes to nearby subdwarfs from the ESA Hipparcos mission, increased the distances to galactic globular clusters and thus decreased their estimated ages. The most recent fits of observed Hertzsprung-Russell diagrams to theoretical stellar models (isochrones) by the Yale group (Demarque, Pinsonneault and others) indicate that the mean age of galactic globulars is more like 12.5 Gyr, comfortably smaller than the expansion age.
John P. Huchra, Copyright 2008
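The 1/H0 arithmetic in Huchra's passage above is easy to reproduce; here is a minimal Python sketch (constants rounded to four figures, which is why it gives 9.8 and 19.6 Gyr where Huchra quotes 9.7 and 19.4; the function is this blogger's illustration, not Huchra's code):

```python
# Expansion age: t = 1/H0 for an empty universe, and (2/3)/H0 for Omega_M = 1.
KM_PER_MPC = 3.0857e19   # kilometers in one megaparsec
SEC_PER_GYR = 3.1557e16  # seconds in one gigayear

def expansion_age_gyr(h0_kms_mpc, omega_m=0.0):
    """Expansion age in Gyr for an empty universe or, crudely, for Omega_M = 1."""
    hubble_time = KM_PER_MPC / h0_kms_mpc / SEC_PER_GYR  # 1/H0 in Gyr
    return hubble_time * 2.0 / 3.0 if omega_m == 1.0 else hubble_time

print(round(expansion_age_gyr(100.0), 1))              # ~9.8 Gyr
print(round(expansion_age_gyr(50.0), 1))               # ~19.6 Gyr
print(round(expansion_age_gyr(70.0, omega_m=1.0), 1))  # ~9.3 Gyr ("only 9 Gyr")
```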
The final steps...
The recent Planck observations of the cosmic microwave background (CMB) lead to a Hubble constant of H0 = 67.3 ± 1.2 km/s/Mpc for the base six-parameter ΛCDM model (Planck Collaboration 2013, hereafter P13). This value is in tension, at about the 2.5σ level, with the direct measurement of H0 = 73.8 ± 2.4 km/s/Mpc reported by Riess et al. (2011, hereafter R11). If these numbers are taken at face value, they suggest evidence for new physics at about the 2.5σ level (for example, exotic physics in the neutrino or dark energy sectors...). The exciting possibility of discovering new physics provides strong motivation to subject both the CMB and H0 measurements to intense scrutiny. This paper presents a reanalysis of the R11 Cepheid data. The H0 measurement from these data has the smallest error and has been used widely in combination with CMB measurements for cosmological parameter analysis (e.g. Hinshaw et al. 2012; Hou et al. 2012; Sievers et al. 2013). The study reported here was motivated by certain aspects of the R11 analysis: the R11 outlier rejection algorithm (which rejects a large fraction, ∼20%, of the Cepheids), the low reduced χ2 values of their fits, and the variations of some of the parameter values with different distance anchors, particularly the metallicity dependence of the period-luminosity relation...
[The] figure [below] compares these two estimates of H0 with the P13 results from the [Planck+WP+highL (ACT+South Pole Telescope)+BAO (2dF Galaxy Redshift and SDSS redshift surveys)] likelihood for the base ΛCDM cosmology and some extended ΛCDM models. I show the combination of CMB and Baryon Acoustic Oscillations [BAO] data since H0 is poorly constrained for some of these extended models using CMB temperature data alone. (For reference, for this data combination H0 = 67.80 ± 0.77 km/s/Mpc in the base ΛCDM model.) The combination of CMB and BAO data is certainly not prejudiced against new physics, yet the H0 values for the extended ΛCDM models shown in this figure all lie within 1σ of the best-fit value for the base ΛCDM model. For example, in the models exploring new physics in the neutrino sector, the central value of H0 never exceeds 69.3 km/s/Mpc. If the true value of H0 lies closer to, say, H0 = 74 km/s/Mpc, the dark energy sector, which is poorly constrained by the combination of CMB and BAO data, seems a more promising place to search for new physics. In summary, the discrepancies between the Planck results and the direct H0 measurements... are not large enough to provide compelling evidence for new physics beyond the base ΛCDM cosmology.
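The "2.5σ" quoted above is simply the difference of the two central values in units of the quadrature sum of their errors; a one-function check in Python (assuming independent Gaussian errors):

```python
import math

def tension_sigma(x1, err1, x2, err2):
    """Tension between two independent measurements, in standard deviations."""
    return abs(x1 - x2) / math.hypot(err1, err2)

# Planck 2013: 67.3 +/- 1.2 vs. Riess et al. 2011: 73.8 +/- 2.4 (km/s/Mpc)
print(round(tension_sigma(67.3, 1.2, 73.8, 2.4), 2))  # -> 2.42, "about 2.5 sigma"
```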

The direct estimates (red) of H0 (together with 1σ error bars) for the NGC 4258 distance anchor and for all three distance anchors. The remaining (blue) points show the constraints from P13 for the base ΛCDM cosmology and some extended models combining CMB data with data from baryon acoustic oscillation surveys. The extensions are as follows: mν, the mass of a single neutrino species; mν + Ωk, allowing a massive neutrino species and spatial curvature; Neff, allowing additional relativistic neutrino-like particles; Neff + msterile, adding a massive sterile neutrino and additional relativistic particles; Neff + mν, allowing a massive neutrino and additional relativistic particles; w, dark energy with a constant equation of state w = p/ρ; w + wa, dark energy with a time-varying equation of state. I give the 1σ upper limit on mν and the 1σ range for Neff.
(Submitted on 14 Nov 2013 (v1), last revised 8 Feb 2014 (this version, v2))

"Precision cosmology": a term to take with a grain of salt

Looking for our place in the middle of nowhere...
Such could be the purpose of cosmology seen from an anthropological perspective. But this blog is not the place for that kind of debate. The blogger prefers to leave the last word to a great lady of astronomy teaching in France, Lucienne Gougenheim, in the hope that the foregoing illustrates how topical the general conclusion of her 1996 pedagogical talk on the Hubble constant and the age of the Universe remains:

  • Distance is not the only parameter that determines the value of H0...
  • The nature of the standard candle is complex; even when we have a good theoretical understanding of the property used as a distance criterion, the importance of the various parameters on which it depends must be discussed.
  • One can only go from knowledge of H0 to that of the age of the Universe within the framework of a cosmological model.
  • ...a complex problem can only be understood (and consequently solved) by taking into account all the parameters on which it depends...

Tuesday, September 9, 2014

Shut up and calculate*... or converse before speculating?

(A message from) the last of the pioneers of particle colliders

... I may be the last still around of the first generation of pioneers that brought colliding beam machines to reality.  I have been personally involved in building and using such machines since 1957 when I became part of the very small group that started to build the first of the colliders.   While the decisions on what to do next belong to the younger generation, the perspective of one of the old guys might be useful.  I see too little effort going into long range accelerator R&D, and too little interaction of the three communities needed to choose the next step, the theorists, the experimenters, and the accelerator people.  Without some transformational developments to reduce the cost of the machines of the future, there is a danger that we will price ourselves out of the market.
Burton Richter (Stanford University and SLAC National Accelerator Laboratory)
Wed, 3 Sep 2014

The high-energy colliders may not reach heaven (and what of the high-luminosity ones?)
In early 2015 the LHC will begin operations again at about 13 TeV compared to the 8-TeV operations before its recent shutdown for upgrading. 
The LHC itself is an evolving machine.  Its energy at its restart next year will be 13 TeV, slowly creeping up to its design energy of 14 TeV.  It will shut down in 2018 for some upgrades to detectors, and shut down again in 2022 to increase the luminosity.  It is this high-luminosity version (HL-LHC) that has to be compared to the potential of new facilities.  There has been some talk of doubling the energy of the LHC (HE-LHC) by replacing the 8-tesla magnets of the present machine with 16-tesla magnets, which would be relatively easy compared to the even more talked-about, bolder step to 100 TeV for the next project.  It is not clear to me why a 30-TeV LHC excites so little interest, but that is the case.
A large fraction of the 100 TeV talk (wishes?) comes from the theoretical community which is disappointed at only finding the Higgs boson at LHC and is looking for something that will be real evidence for what is actually beyond the standard model. Regrettably, there has been little talk so far among the three communities, experimenters, theorists, and accelerator scientists, on what constraints on the next generation are imposed by the requirement that the experiments actually produce analyzable data... 
The most important choice for a new, higher-energy collider is its luminosity, which determines its discovery potential.  If a new facility is to have the same potential for discovery of any kind of new particles as had the old one, the new luminosity required is very roughly proportional to the square of the energy, because cross sections typically drop as E^-2.  A seven-fold increase in energy from that of the HL-LHC to a 100-TeV collider therefore requires a fifty-fold increase in luminosity.  If the luminosity is not increased, save money by building a lower-energy machine where the discovery potential matches the luminosity.
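Richter's scaling argument can be sketched in a couple of lines of Python (taking, as the text does, 14 TeV for the HL-LHC and a 1/E^2 fall of the cross sections; this is his rule of thumb, not a precise cross-section calculation):

```python
# If cross sections fall roughly as 1/E^2, keeping the same event rate for
# particles at the new energy frontier needs luminosity ~ (E_new / E_old)^2.
def required_luminosity_ratio(e_new_tev, e_old_tev):
    return (e_new_tev / e_old_tev) ** 2

print(round(required_luminosity_ratio(100.0, 14.0)))  # -> 51, "a fifty-fold increase"
```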

String theorists' ideas on physics might be popularized only in science fiction magazines ;-)
If you have seen the movie Particle Fever about the discovery of the Higgs boson, you have heard the theorists saying that the only choices today are between Super-symmetry and the Landscape.  Don’t believe them.  Super-symmetry says that every fermion has a boson partner and vice versa.  That potentially introduces a huge number of new arbitrary constants which does not seem like much progress to me.  However, in its simpler variants the number of new constants is small and a problem at high energy is solved.  But, experiments at the LHC already seem to have ruled out the simplest variants.    
The Landscape surrenders to perpetual ignorance.  It says that our universe is only one of a near infinity of disconnected universes, each with its own random collection of force strengths and constants, and we can never observe or communicate with the others.  We can never go further in understanding because there is no natural law that relates the different universes.  The old dream of deriving everything from one constant and one equation is dead.  There are two problems with the landscape idea.  The first is a logical one.  You cannot prove a negative, so you cannot say that there is no more to learn.  The second is practical.  If it is all random there is no point in funding theorists, experimenters, or accelerator builders.  We don't have to wait until we are priced out of the market; there is no reason to go on.
There is a problem here that is new, caused by the ever-increasing mathematical complexity of today’s theory.  When I received my PhD in the 1950s it was possible for an experimenter to know enough theory to do her/his own calculations and to understand much of what the theorists were doing, thereby being able to choose what was most important to work on.  Today it is nearly impossible for an experimenter to do what many of yesterday’s experimenters could do, build apparatus while doing their own calculations on the significance of what they were working on.  Nonetheless, it is necessary for experimenters and accelerator physicists to have some understanding of where theory is, and where it is going.  Not to do so makes most of us nothing but technicians for the theorists.  Perhaps only the theory phenomenologists should be allowed to publish in general readership journals or to comment in movies. 

*A propos ... 

Tuesday, August 26, 2014

Can we bring a little order to the process of selecting physical theories?

From observational consistency to mathematical consistency...
My first point is that the conditions of theory choice should be ordered. Frequently we see the listing of criteria for theory choice given in a flat manner, where one is not given precedence over the other a priori. We see consilience, simplicity, falsifiability, naturalness, consistency, economy, all together in an unordered list of factors when judging a theory. However, consistency must take precedence over any other factors. Observational consistency is obviously central to everyone, most especially our experimental colleagues, when judging the relevance of theory for describing nature. Despite some subtleties that can be present with regard to observational consistency (there can be circumstances where a theory is observationally consistent in a vast number of observables but gets a few wrong, yet no other decent theory is around to replace it; in other words, observational consistency is still the top criterion, but the best theory may not be 100% consistent), it is a criterion that all would say is at the top of the list.
Mathematical consistency, on the other hand, is not as fully appreciated... Mathematical consistency has a preeminent role right up there with observational consistency, and can be just as subtle, time-consuming and difficult to establish. We have seen that in the case of effective theories it trumps other theory-choice considerations such as simplicity, predictivity, testability, etc.
My second point builds on the first. Since consistency is preeminent, it must have highest priority of establishment compared to other conditions. Deep, thoughtful reflection and work to establish the underlying self-consistency of a theory takes precedence over finding ways to make it more natural or to have less parameters (i.e., simple). Highest priority must equally go into understanding all of its observational implications. A theory should not be able to get away with being fuzzy on either of these two counts, before the higher order issues of simplicity and naturalness and economy take center stage. That this effort might take considerable time and effort should not be correlated with a theory’s value, just as it is not a theory’s fault if it takes humans decades to build a collider to sufficiently high energy and luminosity to test it. 
Additionally, dedicated effort on mathematical consistency of the theory, or class of theories, can have enormous payoffs in helping us understand and interpret the implications of various theory proposals and data in broad terms. An excellent example of that in recent years is by Adams et al. [15], who showed that some theories in the infrared with a cutoff cannot be self-consistently embedded in an ultraviolet complete theory without violating standard assumptions regarding superluminality or causality. The temptation can be high to start manipulating uninteresting theories into simpler and more beautiful versions before due diligence is applied to determine if they are sick at their cores. This should not be rewarded... 
Finally, I would like to make a comment about the implications of this discussion for the LHC and other colliders that may come in the future...  
In the years since the charm quark was discovered in the mid 1970’s there has been tremendous progress experimentally and important new discoveries, including the recent discovery of a Higgs boson-like state [20], but no dramatic new discovery that can put us on a straight and narrow path beyond the SM. That may change soon at the LHC. Nevertheless, it is expensive in time and money to build higher energy colliders, our main reliable transporter into the high energy frontier. This limits the prospects for fast experimental progress. 
In the meantime though, hundreds of theories have been born and have died. Some have died due to incompatibility with new data (e.g., simplistic technicolor theories, or simpleminded no-scale supersymmetry theories), but others have died under their own self-consistency problems (e.g., some extra-dimensional models, some string phenomenology models, etc.). In both cases, it was care in establishing consistency with past data and mathematical rigor that have doomed them. In that sense, progress is made. Models come to the fore and fall under the spotlight or survive. When attempting to really explain everything, the consistency issues are stretched to the maximum. For example, it is not fully appreciated in the supersymmetry community that it may even be difficult to find a “natural” supersymmetric model that has a high enough reheat temperature to enable baryogenesis without causing problems elsewhere [21a, 21b]. There are many examples of ideas falling apart when they are pushed very hard to stand up to the full body of evidence of what we already know.
Relatively speaking, theoretical research is inexpensive. It is natural that a shift develop in fundamental science. The code of values in theoretical research will likely alter in time, as experimental input slows. Ideas will be pursued more rigorously and analysed critically. Great ideas will always be welcome. However, soft model building tweaks for simplicity and naturalness will become less valuable than rigorous tests of mathematical consistency. Distant future experimental implications identified for theories not fully vetted will become less valuable than rigorous computations of observational consistency across the board of all currently known data. One can hope that unsparing devotion to full consistency, both observational and mathematical, will be the hallmarks of the future era.

James D. Wells (Submitted on 3 Nov 2012)

(Yet another) philosopher in the physicist's soup

Let us not forget that physics stems from natural philosophy...
"... theoretical physics has not done great in the last decades. Why? Well, one of the reasons, I think, is that it got trapped in a wrong philosophy: the idea that you can make progress by guessing new theory and disregarding the qualitative content of previous theories. This is the physics of the “why not?” Why not studying this theory, or the other? Why not another dimension, another field, another universe? Science has never advanced in this manner in the past. Science does not advance by guessing. It advances by new data or by a deep investigation of the content and the apparent contradictions of previous empirically successful theories. "
By John Horgan | August 21, 2014

Wednesday, June 18, 2014

A philosopher (adds his two cents) at the physicist's table

The spectacle of Nature is a banquet where the phenomenological soup ought to be rich in varied mathematical models
In which the blogger tries to argue for the need to compare the various mathematical models proposed by physicists in order to understand and explore reality further, doing so in his usual way*, that is, by quoting a text:
From the times of Niels Bohr, many physicists, mathematicians and biologists have been attentive to philosophical aspects of our doing. Most of us are convinced that the frontier situation of our research can point to aspects of some philosophical relevance - if only the professional philosophers would take the necessary time to become familiar with our thinking. Seldom, however, do we read something from the philosophers which can inspire us. The US-American philosopher Charles Sanders Peirce (1839-1914) is an admirable exception. In his semiotics and pragmaticist (he avoids the word “pragmatic”) thinking, he provides a wealth of ideas, spread over an immense life work. It seems to me that many of his ideas, comments, and concepts can shed light on the why and how of mathematization...
 The quality of a mathematical model is not how similar it is to the segment of reality under consideration, but whether it provides a flexible and goal-oriented approach, opening for doubts and indicating ways for the removal of doubts (later trivialized by Popper’s falsification claim). More precisely, Peirce claims
  •  Be aware of differences between different approaches! 
  • Try to distinguish different goals (different priorities) of modelling as precisely as possible! 
  • Investigate whether different goals are mutually compatible, i.e., can be reached simultaneously!
  • Behave realistically! Don’t ask: How well does the model reflect a given segment of the world? But ask: Does this model of a given segment of the world support the wanted and possibly wider activities / goals better than other models?
I may add: we have to strike a balance between abstraction vs. construction, top-down vs. bottom-up, and unification vs. specificity. We had better keep aware of the variety of modelling purposes and the multifaceted relations between theory, model and experiment. Our admiration for the power of mathematization, the unreasonable effectiveness of mathematics (Wigner), should not blind us to the persistent and deepening limitations of mathematization in the face of new tasks.

*Transtextual remarks (or a portrait of the blogger in metacognition)
Somewhere in his deeper Self, the transcyberphysicist dreams of himself as an unknown soldier in the epistemological war being waged between the defenders of the various scientific models of quantum gravitation (superstring theories, loop quantum gravity, the tensorial track, noncommutative spectral geometry...); but through a discourse based essentially on an immoderate use of excerpts from his own reading, he also sees himself as a kind of Sancho Panza (his Id, in short ;-), the unfaithful virtual traveling companion of a famous and polemical science blogger (and at times a sorry fellow) whose tribulations he sometimes recounts in the metatext of this blog.

Sunday, May 25, 2014

Simple as the new (but already old) minimal standard model

Column: No comment //or almost

A model that is simple before being beautiful...
//In early May we discussed the apparent simplicity of the laws of Nature, which today are almost all condensed in the Minimal Standard Model (MSM) of particle physics and the theory of general relativity (see this post for a glimpse of the mathematical expression of this relative simplicity). Almost, because since the completion of the MSM in the early 1970s, new experimental facts have emerged in the late 1990s and early 2000s that it does not predict...
There exist many possible directions to go beyond the Minimal Standard Model (MSM): supersymmetry, extra dimensions, extra gauge symmetries (e.g., grand unification), etc. They are motivated to solve aesthetic and theoretical problems of the MSM, but not necessarily to address empirical problems. It is embarrassing that all currently proposed frameworks have some phenomenological problems, e.g., excessive flavor-changing effects, CP violation, too-rapid proton decay, disagreement with electroweak precision data, and unwanted cosmological relics. In this letter, we advocate a different and conservative approach to physics beyond the MSM. We include the minimal number of new degrees of freedom to accommodate convincing (e.g., > 5σ) evidence for physics beyond the MSM. We do not pay attention to aesthetic problems, such as fine-tuning, the hierarchy problem, etc. We take the principle of minimality seriously to write down the Lagrangian that explains everything we know. We call such a model the New Minimal Standard Model (NMSM). In fact, the MSM itself had been constructed in this spirit, and it is a useful exercise to follow through with the same logic at the advent of the major discoveries we have witnessed. Of course, we require it to be a consistent Lorentz-invariant renormalizable four-dimensional quantum field theory, the way the MSM was constructed.


Hooman Davoudiasl, Ryuichiro Kitano, Tianjun Li, Hitoshi Murayama, The New Minimal Standard Model, 12/05/2004


...to cosmological predictions already ten years old...
... The spectral index of the ϕ^2 chaotic inflation model is predicted to be 0.96. This may be confirmed in improved cosmic microwave background anisotropy data, with more years of WMAP and Planck. The tensor-to-scalar ratio is 0.16.

... and still robust today, until proven otherwise
The Planck nominal mission temperature anisotropy measurements, combined with the WMAP large-angle polarization, constrain the scalar spectral index to ns=0.9603±0.0073.
Subtracting the various dust models and re-deriving the r constraint still results in high significance of detection. For the model which is perhaps the most likely to be close to reality (DDM2 cross) the maximum likelihood value shifts to r = 0.16 (+0.06, −0.05), with r = 0 disfavored at 5.9σ.

What about a (the) next (extension of the) new minimal standard model?
 //Still being written

Sunday, May 18, 2014

(A journey) inside the head of Gerard 't Hooft

Column: Curiositêtes (1)
The blogger is starting a new column intended to present physicists more or less well known to the science-loving public, with an emphasis on original work more or less recognized by their peers.

"L' tête" squared
Gerardus 't Hooft is a Dutch theoretical physicist whose surname could be translated into French as "L' tête", since in Dutch the head is het hoofd. Although he was awarded the prestigious Nobel Prize fifteen years ago already, he remains scientifically active (*) and does not hesitate to take an active part in the visibility and defense of his ideas on the internet:
  • his latest articles are always posted in open access on arxiv;
  • he continues to be invited for seminars at prestigious scientific institutions, as can be seen here; the web is also rich in other videos of his talks, and we particularly recommend this one, which is aimed at the general public;
  • his personal site is a gold mine for any curious mind wishing to enter the head of a physicist as generous in sharing his work and ideas as he is great in the importance of his scientific contributions and in the clarity with which he presents contemporary physics and his more original ideas;
  • finally, let us note that he is, to our knowledge, the only physics Nobel laureate to have an account and to take part on the collaborative public site Physics Stack Exchange.
(* receiving a Nobel Prize is not only an honor; it is also a workload and a host of varied social obligations that can hamper the creativity and productivity of the happy recipient.)

The last hero of the Standard Model (and of quantum field theory)?
As his Nobel Prize attests, G. 't Hooft has already secured his place in the history of science through his decisive contribution to the completion of the Standard Model.
Recall that this model, whose essential part, the electroweak unification theory, was largely in place by the end of the sixties (thanks in particular to the work of Glashow, Salam and Weinberg), was still awaiting genuine recognition from the physics community as a whole in the early seventies. That recognition came with the proof by 't Hooft (then still a student, ably supported by his thesis advisor Veltman) of the renormalizability of this theory: a fundamental property that makes it possible to "tame" the infinities that systematically appear in quantum field theory calculations, constantly threatening their predictive power and casting doubt on their internal consistency.
One could go on to mention the prominent place 't Hooft also occupies in the other sector of the Standard Model, the strong interaction modeled by quantum chromodynamics, a theory whose topological and non-perturbative nature he was among the first to understand, even if he perhaps did not master its phenomenological aspects well enough at the time to appreciate the importance of his own results, or the soundness of another physicist's advice urging him to publish his work quickly...
I announced at that meeting my finding that the coefficient determining the running of the coupling strength, that he called β(g^2), for non-Abelian gauge theories is negative, and I wrote down Eq. (5.3) on the blackboard. [Kurt] Symanzik was surprised and skeptical. “If this is true, it will be very important, and you should publish this result quickly, and if you won’t, somebody else will,” he said. I did not follow his advice. A long calculation on quantum gravity with Veltman had to be finished first.
The last sentence of the preceding quotation shows that 't Hooft was already involved in an even broader research program aiming to bring the last known fundamental interaction, gravitation, into the framework of quantum field theory.

One of the founders of the quantum view of black holes and father of the holographic principle
What, then, have 't Hooft's contributions to the unification program of fundamental physics been since then? The written transcript of a lecture given in 1993 in honor of Abdus Salam (another hero of the Standard Model), revised and corrected in 2009, gives us part of the answer:
I am given the opportunity to contemplate some very deep questions concerning the ultimate unification that may perhaps be achieved when all aspects of quantum theory, particle theory and general relativity are combined. One of these questions is the dimensionality of space and time... When we quantize gravity perturbatively we start by postulating a Fock space in which basically free particles roam in a three plus one dimensional world. Naturally, when people discuss possible cut-off mechanisms, they think of some sort of lattice scheme either in 3+1 dimensional Minkowski space or in 4 dimensional Euclidean space. The cut-off distance scale is then suspected to be the Planck scale. Unfortunately any such lattice scheme seems to be in conflict with local Lorentz invariance or Euclidean invariance, as the case may be, and most of all also with coordinate reparametrization invariance. It seems to be virtually impossible to recover these symmetries at large distance scales, where we want them. So the details of the cut-off are kept necessarily vague. The most direct and obvious physical cut-off does not come from non-renormalizability alone, but from the formation of microscopic black holes as soon as too much energy would be accumulated into too small a region. From a physical point of view it is the black holes that should provide for a natural cut-off all by themselves. This has been this author’s main subject of research for over a decade.
't Hooft, Dimensional Reduction in Quantum Gravity, 1993-2009

For a more accessible look at the original quantum view 't Hooft has of the black hole (an elementary particle like any other at the Planck scale?), one can turn to the following excerpt:
For an intuitive understanding of our world, the Hawking effect seems to be quite welcome. It appears to imply that black holes are just like ordinary forms of matter: they absorb and emit things, they have a finite temperature, and they have a finite lifetime. One would have to admit that there are still important aspects of their internal dynamics that are not yet quite understood, but this could perhaps be considered to be of later concern. Important conclusions could already be drawn: the Hawking effect implies that black holes come in a denumerable set of distinct quantum states. This also adds to a useful and attractive picture of what the dynamical properties of space, time and matter may be like at the Planck scale: black holes seem to be a natural extension of the spectrum of elementary physical objects, which starts from photons, neutrinos, electrons and all other elementary particles.
 't Hooft, Quantum gravity without space-time singularities or horizons, 18/09/2009

In the end, these reflections led him to formulate several conjectures, the best known and most widely recognised of which appears to be the holographic principle:
What is known for sure is that Quantum Mechanics works, that the gravitational force exists, and that General Relativity works. The approach advocated by me during the last decades is to consider in a direct way the problems that arise when one tries to combine these theories, in particular the problem of gravitational instability. These considerations have now led to what is called “the Holographic Principle”, and it in turn led to the more speculative idea of deterministic quantum gravity ... 

't Hooft, The Holographic Principle, 2000

The last sentence of the preceding excerpt ends on another, far more controversial idea: that of a deterministic theory underlying quantum mechanics.

(Understanding the physics of) the Planck scale is well worth an (unorthodox) 't Hooft conjecture
Let us look in a little more detail at what can lead a specialist in quantum theory to question its standard interpretation:
It is argued that the so-called holographic principle will obstruct attempts to produce physically realistic models for the unification of general relativity with quantum mechanics, unless determinism in the latter is restored. The notion of time in GR is so different from the usual one in elementary particle physics that we believe that certain versions of hidden variable theories can – and must – be revived. A completely natural procedure is proposed, in which the dissipation of information plays an essential role.
't Hooft, Quantum Gravity as a Dissipative Deterministic System, 03-04/1999
Beneath Quantum Mechanics, there may be a deterministic theory with (local) information loss. This may lead to a sufficiently complex vacuum state, and to an apparent non-locality in the relation between the deterministic (“ontological”) states and the quantum states, of the kind needed to explain away the Bell inequalities. Theories of this kind would not only be appealing from a philosophical point of view, but may also be essential for understanding causality at Planckian distance scales.

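The Bell inequalities mentioned in the excerpt can be made concrete with a toy numerical experiment. The sketch below (an illustrative local hidden-variable model chosen for this post, not one of 't Hooft's constructions) estimates the CHSH quantity S for two parties whose ±1 outcomes are fixed deterministically by a shared variable λ; any such local model obeys |S| ≤ 2, whereas quantum mechanics reaches 2√2 ≈ 2.83 with entangled particles.

```python
import math
import random

random.seed(0)

def correlation(model, x, y, trials=50000):
    """Empirical correlation E(x, y) of the two +/-1 outcomes,
    averaged over a shared hidden variable lambda."""
    total = 0
    for _ in range(trials):
        lam = random.uniform(0.0, 2.0 * math.pi)   # shared hidden variable
        total += model(x, lam) * model(y, lam)
    return total / trials

def sign_model(angle, lam):
    """A local deterministic outcome: +1 or -1, fixed by the local
    measurement angle and the shared variable lambda alone."""
    return 1 if math.cos(angle - lam) >= 0.0 else -1

# CHSH combination S = E(a,b) + E(a',b) + E(a',b') - E(a,b')
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = (correlation(sign_model, a, b) + correlation(sign_model, a2, b)
     + correlation(sign_model, a2, b2) - correlation(sign_model, a, b2))

print(round(S, 2))          # close to 2, the classical (Bell) maximum
assert abs(S) <= 2.1        # no local model can exceed 2 (up to sampling noise)
# Quantum mechanics predicts S = 2*sqrt(2) ~ 2.83 for entangled pairs.
```

The point of the exercise is the bound itself: whatever deterministic outcome functions one plugs in, S never exceeds 2, which is exactly the discrepancy with experiment that any "deterministic underpinning" must explain away.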
Unsurprisingly, this conjecture of 't Hooft's seems to meet with a fair amount of scepticism, all the more so because it calls into question nothing less than the programme of quantum computing, as we shall see in the next paragraph. The most explicit criticisms are naturally voiced on the blogosphere. To get a sense of the more official reaction of his peers as well, one can read this recent interview with 't Hooft in which he recounts a brief discussion with the physicist John Bell; but that meeting dates back to the 1980s, Bell has since passed away, and the model 't Hooft has developed (based on cellular automata) has been refined since then.

A fine example of online scientific exchange on Physics Stack Exchange
Fortunately, the internet offers us another interesting virtual venue for exchanging points of view, through a question-and-answer site where one can follow an online dialogue between 't Hooft and a major figure of quantum information, namely Peter Shor (the father of the algorithm that bears his name):

The problem with these blogs is that people are inclined to start yelling at each other. (I admit, I got infected and it's difficult not to raise one's electronic voice.) I want to ask my question without an entourage of polemics.
My recent papers were greeted with scepticism. I've no problem with that. What disturbs me is the general reaction that they are "wrong". My question is summarised as follows:
Did any of these people actually read the work and can anyone tell me where a mistake was made?
... A revised version of my latest paper was now sent to the arXiv ... Thanks to you all. My conclusion did not change, but I now have more precise arguments concerning Bell's inequalities and what vacuum fluctuations can do to them.
asked Aug 15 '12 at 9:35 G. 't Hooft

Answer: I can tell you why I don't believe in it. I think my reasons are different from most physicists' reasons, however. Regular quantum mechanics implies the existence of quantum computation. If you believe in the difficulty of factoring (and a number of other classical problems), then a deterministic underpinning for quantum mechanics would seem to imply one of the following.

  • There is a classical polynomial-time algorithm for factoring and other problems which can be solved on a quantum computer.
  • The deterministic underpinnings of quantum mechanics require 2^n resources for a system of size O(n).
  • Quantum computation doesn't actually work in practice.
None of these seem at all likely to me ... For the third, I haven't seen any reasonable way you could make quantum computation impossible while still maintaining consistency with current experimental results.
answered Aug 17 '12 at 14:11 Peter Shor
Comment: @Peter Shor: I have always opted for your 3rd possibility: the "error correcting codes" will eventually fail. The quantum computer will not work perfectly (It will be beaten by a classical computer, but only if the latter would be scaled to Planckian dimensions). This certainly has not yet been contradicted by experiment. – G. 't Hooft Aug 17 '12 at 20:45

Let us note that, for the moment, experimental physics does not yet seem to have refuted 't Hooft's prediction. 
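Shor's second option above, the 2^n resource cost, is easy to make concrete: a brute-force classical simulation of n qubits must store 2^n complex amplitudes, one per basis state. A minimal sketch (illustrative only):

```python
import math

def uniform_superposition(n):
    """State vector after a Hadamard on each of n qubits: 2**n equal
    amplitudes, one per computational basis state."""
    dim = 2 ** n
    return [1.0 / math.sqrt(dim)] * dim

state = uniform_superposition(20)
print(len(state))                                    # 1048576 amplitudes
assert abs(sum(a * a for a in state) - 1.0) < 1e-9   # state is normalised
# Each additional qubit doubles the classical memory needed: the 2**n
# scaling behind the second option in Shor's list, while the quantum
# device itself only gains one physical qubit.
```

At 20 qubits this is still manageable; well before 100 qubits the amplitude list would exceed any conceivable classical memory, which is why a deterministic underpinning that tracked them all explicitly would need exponential resources.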

Is nature crazier at the Planck scale than string theorists can imagine? 
This is a phrase borrowed almost literally from a passage in the foundational 1993 article by 't Hooft quoted above. The reader may see it as a nod to a famous blogger who is as critical today as ever of the deterministic model that 't Hooft proposes as underlying quantum mechanics. Lubos Motl, to name no names, takes advantage of the preprint of a long review article by the Dutch physicist on this subject to attack one of its postulates: the existence of an ontological basis in the Hilbert space describing the possible states of a quantum system. As usual, Lubos develops an argument backed by examples of great pedagogical value for anyone wishing to understand quantum physics; but his analysis of the thesis he criticises seems to us too superficial for the reader to form a precise idea of its heuristic power and of its epistemological stakes.
For our part, we shall content ourselves (for now) with highlighting the following points, which seem to us interesting:
... I do find that local deterministic models reproducing quantum mechanics, do exist; they can easily be constructed. The difficulty signalled by Bell and his followers, is actually quite a subtle one. The question we do address is: where exactly is the discrepancy? If we take one of our classical models, what goes wrong in a Bell experiment with entangled particles? Were assumptions made that do not hold? Or do particles in our models refuse to get entangled? ...
The evolution is deterministic. However, this term must be used with caution. “Deterministic” cannot imply that the outcome of the evolution process can be foreseen. No human, nor even any other imaginable intelligent being, will be able to compute faster than Nature itself. The reason for this is obvious: our intelligent being would also have to employ Nature’s laws, and we have no reason to expect that Nature can duplicate its own actions more efficiently than itself. ...
... There are some difficulties with our theories that have not yet been settled. A recurring mystery is that, more often than not, we get quantum mechanics alright, but a hamiltonian emerges that is not bounded from below. In the real world there is a lower bound, so that there is a vacuum state. A theory without such a lower bound not only has no vacuum state, but it also does not allow a description of thermodynamics using statistical physics. Such a theory would not be suitable for describing our world. How serious do we have to take this difficulty? We suspect that there will be several ways to overcome it, the theory is not yet complete, but a reader strongly opposed to what we are trying to do here, may well be able to find a stick that seems suitable to destroy our ideas. Others, I hope, will be inspired to continue along this path. There are many things to be further investigated, one of them being superstring theory. This theory seems to be ideally suited for the approach we are advocating.  
G. 't Hooft, The Cellular Automaton Interpretation of Quantum Mechanics, 7/05/2014
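Since 't Hooft's interpretation is built on cellular automata, a toy example may help fix ideas. The sketch below is a generic second-order reversible automaton (a standard construction, not 't Hooft's actual model): it illustrates the kind of deterministic, information-preserving evolution at stake, since running the very same rule backwards recovers the initial state exactly.

```python
# Second-order reversible cellular automaton: the new row is the XOR of a
# function of the current row (here, the two neighbours) with the previous
# row. XOR makes the rule exactly invertible.

def step(prev, cur):
    """One update: new cell = left neighbour XOR right neighbour XOR
    the cell's own state one step earlier (periodic boundary)."""
    n = len(cur)
    return [cur[(i - 1) % n] ^ cur[(i + 1) % n] ^ prev[i] for i in range(n)]

# Evolve 10 steps from two initial rows.
prev = [0] * 16
cur = [0] * 7 + [1] + [0] * 8
history = [prev, cur]
for _ in range(10):
    history.append(step(history[-2], history[-1]))

# Run the same rule with the two last rows swapped: time flows backwards
# and the initial condition is recovered bit for bit.
back_prev, back_cur = history[-1], history[-2]
for _ in range(10):
    back_prev, back_cur = back_cur, step(back_prev, back_cur)
assert [back_prev, back_cur] == [cur, prev]   # nothing was "forgotten"
```

Deterministic here means exactly what the quotation says: every configuration follows uniquely from the previous ones, yet the only way to know the outcome is to run the evolution itself.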

A prophecy about an exact local conformal symmetry of Nature (spontaneously broken below the Planck scale)
Beyond this debate on the interpretation of quantum mechanics, 't Hooft's work offers a chance to see a researcher in action, ready to elaborate and defend bold hypotheses by building models that are as precise as possible (and, to some extent, falsifiable), in order to see how far, from his point of view, the concepts that have served physics so well, such as causality and locality, can guide him.
To conclude, here is one last conjecture of his, which may perhaps have a greater future, as presented on his personal website:
I claim to have found how to put quantum gravity back in line so as to restore quantum mechanics for pure black holes. It does not happen automatically, you need a new symmetry. It is called local conformal invariance. This symmetry is often used in superstring and supergravity theories, but very often the symmetry is broken by what we call “anomalies”. These anomalies are often looked upon as a nuisance but a fact of life. I now claim that black holes only behave as required in a consistent theory if all conformal anomalies cancel out. This is a very restrictive condition, and, very surprisingly, this condition also affects the Standard Model itself. All particles are only allowed to interact with gravity and with each other in very special ways. Conformal symmetry must be an exact  local symmetry, which is spontaneously broken by the vacuum,  exactly  like in the Higgs mechanism.

This leads to the prediction that models exist where all unknown parameters of the Standard Model, such as the fine-structure constant, the proton-electron mass ratio, and in fact all other such parameters, are computable. Up till now these have been freely adjustable parameters of the theory, to be determined by experiment, but they were not yet predicted by any theory.
I am not able to compute these numbers today because the high energy end of the elementary particle properties is not known. There is one firm prediction: constants of Nature are truly constant. All attempts to detect possible space and time dependence of the Standard Model parameters will give negative results. This is why I am highly interested in precision measurements of possible space-time dependence of constants of Nature, such as the ones done by using a so-called "frequency comb". These are high precision comparisons between different spectral frequencies in atoms and molecules. They tell us something very special about the world we live in. 
't Hooft

//Written, with final editorial touches, on Thursday 22 May 2014.