Most theories of dark energy face two problems. First, one must explain why the bare cosmological constant vanishes. Then one must find a dynamical mechanism imitating a small cosmological constant and explain why Ω_D ∼ 0.7 at the present cosmological epoch.

We have studied the cosmological consequences of the simplest toy model of dark energy based on N = 8 supergravity and found that it can completely resolve the cosmological constant and coincidence problems plaguing most quintessence models. Indeed, one cannot simply add a cosmological constant to this theory. The only way to introduce something similar to the cosmological constant is to place the system close to the top of the effective potential. If the potential is very high, then it is also very curved, since V″(0) = −2V(0). We have found that the Universe can live long enough only if the field φ is initially within the Planck distance of the top, |φ₀| < M_p, which is reasonable, and if V(0) (which plays the role of Λ in this theory) does not much exceed the critical value ρ₀ ∼ 10^{-120} M_p^4.
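This constraint can be seen from a one-line estimate. In reduced Planck units (M_p = 1), with the energy density dominated by V(0), the Friedmann equation ties the tachyonic mass at the top of the potential directly to the Hubble rate:

```latex
% Friedmann equation near the top of the potential (reduced Planck units):
%   H^2 \simeq V(0)/3 .
% Combining this with the curvature relation of the potential:
\[
  m^2 \;=\; V''(0) \;=\; -2V(0) \;\simeq\; -6H^2 .
\]
% Thus |m| is of order H: the tachyonic instability develops on the
% Hubble timescale, and a long-lived universe requires V(0) \lesssim \rho_0.
```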

We made the simplest assumption that the values of Λ and φ₀ are uniformly distributed. However, in realistic models the situation may be different. For example, as already mentioned, Λ^{1/2} is related to the 4-form flux in d = 11 supergravity through Eq. (12.3). This suggests that the probability distribution should be uniform not with respect to Λ and φ₀, but with respect to Λ^{1/2} and φ₀. We studied this possibility and found that the numerical results change, but the qualitative features of the model remain the same.
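The effect of the two priors can be illustrated with a toy Monte Carlo sketch (the variable names and the normalization of Λ below are purely illustrative assumptions, not part of the model): sampling Λ^{1/2} uniformly and then squaring concentrates probability at small Λ, which is what changes the numerical, but not the qualitative, results.

```python
import random
from statistics import median

random.seed(0)
N = 100_000
lam_max = 1.0  # Lambda measured in units of a toy cutoff scale

# Prior 1: Lambda itself uniform on [0, lam_max].
flat_in_lambda = [random.uniform(0.0, lam_max) for _ in range(N)]

# Prior 2: sqrt(Lambda) uniform (as suggested by the 4-form flux),
# so Lambda = x**2 with x uniform on [0, sqrt(lam_max)].
flat_in_sqrt = [random.uniform(0.0, lam_max**0.5) ** 2 for _ in range(N)]

# A flat prior in sqrt(Lambda) favours small Lambda: the median of the
# induced Lambda distribution drops from ~0.5 to ~0.25.
print(median(flat_in_lambda), median(flat_in_sqrt))
```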

The probability distribution for φ₀ may be non-uniform even if V(φ) is very flat at φ < 1. First, fields with φ ≫ 1 (i.e. φ ≫ M_p) may be forbidden, or the effective potential at large φ may blow up. This is often the case in N = 1 supergravity. Second, interactions with other fields in the early Universe may create a deep minimum, capturing the field at some time-dependent point φ < 1. This also often happens in N = 1 supergravity and is one of the features of the cosmological moduli problem. If this happens in our model, one can ignore the region with φ₀ > 1 (the right part of Figs. 12.5 and 12.6) in the calculation of probabilities. This will increase the probability of living in an accelerating model with 0.5 < Ω_D < 0.9.

Our estimates have assumed that the Universe must live as long as 14 Gyr, so that human life can appear. One could argue that, since the first stars and planets were formed long ago, we may not need much more than 5–7 Gyr for the development of life. This would somewhat decrease our estimate for the probability of living in an accelerating model with 0.5 < Ω_D < 0.9, but it would not alter our results qualitatively. On the other hand, most planets were probably formed very late in the history of the Universe, so one may argue that the probability of the emergence of human life becomes much greater at t > 14 Gyr, especially if one keeps in mind how many other coincidences have made life possible. If one assumes that human life is extremely improbable (after all, we have no indications of its existence elsewhere in the Universe), then one may argue that the probability of its emergence becomes significant only if the total lifetime of the Universe can be much greater than 14 Gyr. This would increase our estimate for the probability of living in an accelerating model with 0.5 < Ω_D < 0.9.

So far, we have not used any considerations based on the theory of galaxy formation, as developed by Weinberg [9], Efstathiou [10], Vilenkin [11], Martel [12] and Garriga [13]. If we do so, the probability of the emergence of life for Λ ≫ ρ₀ will be additionally suppressed, which will increase the probability of living in an accelerating model with 0.5 < Ω_D < 0.9.

To the best of my knowledge, only in models based on extended supergravity are the relation |m²| ∼ H² and the absence of freedom to add a bare cosmological constant properties of the theory, rather than of a particular dynamical regime. That is why an increase of V(0) in such models entails an increase in |m²|. This, in turn, speeds up the development of the cosmological instability, which leads to anthropically unacceptable consequences.

The N = 8 theory discussed here is just a toy model. In this case, we have been able to find a complete solution to the cosmological constant and coincidence problems (explaining why Λ ∼ ρ₀ and why Ω_D noticeably differs from both zero and unity at the present stage of cosmological evolution). This model has important advantages over many other theories of dark energy, but, to make it fully realistic, one would need to construct a complete theory of all fundamental interactions, including the dark energy sector described above. This is a very complicated task, which goes beyond the scope of the present investigation. However, most of our results are not model-specific.

It would be interesting to apply our methods to models unrelated to extended supergravity. A particularly interesting model is axion quintessence. The original version had the potential

V(φ) = Λ [C + cos(φ/f)],

where it was assumed that C = 1. The positive definiteness of the potential, and the fact that it has a minimum at V = 0, could then be motivated by global supersymmetry arguments. In supergravity and M/string theory, these arguments are no longer valid and the value of the parameter C is not specified.

In the axion model of quintessence based on M/string theory, the potential had the form V = Λ cos(φ/f), without any constant part. This potential has a maximum at φ = 0, with V(0) = Λ. The Universe collapses when the field φ rolls to the minimum of its potential, V(φ_min) = −Λ. The curvature of the effective potential at its maximum is given by

m² = −Λ/f² = −3H₀²/f².    (12.9)

For f = M_p = 1, one has m² = −3H₀² and, for f = M_p/√2, one has m² = −6H₀², exactly as in N = 8 supergravity. Therefore, the anthropic constraints on Λ based on the investigation of the collapse of the Universe in this model are similar to the constraints obtained in our N = 8 theory. However, in this model, unlike those based on extended supergravity, one can easily add or subtract any value of the cosmological constant. In order to obtain useful anthropic constraints on the cosmological constant, one should use a combination of our approach and the usual theory of galaxy formation.
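A minimal numerical check of Eq. (12.9), in illustrative units where the height of the potential is fixed by Λ = 3H₀² (the names below are purely illustrative, not from the original model):

```python
import math

H0 = 1.0           # Hubble rate today, arbitrary units (only ratios matter)
Lam = 3 * H0**2    # height of the axion potential, Lambda = 3 H0^2

def m2(f):
    """Curvature of V = Lambda*cos(phi/f) at its maximum phi = 0."""
    return -Lam / f**2

print(m2(1.0))               # f = M_p        -> -3 H0^2
print(m2(1 / math.sqrt(2)))  # f = M_p/sqrt 2 -> -6 H0^2
```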

In this sense, our main goal is not to replace the usual anthropic approach to the cosmological constant problem, but to enhance it. We find it very encouraging that our approach may strengthen the existing anthropic constraints on the cosmological constant in the context of theories based on extended supergravity. One may find it hard to believe that, in order to explain the results of cosmological observations, one should consider theories with an unstable vacuum state. However, one should remember that the exponential expansion of the Universe during inflation, as well as the process of galaxy formation, are themselves results of gravitational instability, so we should learn how to live with the idea that our world may be unstable.

While the model discussed above is quite interesting, it is only partially related to a consistent d = 10 string theory. After many unsuccessful attempts to find a dS solution in string theory, we have recently come up with a class of such solutions [2]. We next outline the construction of metastable dS vacua of type IIB string theory and discuss their relation to AR.

Our starting point is the highly warped IIB compactifications with nontrivial NS and RR 3-form fluxes.1 By incorporating known corrections to the superpotential from Euclidean D-brane instantons or gaugino condensation, one can construct models with all moduli fixed, yielding a supersymmetric anti-de Sitter (AdS) vacuum. Inclusion of a small number of anti-D3 branes in the resulting warped geometry allows one to raise the AdS minimum and make it a metastable dS ground state. The lifetime of our metastable dS vacuum is much greater than the cosmological timescale of 10 Gyr. We have also proven

1 NS stands for Neveu-Schwarz: bosonic closed-string states whose left- and right-moving parts are bosonic. RR stands for Ramond-Ramond: bosonic closed-string states whose left- and right-moving parts are fermionic.

that, under certain conditions, the lifetime of dS space in string theory will always be shorter than the recurrence time.

Our basic strategy is first to freeze all the moduli present in the compactification, while preserving supersymmetry. We then add extra effects that break supersymmetry in a controlled way and lift the minimum of the potential to a positive value, yielding dS space. To illustrate the construction, we work in the specific context of IIB string theory compactified on a Calabi-Yau (CY) manifold in the presence of flux. Such constructions allow one to fix the complex structure moduli, but not the Kähler moduli, of the compactification. In particular, to leading order in α′ and g_s, the Lagrangian possesses a no-scale structure which does not fix the overall volume.2 (Henceforth we shall assume that this is the only Kähler modulus; it is plausible that one can construct explicit models which have this property.) In order to achieve the first step of fixing all moduli, we therefore need to consider corrections which violate the no-scale structure. Here we focus on quantum non-perturbative corrections to the superpotential, which are calculable, and show that these can lead to supersymmetry-preserving AdS vacua in which the volume modulus is fixed in a controlled manner.
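For orientation, the standard form of these non-perturbative corrections, as used in [2], can be sketched as follows (ρ is the volume modulus; W₀, A and a are model-dependent constants):

```latex
% Tree-level flux superpotential plus a non-perturbative correction
% (from Euclidean D3-brane instantons or gaugino condensation):
\[
  W \;=\; W_0 + A\, e^{\,i a \rho}, \qquad
  K \;=\; -3 \ln\!\left[ -i (\rho - \bar\rho) \right].
\]
% Solving D_\rho W = 0 fixes \rho at a finite value, giving a
% supersymmetric AdS vacuum with V_{\rm AdS} = -3\, e^{K} |W|^2 < 0.
```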

Having frozen all moduli, we then introduce supersymmetry breaking by adding a few anti-D3 branes in the compactification. The extent of supersymmetry breaking, and the resulting cosmological constant of the dS minimum, can be varied in our construction, within certain limits, in two ways: one may vary the number of anti-D3 branes, or one may vary the warping of the compactification (by tuning the number of flux quanta through various cycles). It is important to note that this corresponds to freedom in tuning discrete parameters, so while fine-tuning is possible, one should not expect to be able to tune to arbitrarily high precision.
