The Standard and the Alternative Cosmological Models,
Distance Calculation to Galaxies without the Hubble Constant
For the alternative cosmological models considered in the extended version of this entry, distances to galaxies are calculated without using the Hubble constant. This process is mentioned in the second narrative and described in detail in the third. According to the third narrative, as the density of the relativistic mass of the universe decreases while the universe expands, new matter is created by a phase transition process which results in a constant density of ordinary matter. While the universe develops on the basis of this postulate of the emergence of new matter, it is assumed that matter arises as a result of such a phase transition of dark energy into both new dark and visible matter. It is somewhat irrelevant how we describe dark energy, calling it aether, or, vice versa, calling the change a transition of aether into dark energy. It should be clear to everyone that this renaming does not change the essence of the phase transition phenomenon. It should be noted that, unlike all well-known geometric models of the Euclidean space of our existence, this phase transition of dark energy into matter would accordingly be a stereographic projection of a three-dimensional surface onto a four-dimensional globe.
“In the beginning, there was nothing. Well, not quite nothing—more of a Nothing with Potential. A nothingness in which packets of energy fleeted in and out of existence, popping into oblivion as quickly as they appeared. One of these fluctuations had just enough energy to take off. It inflated wildly out of control—one moment infinitesimally small, moments later light-years across. All of space and time was created in that instant, and as that energy slowed, it cooled and froze into matter—protons and neutrons and photons. This baby Universe kept expanding, over billions of years, and those particles coalesced into stars and planets and eventually humans.”
Cosmological phenomena are not exactly a subject of physical science, as many might think. We cannot perform experiments on the Universe. In contrast, physics is a science where researchers can conduct experiments on various natural phenomena, experiments that can be reproduced by others in a laboratory. In cosmology, we can only look at the skies and speculate about what stands behind the light reaching our telescopes.
We can predict the locations of planets and stars at closer distances by applying classical Newtonian mechanics, ordering events on a time scale. Still, cosmology relies on numerous pictures of the Universe, aiming to shed some light on phenomena at faraway distances. Of course, researchers can verify the correctness of the mathematical reasoning performed by their colleagues, but this does not bring them closer to the truth hidden in the vast expanses of the Universe.
Dark matter is an example of the speculation that is inherent in the study of the Universe. Cosmologists label as dark matter something that cannot otherwise be explained, and have even introduced the concept of dark energy. Yet, despite these many assumptions and speculations, cosmology is very interesting and useful, even if it is not an experimental science. As cosmologists also speculate about the origins of the Universe, they posit the existence of some point prior to which neither time nor space existed, and refer to it as the singularity problem. Space has a density of energy which largely determines the dynamics of cosmic objects and of the Universe as a whole. Given these many assumptions, it is reasonable to speculate about the dynamics of the Universe, as many researchers claim that it is expanding and that its density of energy is decreasing.
Space and time, despite the speculations, are still subjects of physical science and are defined in general terms, by presently accepted theory, as fundamental structures for coordinating objects and their states: a system of relationships reflecting the coordination of coexisting objects (distance, orientation, etc.) forms space, and a system of relationships determining the succession of states or phenomena (the flow of events, their ordering and precedence, etc.) accordingly forms time. The space in which we live, the usual three-dimensional space, is a physical object bounded by a certain set of parameters, whose change over time is described by dynamical systems.
It seems that the mathematical apparatus of dynamical systems is quite sufficient for solving problems associated with the motion of matter in the Universe. Indeed, "Theories in physics are not at all hypotheses; they are not just supported by more or less numerous facts. Theories should have consistent math, such as topology. If the physical theory does not obey the topology, it is incorrect. Topology lays the foundation for physics, not vice versa" (Public Domain, ResearchGate 2019). However, the theory of space based on the topological principles of General Relativity (GR) has brought problems related to space and time, perhaps, to a dead end, both in cosmic dynamical systems and in the attempts to form a quantum theory of gravity. In such a situation there naturally arises the need for alternative approaches to the description of reality. Unfortunately, the choice of alternative paths is somewhat limited, and if such a path is indicated, one must first understand the present situation, then try to identify any contradictions between observations and theory, and finally try to offer something new, even if it is not as perfect as hoped, for example by deviating from the quest to explain all of reality and settling for an insight of lesser magnitude. This is our motive for the narratives offered to a thoughtful reader.
In the first narrative, a modern view of cosmological reality is given, as taken from the standard perspective. The obvious sign of the standard view is the concentration of activity not on solving some physical problem or better explaining observed reality, but on discussing "falling into black holes", "parallel worlds", the possibility of travelling into the past, and the like. All of these ideas lead to great science fiction but highly questionable science.
There are also many alternative cosmologies to the Big Bang model. Most of these are unknown to mainstream theorists, since few of them were proposed by professional theorists. Indeed, most of them can be considered steady-state theories, meaning that the observable universe would look generally the same everywhere in time and therefore would be much older, or could even be infinite in age and size. Most such models have a different explanation for galactic redshifts. One of these theories is discussed in some detail in the second narrative, and an extensive redshift comparison of calculated distances appears in the third narrative, in the latter part of the book.
In the second narrative, based on factual material, a thoughtful reader will become familiar with a number of contradictions and paradoxes of the standard model of the Universe. Many researchers and theorists, in order to explain the paradoxes, try to expand the mathematical apparatus to the point of absurdity, using various paradoxical mathematical constructions. Indeed, in many cases this is possible. However, it is far from the common sense which people have long used to form theories, contemplate, and explain reality. Remember Aristarchus, who with his primitive tools needed only common sense and a knowledge of trigonometry to calculate the distance from the Earth to the Sun with great accuracy, using only the dimensions of the Earth's shadow projected onto the Moon's surface. Aristarchus of Samos, fl. c. 310 BC to c. 230 BC, was a Greek astronomer and mathematician of the Alexandrian school. It is said that he was the first to propose the heliocentric theory of the Universe.
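Aristarchus's style of reasoning can be illustrated in a few lines. The sketch below uses his half-moon (quadrature) method for the ratio of the Sun's distance to the Moon's distance; the function name is ours, and the angle of about 87 degrees is the historical value he is reported to have used (the modern value is closer to 89.85 degrees).

```python
import math

def sun_distance_ratio(half_moon_angle_deg: float) -> float:
    """Ratio of the Earth-Sun to Earth-Moon distance from the
    half-moon (quadrature) angle. When the Moon is exactly half lit,
    the Sun-Moon-Earth angle is 90 degrees, so simple trigonometry
    gives d_sun / d_moon = 1 / cos(measured angle)."""
    return 1.0 / math.cos(math.radians(half_moon_angle_deg))

# Aristarchus's reported measurement of ~87 degrees gives a ratio
# near 19; the modern angle (~89.85 degrees) gives a ratio near 382.
print(round(sun_distance_ratio(87.0), 1))
print(round(sun_distance_ratio(89.85), 1))
```

The geometry is sound; Aristarchus's large underestimate came entirely from the difficulty of measuring an angle so close to 90 degrees.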
Extending the list of paradoxes by searching the public domain, we found that one amateur astronomer derived different expansion rates of the universe from redshifts measured in different directions. He then drew attention to something even stranger: he discovered that the sky can be divided into two sets of directions. The first is a set of directions in which many galaxies lie in front of more distant galaxies. The second is a set of directions in which distant galaxies have no galaxies in the foreground. We call the first group of directions in space "region A" and the second group "region B". Our amateur astronomer discovered an amazing thing. If we confine ourselves to studying distant galaxies in region A, and calculate the Hubble constant only on the basis of these studies, one value of the constant is obtained. If we do the same studies for region B, we get a completely different value. It turns out that the rate of expansion of the universe, according to these studies, varies depending on how and under what conditions we measure the redshifts coming from distant galaxies. If we measure them wherever there are galaxies in the foreground, there will be one result. If the foreground galaxies are missing, the result will be different. If the Universe is really expanding, then how can foreground galaxies so influence the movement or redshifts of the far more distant galaxies behind them as to indicate a different expansion constant?
Galaxies are at a great distance from each other; they cannot blow on each other as we blow on a balloon. Therefore, it is logical to assume that the problem lies in the riddles of redshift. That is exactly what the amateur astronomer was thinking. He suggested that the measured redshifts of distant galaxies, on which the standard Big Bang model of cosmology is built, are not at all related to the expansion of the Universe, but rather are caused by a completely different effect. He suggested that this effect is associated with a so-called aging mechanism of electromagnetic radiation approaching us from afar. Historically such ideas have been called "tired light."
Our amateur astronomer asserted that this redshifting of light (EM radiation) happens in accordance with accepted physical laws and is remarkably similar to many other phenomena of nature. In nature, whenever something moves, there is always something else that impedes the movement. Such impeding forces also exist in outer space. The amateur astronomer believed that as light travels through the vast distances between galaxies, the effect of redshift begins to appear. He associated this effect with the hypothesis of the aging (energy loss) of light: light loses energy crossing the vast expanses of space, in which there are certain agents that absorb light's energy, e.g., aether. The older the light crossing space, generally the more redshifted it becomes. Therefore, the redshift of galactic light would be proportional to the distance light travels rather than to any other factor. After coming to this conclusion, the amateur astronomer described the Universe as a non-expanding structure, in which galaxies would be more or less stationary.
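The proportionality claim above can be made concrete with a toy energy-loss law. This is an illustrative assumption of ours, not the amateur astronomer's own formula: if a photon loses a fixed fraction of its energy per unit distance, then E(d) = E0·exp(-d/L) and the redshift z = exp(d/L) - 1, which is nearly proportional to distance whenever d is small compared with the loss length L.

```python
import math

def tired_light_redshift(distance_mpc: float, loss_length_mpc: float) -> float:
    """Redshift under a toy tired-light law in which a photon loses a
    fixed fraction of its energy per unit distance travelled:
        E(d) = E0 * exp(-d / L)  =>  z = E0 / E(d) - 1 = exp(d / L) - 1.
    loss_length_mpc (L) is a free parameter of this sketch."""
    return math.exp(distance_mpc / loss_length_mpc) - 1.0

# Choosing L = c / H0 (~4283 Mpc for H0 = 70 km/s/Mpc) makes z ~ d/L
# for nearby galaxies, mimicking the observed linear redshift-distance law:
L = 4283.0  # Mpc, illustrative
for d in (100.0, 500.0, 1000.0):
    print(d, round(tired_light_redshift(d, L), 4))
```

Note that at large distances this law bends upward away from linearity, which is one place where such a model could in principle be tested against observation.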
As stated above, there have been many other similar hypotheses since the first and most famous one was proposed in 1929 by Fritz Zwicky. Zwicky suggested that if photons lose energy over great distances through collisions with other particles in a regular way, more distant objects would appear redder than closer ones. The spectral lines of the elements that produced the initial light would shift to longer wavelengths because of these collisions and would therefore move toward the red end of the spectrum, redshifted from where they started. Aside from the tired light proposal by our amateur, the regional differences in redshift remain unexplained, and of the few who know of them, many believe the effect is too prevalent to be a coincidence. Indeed, much of the aging idea is no longer endorsed by theorists, and nearly all astronomers would scoff at the conclusions drawn above, both in light of present theory and because the logic fails in light of present-day observations.
Today nearly all astronomers would say that this hypothesis of tired light was worse than just unsatisfactory; they would say that it has been disproved by observed time dilation, the slowing of time that causes an event to last longer. This is most noticeable in the dying stars called Type Ia supernovae. All supernovae of this type have a similar light and time profile, whereby the duration of their great brightness lasts only a few days, very close to the same amount of time for relatively close events. Then, after peaking, this great brightness steadily dies off in just a few days. It has been shown by a great number of observations that the farther away these supernovae explode, based upon their redshift, the longer the event lasts from our perspective. All such observations occur in other galaxies, since only a few supernovae per millennium are thought to occur in our own galaxy. These observations are consistent with the expansion of space, whereby wavelengths twice as long would last twice as long, since there is nearly the same number of wavelengths per event. This is the reason why most astronomers believe that tired light has been disproved. But this is not the end of the tired light story concerning logic.
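The supernova argument reduces to a single stretch factor: in expanding-space cosmology, an event's observed duration is its rest-frame duration multiplied by (1 + z). A minimal sketch, with illustrative numbers:

```python
def observed_duration(rest_frame_days: float, z: float) -> float:
    """In expanding-space cosmology, an event lasting t days in its
    rest frame is observed stretched to t * (1 + z) days, because
    the wave train is stretched by the same factor as its wavelengths."""
    return rest_frame_days * (1.0 + z)

# A Type Ia light curve with an illustrative ~20-day rest-frame width:
print(observed_duration(20.0, 0.0))  # 20.0 days for a nearby event
print(observed_duration(20.0, 0.5))  # 30.0 days at z = 0.5
print(observed_duration(20.0, 1.0))  # 40.0 days at z = 1.0
```

Any redshift mechanism that does not also stretch durations by this same (1 + z) factor is the kind of model the supernova observations rule out.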
There are other versions of tired light theory, however, that can accommodate time dilation. One such hypothesis involves the interaction of light with the aether as it travels. The surrounding aether would accordingly absorb some of the EM radiation's energy while stretching it out, because of some resistance to the flow of EM radiation. This would explain what is being observed concerning both redshifts and time dilation, but it would get little consideration from astronomers if the word aether were used. Instead one might use the term background field, which could mean either a physical or an energy omnipresent background field in all of space which could carry and dilate EM radiation.
Upon research one might find still other tired light versions which can also explain time dilation. For any aged-light theory, time dilation must be logically explained before any astronomer or student will read further, since all have been familiarized with it through their education. But the theory of aging light presented by our amateur astronomer does not require radical additions to existing physical laws. He suggested that in intergalactic space there is a kind of particle that, interacting with light, takes away part of the light's energy, and that in the vicinity of massive objects the number of these particles is greater than elsewhere.
Using this idea, our amateur explained, as said above, the different redshift values for regions A and B as follows: light passing through the foreground galaxies meets a larger number of these particles and therefore loses more energy than light that does not pass through a region of foreground galaxies. Thus a larger redshift will be observed in the spectrum of light crossing obstacles (areas of foreground galaxies), and this leads to different values for the Hubble constant. The amateur astronomer also referred to additional evidence for his theories, obtained from redshift measurements at short distances.
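The region A/B claim amounts to fitting two different Hubble constants to two samples of galaxies. The sketch below shows the arithmetic with made-up data; the samples and the low-redshift v = cz reading are illustrative assumptions of ours, not the amateur's measurements.

```python
def fit_hubble_constant(distances_mpc, redshifts) -> float:
    """Least-squares slope of v = H0 * d through the origin,
    with v = c * z (low-redshift approximation). Returns H0 in km/s/Mpc."""
    c = 299792.458  # speed of light, km/s
    num = sum(d * c * z for d, z in zip(distances_mpc, redshifts))
    den = sum(d * d for d in distances_mpc)
    return num / den

# Illustrative samples: region A redshifts carry an assumed extra
# foreground contribution, region B redshifts do not.
region_a = fit_hubble_constant([100.0, 200.0, 300.0], [0.0250, 0.0500, 0.0750])
region_b = fit_hubble_constant([100.0, 200.0, 300.0], [0.0233, 0.0467, 0.0700])
print(round(region_a, 1), round(region_b, 1))
```

With these numbers the two regions yield noticeably different constants (about 75 versus 70 km/s/Mpc), which is the shape of the discrepancy described above.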
We will enumerate a few similar versions that cannot be disproved by time dilation. In the first version, light interacts with a space medium similar to the old idea of a luminiferous aether. The word aether is no longer popular, so new versions of it have been called the zero-point field, quantum foam, gravitons, dark matter, dark energy, the Higgs particle and field, and many other theorized entities. Some of these theorized entities are also considered real according to the standard model of cosmology. All could interact with light and stretch out its wavelengths, thereby redshifting it. Accordingly, the farther light travels through this medium, the more it would interact with it and the more its wavelengths would be redshifted. As light becomes older with distance, it accordingly could be called "tired": a tired light hypothesis.
In short, our amateur astronomer explained the redshifts in terms of a non-expanding Universe, in which the behavior of light is different from the idea accepted by most scientists. The amateur astronomer believes that his model of the Universe provides more accurate, realistic astronomical data than the standard model of the expanding Universe, which cannot explain the large difference in the values obtained when calculating the Hubble constant. According to our amateur, such non-velocity redshifts could be a global feature of the Universe. The Universe may well be static, and in that case the need for a Big Bang theory simply disappears.
The next hypothesis is known to be valid but is not thought to be the cause of the redshifting of galactic light. It is known as gravitational redshifting. Gravity is known to bend the path of light, an effect called gravitational lensing. If we measure the spectrum of light emanating from a star located near the disk of our Sun, the redshift in it will be greater than in the case of a star located in a remote region of the sky. Such measurements can only be made during a total solar eclipse, when stars close to the solar disk become visible in the dark. It is necessary to take into account that we are dealing here with a gravitational redshift, which manifests itself when light passes near the Sun. Gravitational redshifts also occur along a straight line of sight, such as from us on Earth toward the Sun's center. For instance, light from the center of the solar disk is slightly more redshifted than the Sun's light away from its center, in proportion to the distance from the center.
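The gravitational redshift described here has a standard weak-field formula, z ≈ GM/(rc²). Applied to sunlight, it gives a shift of roughly two parts in a million, which is why it is measurable for the Sun yet, as noted below, far too small to account for galactic redshifts. A minimal sketch:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8    # speed of light, m/s

def grav_redshift(mass_kg: float, radius_m: float) -> float:
    """Weak-field gravitational redshift for light climbing out
    from radius r of a mass M: z ~= G * M / (r * c^2)."""
    return G * mass_kg / (radius_m * C * C)

# Sunlight leaving the solar surface (solar mass and radius):
z_sun = grav_redshift(1.989e30, 6.957e8)
print(f"{z_sun:.2e}")  # on the order of 2e-06
```

By contrast, galactic redshifts range up to several whole units, about a million times larger than this solar value.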
The gravitational redshift considered here cannot have such large values for light coming from deep space, since the gravitational effect at a distance is negligible. The resistance of light to gravitational influence could not by itself explain galactic redshifting, even though the farther light travels through the universe, the more gravitational resistance it would encounter. However, the bending of light during its travels might also redshift it by stretching it. These possibilities are usually not considered tired light hypotheses, but the similarity is that older, and therefore longer-travelling, light would or could be gravitationally redshifted. There are other versions of old-light redshifting that are lesser known and therefore not mentioned here.
The point of these old-light hypotheses is that there are other possible, logical explanations for galactic redshifts besides expanding space. The question being: is the principle of expanding space logical? Readers should realize that the entire Big Bang (BB) model is supported by the premise of expanding space. If another explanation for galactic redshift is valid, the entire BB theory falls, including its formula for distance calculations, since that too was formulated from this premise. And what causes space to expand? Research that question on the Internet. You will find the most common answer is a very poor one: "Space can expand due to dark energy and can contract due to dark matter." What does it mean that space can expand? We want to know how and why space expands.
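For reference, the mainstream distance formula mentioned above reduces, at low redshift, to the simple Hubble law d = cz/H0; at higher redshifts the full expanding-space model instead integrates over the expansion history. A minimal sketch, with H0 = 70 km/s/Mpc assumed purely for illustration:

```python
C_KM_S = 299792.458  # speed of light, km/s

def hubble_distance_mpc(z: float, h0: float = 70.0) -> float:
    """Low-redshift Hubble-law distance d = c * z / H0, in Mpc.
    Valid only for z << 1; the full model replaces this with an
    integral over the assumed expansion history."""
    return C_KM_S * z / h0

print(round(hubble_distance_mpc(0.01), 1))  # ~42.8 Mpc
print(round(hubble_distance_mpc(0.1), 1))   # ~428.3 Mpc
```

The point at issue in this book is that the whole chain of reasoning behind this formula rests on the expanding-space reading of redshift.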
And what are dark matter and dark energy anyway? You will see that no satisfying answers follow these research questions. The expansion of space, dark energy, and dark matter are all unknowns; it is not even certain that any of them exists at all. And these three elements are foundation pillars of the Big Bang model, now called the Lambda cold dark matter model. Still, roughly 99% of all astronomers and cosmology theorists believe in both dark matter and dark energy.
The fourth foundation pillar of the model is called Inflation, which seemingly is an untestable hypothesis. With all four of the foundation pillars of the theory still unknown hypotheses, what is the likelihood that the theory is correct and will remain standing after the James Webb Space Telescope goes up and tests some of its major propositions? One proposition that can then be tested is that galaxy groups and clusters at the farthest distances will contain only young galaxies. If instead galaxy groups or clusters look the same as local groups and clusters, the same as in the Hubble Deep Field photograph, the Big Bang model will likely begin to fall because of its present age limitation of 13.8 billion years. There are a number of other serious problems with the Big Bang model; the more well-known of these are discussed in the second narrative in the middle of the book.
One of the problems in cosmology to be discussed is the problem of quasars. Quasars are defined as very massive, extremely remote celestial objects, presently thought to be the Active Galactic Nuclei at the centers of galaxies. They emit exceptionally large amounts of EM radiation and typically have a star-like appearance in a telescope. Their brightness overpowers the other parts of the galaxy, so no separate redshift for the galaxy can be observed, even if it differed from the quasar's. Present theory holds that quasars contain massive black holes and may represent a stage in the evolution of some galaxies.
Quasars would seem to be an even greater problem for steady-state cosmologies, which will be discussed later. The Big Bang model asserts that the universe has evolved, and that quasars and large radio galaxies are good examples of this: both appear to exist preferentially at similar distances, so their existence and distribution are believed to fit the evolution of the universe according to the Big Bang model. But for now we will continue to discuss the characteristics of quasars.
One of the most striking features of quasars is that their redshifts are very high compared to those of galaxies in our vicinity. Quasars centrally cluster at redshifts around , with a somewhat normal range between about  and , and a seemingly normal fall-off thereafter to a present maximum redshift of about , beyond which little or none can be found. This limit could be because of the relative focus problem to be explained. Quasars are believed to be produced by the galactic black-hole centers of some of the largest elliptical galaxies, usually in clusters, from which a pair of oppositely radiating polar jets emanates. These jets are almost laser-like in that their directional focus is very narrow. The very few of these galactic jets that are closely focused in our direction we observe as quasars, by definition, because of their tremendous brightness relative to other galaxies at the same calculated distances. According to most theorists, mainstream or otherwise, all quasars are believed to come from the centers of what are now commonly called active galactic nuclei (AGNs).
While quasars in our vicinity have an average redshift of about , some of the redshifts of the most distant quasars are close to . If we accept, like most astronomers, that redshifts are the indicators of quasar distances, then quasars would be some of the most distant objects in the observable Universe. And if these redshift-calculated distances are correct, then these quasars are emitting millions of times more energy than galaxies of similar type, size, and distance. Taking into account the mainstream distance formula, called the Hubble formula, galaxies with a redshift of more than  should accordingly be moving away from us at the speed of light, and quasars with a redshift of  should be moving away from us at  times the speed of light. The BB model explains this by saying that the light we are now observing left such a quasar when it was close enough for that light to reach us, whereas light emitted now by an equally distant quasar could never reach us, because according to the mainstream model space at such distances expands away from us at four times the speed of light.
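The "faster than light" statement comes from reading v = cz literally. The sketch below contrasts that naive reading with the special-relativistic Doppler formula, which never yields superluminal speeds; the redshift values used are illustrative, since the specific figures are not given above, and neither formula is the full expanding-space treatment, which works in comoving coordinates.

```python
def naive_recession_speed(z: float) -> float:
    """Naive linear reading of redshift, v = c * z, expressed in units
    of c. This is the arithmetic behind claims of superluminal
    recession for high-redshift objects."""
    return z  # v / c

def sr_doppler_speed(z: float) -> float:
    """Special-relativistic Doppler reading:
    v/c = ((1+z)^2 - 1) / ((1+z)^2 + 1), which is always below 1."""
    q = (1.0 + z) ** 2
    return (q - 1.0) / (q + 1.0)

for z in (1.0, 4.0):  # illustrative redshift values
    print(z, naive_recession_speed(z), round(sr_doppler_speed(z), 3))
```

At z = 4 the naive reading gives four times light speed while the relativistic reading gives about 0.92c, which is why the two interpretations lead to such different pictures.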
Does it turn out that we now have to scold Albert Einstein? Or are the initial conditions of the problem wrong, the redshift being the mathematical equivalent of processes of which we have little idea? Mathematics is not mistaken, but it does not give an actual understanding of the processes that are taking place. For example, mathematicians have long worked out the mathematics of additional dimensions of space, while modern science cannot find them.
If quasar redshifts are accepted as caused by the ordinary expansion of space, the distances indicated are very great, but additional analysis has shown that the surrounding energy emission and energy densities of quasars are inexplicable at such distances. On the other hand, if the distances calculated from their redshifts are wrong, there are no mainstream-accepted hypotheses about the mechanism by which quasar redshifts might be produced. But there are other relatively simple non-mainstream hypotheses to explain them. The astronomer Halton Arp was most famous for such a proposal, which was a major aspect and promotion of his fruitful, distinguished, but controversial career. Several other prominent astronomers and theorists concurred with his findings, based upon the observations and reasoning he presented. From his telescopic observations, and those of others, he proposed that most, or nearly all, quasars are much closer than their redshifts might indicate. But he and his few followers, some well known, were generally dismissed by the mainstream. This is because one of the foundation pillars of modern astronomy and cosmology is the Hubble formula and the belief that it correctly calculates galactic distances from observed redshifts. Arp's observations, and similar observations by other astronomers, were dismissed as optical illusions or coincidences, or were given other seemingly possible explanations. Still, his few remaining proponents make the same assertions concerning the anomalous distances of quasars.
Halton Arp suggested that most or all quasars have an intrinsic redshift. This would mean that something is happening inside or immediately surrounding the galaxy which creates the extent of the observed redshift from our perspective. But what mechanism might that be? There have been several proposals, but perhaps the simplest logical mechanism would be the gravitational redshifting discussed in the paragraphs above. The theory in the second narrative makes such a proposal, not discussed elsewhere. It asserts that because the elliptical galaxies producing quasars, usually in the center of a cluster, are often the largest galaxies of the cluster, their gravity would be very strong. Their galactic core and central galactic black hole would likely have a very strong gravitational influence on the light they produce. A black hole will prevent any light from escaping inside its event horizon, but if the galactic black hole is strong enough to redshift its surroundings outside its event horizon, this redshifting effect is readily observable, just as it is in our Sun's light, described above. But how much would these quasar-producing Active Galactic Nuclei (AGN) redshift the EM radiation they produce? In the theory of the second narrative this quantity is easily calculated. Looking at a distribution chart of these quasars with respect to their redshifts, a histogram, it can readily be seen that it is very similar to a normal curve, a distribution well known in statistics.
Although all of these AGN galaxies are many times greater in size than the Milky Way, some are still much larger and more condensed than others. In the same way, some of them could be only slightly gravitationally redshifted while others are gravitationally redshifted to the maximum. If the assumption is made that quasars are instead distributed evenly, like other galaxies, then one can calculate the maximum proposed gravitational redshift that the AGNs are producing. This maximum redshift is calculated to be about , progressively decreasing thereafter down to a slight gravitational redshift close to zero; the range of this redshifting calculates to about  (in redshift). If we add on top of this a normal distribution of these quasar galaxies, based upon regularly distributed distances of the volumes they occupy, we come very close to the observed distribution of quasars; their distances would then be distributed normally, like those of all other galaxies. For the furthest outliers beyond a redshift of 3, there are additional reasons why this most distant small group of outlying quasar distances could be under-calculated, as explained in both the second and third narratives. Also, many now believe that the active galactic nuclei that produce quasars are the same ones that produce high-energy radio galaxies, but we cannot see the quasars inside most of them if their jets are not facing us directly.
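The combination of an intrinsic redshift with a distance redshift, as proposed here, is multiplicative: (1 + z_obs) = (1 + z_intrinsic)(1 + z_distance). The toy population below illustrates how such a combination can produce a peaked, roughly normal-looking histogram; every parameter is made up for illustration and none is fitted to real quasar catalogues.

```python
import random

random.seed(42)

def combined_redshift(z_intrinsic: float, z_distance: float) -> float:
    """Redshifts compose multiplicatively:
    (1 + z_obs) = (1 + z_intrinsic) * (1 + z_distance)."""
    return (1.0 + z_intrinsic) * (1.0 + z_distance) - 1.0

# Toy population: intrinsic (e.g. gravitational) redshifts spread
# normally around an assumed mean, plus small distance redshifts.
sample = [combined_redshift(max(0.0, random.gauss(1.5, 0.5)),
                            random.uniform(0.0, 0.3))
          for _ in range(10000)]

# Crude histogram of the combined redshifts, one bin per unit of z:
bins = [0] * 8
for z in sample:
    bins[min(int(z), 7)] += 1
print(bins)
```

The resulting counts peak in one central bin and fall off on both sides, which is the qualitative shape the argument above appeals to.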
There are primarily two major reasons why mainstream astronomers do not want to consider the possibility that quasars are intrinsically redshifted. The first is that it would complicate the picture of the universe, in that galactic redshifts would no longer necessarily be the sole indicator of distance; if not, astronomers would have little else to go by concerning these objects' distances. However, as explained in the third narrative of this book, if the energy density surrounding a quasar can be determined, in at least one or more cases, then its distance can still be calculated using the unique proposal presented there. Secondly, such an alternative proposal, if established, would be an indicator that the Big Bang model is wrong, since the discovery of quasars and radio galaxies, interpreted through Big Bang expansion, has been taken as evidence that the universe was different in the past, as the Big Bang model proposes. Taking these alternative proposals into consideration, there would be less supporting evidence for the Big Bang model in general.
Thus, both of the alternatives available within conventional astronomical theory face serious difficulties. If the redshift is assumed to be the usual Doppler effect of recession, together with spatial expansion, then the distances indicated are so huge that other properties of quasars, especially their energy radiation, are inexplicable. If the redshift is not connected, or not completely connected, with the speed of movement, we do have, perhaps, in the third narrative a reliable phase-transition hypothesis about the mechanism by which it is produced.
Indeed, in the third narrative we proceed with the application of our scheme to the Big Bang phenomenon. We discuss the possibility of expanding space given a phase transition sequence of high-energy cells. Any effect of this transition upon the cells in the surrounding area is then measured a posteriori. Thus, we can arrange conceivable experiments with a tiny piece of matter, using Newton's gravitational potential, taking gravity as the function responsible for the effect of high-energy cells on a piece of surrounding space. In this context, the high-energy cells would refer to the likelihood of a phase transition emerging from oblivion as a phenomenon of matter. In the same manner it will be possible to determine whether the transition would have positive or negative effects on the sequence in progress. However, this would necessitate changing somehow the phase transition (inclusion/exclusion) procedure and establishing how the change incurred would be evaluated.
Here it should be emphasized that this is precisely our proposal for using the apparatus of combinatorial mathematics where it has not yet been used, in topology. In fact, as our analysis of the distances to galaxies shows, in the third narrative we use the so-called apparatus of Monotone Systems, borrowed from game theory and data analysis. Although the idea of implementing Monotone Systems in such diverse research fields as cosmology may seem unexpected, the use of stable/steady lists or topologies of Monotone System credentials (in this particular case, Newtonian potential functions) provides a unifying perspective for conceivable experiments in calculating distances to galaxies. This is particularly beneficial when employing monotone mappings producing so-called fixed points (the G-equation), which preserve the stability or equilibrium of lists/topologies of credentials despite the credentials' dynamic nature.
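The third narrative presents the apparatus in full; for the reader unfamiliar with it, here is only a minimal, self-contained sketch of the classical Monotone Systems "kernel" search by greedy elimination. The weight matrix is entirely hypothetical and stands in for the credential functions; this is our illustration, not the book's actual construction.

```python
# Illustrative sketch of the Monotone Systems "kernel" search (greedy
# elimination).  The credential of an element is the sum of its links to
# the rest of the current subset -- a monotone function: removing an
# element never raises the credentials of those that remain.

def kernel(weights):
    """Return the subset maximizing the minimum credential (the kernel)."""
    n = len(weights)
    subset = set(range(n))
    # credential of each element within the current subset
    cred = {i: sum(weights[i][j] for j in subset if j != i) for i in subset}
    best_level, best_subset = -float("inf"), set(subset)
    while subset:
        worst = min(subset, key=lambda i: cred[i])
        if cred[worst] > best_level:          # a new, higher stability level
            best_level, best_subset = cred[worst], set(subset)
        subset.remove(worst)
        for i in subset:                      # monotone update after removal
            cred[i] -= weights[i][worst]
    return best_subset, best_level

# A toy 4-element system: elements 0-2 are tightly linked, element 3 is weak.
W = [[0, 5, 4, 1],
     [5, 0, 6, 1],
     [4, 6, 0, 1],
     [1, 1, 1, 0]]
print(kernel(W))   # → ({0, 1, 2}, 9)
```

The greedy pass visits every "stability level" exactly once, which is why the procedure finds the kernel without enumerating all subsets.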
The Newtonian gravitational potential is just one example of a credential that represents high-energy cells with the inverse monotone property. When a hole/bubble under the action of the "stream" of new matter expands/inflates, the gravitational potential outside the bubble increases, because the bubble's total mass increases at a higher rate, even though the energy density of matter at each point inside the bubble decreases. We can thus once again construct our fixed-point G-equation, finding the roots of the equation as stable points. This is particularly relevant for the so-called inflation stage of the Big Bang. The resulting equation might be parameterized by what is known in astrophysics as the relativistic energy density. The density of energy, rather than time, might thus be the appropriate candidate for a scale of time-line-like events. Such a scale could be employed to perform our fixed-point "experiments" on the Universe via geometrical modeling. A solution exists even when the radius of the topology equals zero, the point on our high-energy-cells scale at which the density of energy is infinite. This parameter provides the opportunity to investigate the topology of the monotone systems apparatus of the Universe while the density decreases on its energy density scale from very high/extreme values to lower ones.
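The qualitative claim above, that the outside potential grows as the bubble inflates while keeping a constant ordinary density, is easy to check numerically. The sketch below is ours, with arbitrary illustrative values, and is not the book's G-equation.

```python
# Numerical illustration of the monotone property described above: a
# "bubble" of radius R filled at a constant ordinary density rho0 has mass
# M(R) ~ R^3, so the Newtonian potential magnitude G*M/r at a fixed point
# outside the bubble grows as the bubble inflates.  All values hypothetical.
import math

G = 6.674e-11          # gravitational constant, SI units
rho0 = 1.0e-26         # assumed constant matter density, kg/m^3
r_obs = 1.0e22         # fixed observation point outside the bubble, m

def potential(R):
    """Newtonian potential magnitude at r_obs due to a uniform bubble of radius R."""
    M = rho0 * 4.0 / 3.0 * math.pi * R**3
    return G * M / r_obs

radii = [1e20, 2e20, 4e20]
values = [potential(R) for R in radii]
# the potential grows monotonically with the bubble radius
assert values[0] < values[1] < values[2]
```

Doubling the radius multiplies the mass, and hence the outside potential, by eight, which is precisely the "inverse monotone" growth the text appeals to.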
In conclusion, while making the connection between our Cosmological Speculation and the implementation of the monotone apparatus in the third narrative, it is important to note that the architecture of the apparatus is always nested. Indeed, as the high-energy cells grow or shrink, the fixed points shrink in a way akin to a nested structure of subsets in the set-theoretic sense. The solutions of our mathematical G-equation lead to exactly similar nested phenomena of topology when applying the General Relativity theory, given by a metrical quadratic form as a measuring rod upon the 3-dimensional Euclidean space lying on the 4-dimensional hyper-globe surface.
Paying attention to the front cover of our book, our thoughtful reader will quite possibly understand what we mean when we talk about the nested structure of the monotone system. Indeed, on the front cover one book is embedded in another, that one in turn is embedded in a third, and so on.
We must emphasize here that the nested structure established by the third narrative's monotone apparatus, as roots of our fixed-point equation, coincides with the Planck Mission measurements of the composition of the Universe with incredible precision. It predicts, almost exactly, the proportions of dark matter, visible matter, and dark energy in the Universe. Thus, given that the equation must be calibrated a priori using some parameters, the question is what kind of phenomenon was created first on the energy density scale: the dark or the visible matter? Our mathematical speculation suggests that the "dark matter", or whatever is hiding behind this mathematical phenomenon, was allegedly created first. Contrary to the presently assumed, almost infinite density, it had the density of a soup in the so-called initial inflation phase of the Big Bang.
It is also thought that the Universe, according to the inflation pillar of the Standard Model, was born in the first fraction of a second of this process. Still, whether the Big Bang ever took place, or whether the density merely decreased to some level, is irrelevant to our discussion, as our results indicate that the "visible matter" emerged "later" on the density scale, accompanying the "dark matter." This was the best interpretation we could make from the nested structure of the Monotone System with high-energy cells in the form of Newtonian potential functions. Although it is pure speculation, the calculations that lead to such a conclusion might be interesting to follow. That was the reason for introducing the Monotone Phenomena of the Universe.
Astronomers do not observe the entire universe, but what they do observe has been quite enough to propose reasonable conclusions leading to another perspective on the standard Big Bang model. Addressing the thoughtful reader, whom we have taken for granted as such, it should be noted that our narratives in fact propose two opposing, but in many respects similar, points of view on the dynamics of matter in the universe.
So, in the second narration a theoretical model of the universe based on the aether was proposed. Whether the aether was, as we know, believed non-existent after the Michelson–Morley experiment is not so important. More importantly, a methodology was proposed for the creation of matter in the Universe from strings of Pan Particles comprising the aether. Again, it does not matter what the particles are. The number of particles doubles over a long, indeed gigantic, period of time. As these particles very slowly shrink in size, they proportionally increase in number, maintaining the particle density relative to the space they encompass. In the same way, and for the same reason, matter very slowly shrinks in its dimensions while proportionally increasing in number. This approach allowed us to calculate the distances to galaxies on the basis of Euclidean geometry, using a very simple formula. In short, in the second narrative it was postulated that the average density of the formed matter is constant, which is allegedly confirmed by observations of the cosmic neighborhood not so remote from us in the cosmological sense. The formula from the second narrative for calculating distances, obtained on the basis of the pan postulates, was compared in the third narration with the already known methods as well as with the new method of calculation without the Hubble constant.
On the other hand, it should be clear that an incredibly slow reduction in the size of matter at any point in the universe at any time could equally be attributed to the expansion of space rather than to a reduction in the size of matter, one being a relative perspective of the other. Figuratively speaking, it all somehow recalls Jonathan Swift's Gulliver's Travels, where Gulliver sails to the land of the Lilliputians, tiny humans compared to Gulliver. In the same way, in astronomy and astrophysics, when we observe the most distant galaxies, we, being the Lilliputians, are looking backwards in time into the land of the Giants. At the same time, neither the present-day Lilliputians nor the Giants in the past could notice changes in their size, because they all use the same rods, meters, kilograms, etc., measuring sticks which change in size in direct proportion to the passage of time. From a mathematical point of view, the situation can be represented in the form of an expanding metric of space, which is filled either from a controversial aether or pan-particle substance, or with new mass according to the postulate of a phase transition from dark energy; of both we have no direct knowledge of their existence or essence, aside from theory or hypothesis.
Discussing a possible approach to analyzing the creation of matter in the third narration, our thoughtful reader will perhaps notice an apparent contradiction, namely: in the third narration the density of the so-called relativistic mass of matter decreases, while in the second narration the density of matter is presumably fixed. It seems that there is a fundamental difference in the approaches to the formation of matter. However, there is no contradiction. As before, the model in the third narrative considers the static universe in the same way as the second. For example, if we weigh 1 kg of metal in the form of a casting that has just come out of a blast furnace, the same kilogram will still weigh exactly 1 kg, whether the metal has cooled or not. However, from the point of view of relativistic energy density (relativistic substance density), the same cooled kilogram of metal will weigh less than the relativistic kilogram that was previously in the blast furnace. Consequently, from the point of view of the relativistic density of matter, it is quite reasonable to assume that the birth of matter in the form of particles, atoms, or pan particles occurred at a very high relativistic density of matter.
The density of the relativistic mass decreases, leaving the usual density of matter, the one observed by astronomers, constant, as described in both the second and the third narration. While the universe develops on the basis of the postulate of the emergence of matter, it was assumed that matter arises as a result of a phase transition of dark energy, first into dark matter and then, accompanying it, into visible matter. It was therefore irrelevant what we call the dark energy, renaming it aether, or vice versa, changing the aether into dark energy. It should be clear to everyone that this renaming changes neither the essence of the phase transition phenomenon nor the phenomenon of pan-particle creation. Among other things, unlike the well-known geometric models where all events usually occur, it should be noted that in the third narrative the phase transition of dark energy into matter occurs inside a three-dimensional Euclidean surface, a stereographic projection of a four-dimensional globe.
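The stereographic projection mentioned here can be written down explicitly. The sketch below is our illustration only: it maps a point of the unit 3-sphere (the "four-dimensional globe") into 3-dimensional Euclidean space by projecting from the pole (0, 0, 0, 1); the choice of pole and names are ours.

```python
# Stereographic projection of the unit 3-sphere into R^3, projecting from
# the north pole (0, 0, 0, 1).  A point with x4 = 1 has no image; every
# other point of the sphere lands at a unique point of Euclidean 3-space.
import math

def stereographic(x1, x2, x3, x4):
    """Project a point of the unit 3-sphere (x4 != 1) into R^3."""
    s = 1.0 / (1.0 - x4)
    return (s * x1, s * x2, s * x3)

# The "south pole" (0, 0, 0, -1) lands at the origin of the Euclidean space:
print(stereographic(0.0, 0.0, 0.0, -1.0))   # → (0.0, 0.0, 0.0)

# A point on the equator (x4 = 0) is mapped onto the unit sphere of R^3:
p = stereographic(1.0, 0.0, 0.0, 0.0)
assert math.isclose(sum(c * c for c in p), 1.0)
```

Points near the excluded pole are sent arbitrarily far away, which is why the image fills the whole of Euclidean 3-space.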
The more difficult question is what causes the redshift. Here, as in the first and second cases, we have no clear answer. The issue of redshift is allegedly associated with the expansion of the universe.
The expanding Universe model potentially has two flaws. First, the brightness of celestial objects can depend on many factors, not only on their distance; that is, the distances calculated from the apparent brightness of galaxies may be invalid. Second, it is quite possible that the redshift is completely unrelated to the speed of the galaxies.
Hubble continued his research and arrived at a certain model of the expanding universe, which resulted in Hubble's law. To explain it, we first recall that, according to the Big Bang model, the further a galaxy is from the epicenter of the explosion, the faster it moves.
According to Hubble's law, the recession velocity of a galaxy should be equal to its distance from the epicenter of the explosion multiplied by a number called the Hubble constant. Using this law, astronomers calculate the distances to galaxies based on the magnitude of the redshift, the origin of which is not completely clear to anyone. In general, it was decided that the Universe could be measured very simply: find the redshift, divide by the Hubble constant, and you get the distance to any galaxy. In the same way, with the help of the Hubble constant, modern astronomers calculate the size of the universe. The reciprocal of the Hubble constant has the meaning of the characteristic expansion time of the Universe at the current moment. This is where the whole story stems from. Hubble himself never believed in an expanding universe, although one can find a great many websites claiming that he did.
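The "very simple" recipe just described can be put in a few lines of code. For small redshifts the recession velocity is v = c·z and the distance is d = v / H0; the redshift below is a hypothetical example, and H0 is taken in the customary units of km/s per megaparsec.

```python
# Hubble's law as a distance recipe: d = c * z / H0, valid only for small z.
c = 299_792.458        # speed of light, km/s

def hubble_distance(z, H0=70.0):
    """Distance in megaparsecs from redshift z (small-z approximation)."""
    v = c * z          # recession velocity, km/s
    return v / H0      # distance, Mpc

# A hypothetical galaxy with redshift z = 0.01:
print(round(hubble_distance(0.01), 1))   # → 42.8 (Mpc)
```

Note that the entire result hangs on the single number H0; this is exactly the dependence the alternative method of the third narrative is meant to avoid.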
Instead, this is what he believed: "On these, and other grounds, he (Edwin Hubble) was inclined, thus, to reject the Doppler-interpretation (galaxies moving away from each other and an expanding universe) of the redshifts and to regard the nebulae (galaxies) as stationary," holding that an undiscovered reason was probably responsible for the observed redshift (https://en.wikiquote.org/wiki/Edwin_Hubble, last visited 4/27/2019). It was only much later that the expansion of space was proposed to explain redshifts. This made sense to most theorists: if space could warp, according to Einstein, then it seemingly could expand or contract. A consensus explanation as to why space should expand has not emerged since the expansion-of-space hypothesis was proposed roughly 50 years ago.
For example, in 1929 the Hubble constant was equal to 500. In 1931 it was 550. In 1936 it was 520 or 526. In 1950 it was 260, i.e., it had dropped significantly. In 1956 it fell even more, to 176 or 180. In 1958 it further decreased to 75, and in 1968 it jumped to 98. In 1972 its value ranged from 50 to 130. Today the Hubble constant is considered to be between 67.15 and 72.4, depending on the method used for its calculation. All these changes allowed one astronomer to joke that it would be better to call the Hubble constant the Hubble variable, a view now considered likely by many. In other words, it is believed that the Hubble constant changes over time, but the term "constant" is justified by the fact that at any given moment the Hubble constant would be the same at all points in the Universe. Of course, all these changes over the decades can be explained by the fact that scientists improved their methods and the quality of their calculations. But the question arises: what kind of calculations? We repeat once again that no one can actually verify these calculations, because a tape measure (even a laser one) that could reach a neighboring galaxy has not yet been invented. Moreover, even with regard to distances between galaxies, all this seems incomprehensible to many considering the rationale: if the Universe expands evenly, according to the law of proportionality, then for what reason would so many scientists get so many different values of distances based on the same proportions of the speeds of this expansion? It turns out that these proportions of expansion as such do not exist either.
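The historical drift of the "constant" quoted above translates directly into a drift of distances, since by Hubble's law the distance for a given redshift is inversely proportional to H0. The short sketch below uses a few of the H0 values cited in the text (in km/s per Mpc) and one hypothetical redshift.

```python
# How the same redshift yields very different distances under the
# historical values of the Hubble "constant" quoted in the text.
c = 299_792.458                       # speed of light, km/s
H0_by_year = {1929: 500.0, 1950: 260.0, 1958: 75.0, 2019: 72.4}

z = 0.01                              # one and the same hypothetical redshift
distances = {year: c * z / H0 for year, H0 in H0_by_year.items()}
for year, d in distances.items():
    print(year, round(d, 2), "Mpc")
```

With the 1929 value the galaxy comes out almost seven times closer than with today's value, which is the point of the "Hubble variable" joke.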
The general theory of relativity, accessible to the understanding of only a limited number of specialists and based on the fruitful idea of the geometry of space, found itself in difficulty trying to explain the allegedly observed expansion of space, including its acceleration attributed to hypothetical dark energy. The reader could get the impression that, in our speculative attempts relating to cosmology, the question of space and time is further mystified by the introduction of fictitious "dark matter" and "dark energy." Indeed, in our cosmological reasoning we tried to return the discussion of the problem to the level of modern physics puzzles. However, this was not our ultimate goal. The goal was, as already mentioned, to illustrate the possibilities of combinatorial mathematics by using the apparatus of differential geometry in the form of the concept of monotone systems. After all, it was much more interesting to introduce the reader to the issues by way of an example. To some extent, this tactic resembles our pedagogical approach.
In conclusion, it is appropriate to make some remarks on the application of the theory of Monotone Systems (better called the Monotone Phenomena) to geometry, as described in the third narration. In principle, there is no need for this purpose to deal with the problems of modern physics, especially since nothing in modern physics is more confusing than the asserted meanings of time and space. Indeed, the examples with which we illustrated the material in the physical sense, such as the "Potential energy of the avalanche", "Super-cooled water", "The tale of the creation of the Matter", etc., are quite random. It may therefore seem that our model of stereographic projection has nothing to do with the material presented in the book. Nevertheless, it was here that the side of the Monotone Phenomena invisible at first sight was hiding.
Monotone phenomena make it possible to perform certain mappings of sub-manifolds of Euclidean space, which in turn makes it possible to introduce the concept of a fixed point of the indicated mappings. In a certain respect we act here within the framework of Brouwer's theorem, similar to the fixed point or steady state in topology. In the general case, there are no specific procedures for calculating Brouwer fixed points. However, the positive issue lies precisely in the possibility of calculating the fixed points of these mappings of Monotone Phenomena, and this is where the advantage of our approach comes out. The methods of differential geometry proved useful because they open the prospect of calculating the credential functions, on whose basis the equations of equilibrium are compiled. Regardless of whether the equilibrium solution has a physical meaning, it is already sufficient that we are dealing here with a novel approach revealing the phenomena of things as a consequence of the monotonicity property.
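The contrast drawn here, existence of a fixed point versus the ability to compute it, can be illustrated by a textbook case unrelated to the book's credential functions: Brouwer's theorem only asserts that a continuous self-map of [0, 1] has a fixed point, but when the map is contractive the fixed point is found by simple iteration.

```python
# Computing a fixed point by iteration: the map x -> cos(x) on [0, 1] is a
# contraction near its fixed point, so repeated application converges.
import math

def fixed_point(g, x0, tol=1e-12, max_iter=10_000):
    """Iterate x = g(x) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

x_star = fixed_point(math.cos, 0.5)
print(round(x_star, 6))   # → 0.739085
```

For a general continuous map, no such procedure is guaranteed to work; the claim of the text is that the monotonicity of the credential functions restores computability.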
We hope that the Monotone Phenomena scheme will be subject to more extensive research, as this will contribute to theoretical understanding as well as assist in developing more effective algorithms aimed at finding the best solutions. The most promising avenue to pursue going forward, in our view, is the approach of steady states, or stable sets, demonstrated in the third narrative presented here. In order to uncover some important phenomena hiding in plain sight, we have offered various perspectives on different subjects, in atomic or continuous form. Our motive was to collate some evidence that demonstrates the opportunities for those enthusiasts who wish to open their minds and devote their time to the promotion and advancement of science.