

Sex toys information


A sex toy is an object or device that is primarily used to facilitate human sexual pleasure. The most popular sex toys are designed to resemble human genitals and may be vibrating or non-vibrating. The term can also include BDSM apparatus and sex furniture such as slings; however, it is not applied to items such as birth control, pornography, or condoms.

Alternative expressions include adult toy and marital aid, although "marital aid" has a broader sense and is applied to drugs and herbs marketed to supposedly enhance or prolong sex.

The future according to the Big Bang theory

Before observations of dark energy, cosmologists considered two scenarios for the future of the Universe. If the mass density of the Universe were greater than the critical density, then the Universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state that was similar to that in which it started—a Big Crunch. Alternatively, if the density in the Universe were equal to or below the critical density, the expansion would slow down, but never stop. Star formation would cease as all the interstellar gas in each galaxy is consumed; stars would burn out leaving white dwarfs, neutron stars, and black holes. Very gradually, collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the Universe would asymptotically approach absolute zero—a Big Freeze. Moreover, if the proton were unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the Universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death.

Modern observations of accelerated expansion imply that more and more of the currently visible Universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the Universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, would remain together, and they too would be subject to heat death, as the Universe expands and cools. Other explanations of dark energy—so-called phantom energy theories—suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip.

Features, issues and problems - Big Bang

While scientists now prefer the Big Bang model over other cosmological models, the scientific community was once divided between supporters of the Big Bang and those of alternative cosmological models. Throughout the historical development of the subject, problems with the Big Bang theory were posed in the context of a scientific controversy regarding which model could best describe the cosmological observations. With the overwhelming consensus in the community today supporting the Big Bang model, many of these problems are remembered as being mainly of historical interest; the solutions to them have been obtained either through modifications to the theory or as the result of better observations.

The core ideas of the Big Bang—the expansion, the early hot state, the formation of helium, the formation of galaxies—are derived from many observations that are independent from any cosmological model; these include the abundance of light elements, the cosmic microwave background, large scale structure, and the Hubble diagram for Type Ia supernovae.

Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of the most active laboratory investigations. Remaining issues, such as the cuspy halo problem and the dwarf galaxy problem of cold dark matter, are not fatal to the dark matter explanation as solutions to such problems exist which involve only further refinements of the theory. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible.

On the other hand, inflation and baryogenesis remain somewhat more speculative features of current Big Bang models: they explain important features of the early universe, but could be replaced by alternative ideas without affecting the rest of the theory. Discovering the correct explanations for such phenomena remains one of the unsolved problems in physics.

Horizon problem

The horizon problem results from the premise that information cannot travel faster than light. In a Universe of finite age, this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the Universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature.

A resolution to this apparent inconsistency is offered by inflationary theory in which a homogeneous and isotropic scalar energy field dominates the Universe at some very early period (before baryogenesis). During inflation, the Universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable Universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation.

Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to cosmic scale. These fluctuations serve as the seeds of all current structure in the Universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been accurately confirmed by measurements of the CMB.

If inflation occurred, exponential expansion would push large regions of space well beyond our observable horizon.

Flatness/oldness problem

The overall geometry of the Universe is determined by whether the Omega cosmological parameter is less than, equal to, or greater than 1. Shown from top to bottom are a closed Universe with positive curvature, a hyperbolic Universe with negative curvature, and a flat Universe with zero curvature.

The flatness problem (also known as the oldness problem) is an observational problem associated with a Friedmann–Lemaître–Robertson–Walker metric. The Universe may have positive, negative or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density, positive if greater, and zero at the critical density, in which case space is said to be flat. The problem is that any small departure from the critical density grows with time, and yet the Universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10⁻⁴³ seconds, the fact that the Universe has reached neither a Heat Death nor a Big Crunch after billions of years requires some explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the Universe density must have been within one part in 10¹⁴ of its critical value, or it would not exist as it does today.

A resolution to this problem is offered by inflationary theory. During the inflationary period, spacetime expanded to such an extent that its curvature would have been smoothed out. Thus, it is theorized that inflation drove the Universe to a very nearly spatially flat state, with almost exactly the critical density.
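
To get a feel for how sharply this constrains the early density, the following rough Python sketch (not from the original text) assumes, purely for illustration, that the deviation from the critical density grows roughly in proportion to cosmic time. The numbers are order-of-magnitude only.

    # Rough illustration of the flatness problem (not a rigorous calculation).
    # Simplifying assumption: |Omega - 1| grows roughly in proportion to cosmic
    # time t, so a tiny early deviation becomes large later unless the Universe
    # starts extremely close to flat.

    t_nucleosynthesis = 3 * 60            # ~3 minutes, in seconds
    t_today = 13.7e9 * 3.156e7            # ~13.7 billion years, in seconds

    growth = t_today / t_nucleosynthesis  # crude linear growth factor
    deviation_then = 1e-14                # the part-in-10^14 figure quoted above
    deviation_now = deviation_then * growth

    print(f"growth factor     ~ {growth:.1e}")         # ~2.4e15
    print(f"|Omega - 1| today ~ {deviation_now:.1f}")   # ~24
    # Even a part-in-10^14 deviation at nucleosynthesis would have grown to
    # order ten by today under this crude scaling, which is why the observed
    # near-flatness calls for an explanation such as inflation.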

Magnetic monopoles

The magnetic monopole objection was raised in the late 1970s. Grand unification theories predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early Universe, resulting in a density much higher than is consistent with observations, given that searches have never found any monopoles. This problem is also resolved by cosmic inflation, which removes all point defects from the observable Universe in the same way that it drives the geometry to flatness.

A resolution to the horizon, flatness, and magnetic monopole problems alternative to cosmic inflation is offered by the Weyl curvature hypothesis.

Baryon asymmetry

It is not yet understood why the Universe has more matter than antimatter. It is generally assumed that when the Universe was young and very hot, it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the Universe, including its most distant parts, is made almost entirely of matter. An unknown process called "baryogenesis" created the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-symmetry and CP-symmetry are violated and that the Universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effect is not strong enough to explain the present baryon asymmetry.

Globular cluster age

In the mid-1990s, observations of globular clusters appeared to be inconsistent with the Big Bang. Computer simulations that matched the observations of the stellar populations of globular clusters suggested that they were about 15 billion years old, which conflicted with the 13.7 billion year age of the Universe. This issue was generally resolved in the late 1990s when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters. There still remain some questions as to how accurately the ages of the clusters are measured, but it is clear that these objects are some of the oldest in the Universe.

Dark matter

A pie chart indicating the proportional composition of different energy-density components of the Universe, according to the best ΛCDM model fits – roughly 95% is in the exotic forms of dark matter and dark energy

During the 1970s and 1980s, various observations showed that there is not sufficient visible matter in the Universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the Universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the Universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the Universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter was initially controversial, it is now indicated by numerous observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters.

The evidence for dark matter comes from its gravitational influence on other matter, and no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway.

Dark energy

Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the Universe has been accelerating since the Universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the Universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy is indicated by several other lines of evidence. Measurements of the cosmic microwave background indicate that the Universe is very nearly spatially flat, and therefore according to general relativity the Universe must have almost exactly the critical density of mass/energy. But the mass density of the Universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since dark energy does not cluster in the usual way it is the best explanation for the "missing" energy density. Dark energy is also required by two geometrical measures of the overall curvature of the Universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure as a cosmic ruler.

Negative pressure is a property of vacuum energy, but the exact nature of dark energy remains one of the great mysteries of the Big Bang. Possible candidates include a cosmological constant and quintessence. Results from the WMAP team in 2008, which combined data from the CMB and other sources, indicate that the contributions to mass/energy density in the Universe today are approximately 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. The energy density in matter decreases with the expansion of the Universe, but the dark energy density remains constant (or nearly so) as the Universe expands. Therefore matter made up a larger fraction of the total energy of the Universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant.
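
As a rough illustration of this changing balance, here is a toy Python sketch (an illustrative calculation, not from the source) that scales the matter density as the inverse cube of the scale factor while holding the dark energy density fixed, starting from the WMAP fractions quoted above.

    # How the matter and dark-energy fractions evolve with the scale factor a
    # (a = 1 today), assuming matter density ~ a**-3 and a constant,
    # cosmological-constant-like dark energy density.

    omega_matter_today = 0.27      # dark matter + ordinary matter (~23% + 4.6%)
    omega_lambda_today = 0.73      # dark energy

    def fractions(a):
        rho_matter = omega_matter_today * a**-3   # dilutes as the Universe expands
        rho_lambda = omega_lambda_today           # stays (nearly) constant
        total = rho_matter + rho_lambda
        return rho_matter / total, rho_lambda / total

    for a in (0.1, 0.5, 1.0, 2.0, 10.0):
        m, de = fractions(a)
        print(f"a = {a:5.1f}: matter {m:6.1%}, dark energy {de:6.1%}")
    # At a = 0.1 (early times) matter makes up ~99.7% of the total; by a = 10
    # dark energy accounts for essentially all of it, as described above.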

In the ΛCDM, the best current model of the Big Bang, dark energy is explained by the presence of a cosmological constant in the general theory of relativity. However, the size of the constant that properly explains dark energy is surprisingly small relative to naive estimates based on ideas about quantum gravity. Distinguishing between the cosmological constant and other explanations of dark energy is an active area of current research.

Timeline of the Big Bang

Extrapolation of the expansion of the Universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This singularity signals the breakdown of general relativity. How closely we can extrapolate towards the singularity is debated—certainly no closer than the end of the Planck epoch. This singularity is sometimes called "the Big Bang", but the term can also refer to the early hot, dense phase itself, which can be considered the "birth" of our Universe. Based on measurements of the expansion using Type Ia supernovae, measurements of temperature fluctuations in the cosmic microwave background, and measurements of the correlation function of galaxies, the Universe has a calculated age of 13.75 ± 0.11 billion years. The agreement of these three independent measurements strongly supports the ΛCDM model that describes in detail the contents of the Universe.

The earliest phases of the Big Bang are subject to much speculation. In the most common models, the Universe was filled homogeneously and isotropically with an incredibly high energy density, huge temperatures and pressures, and was very rapidly expanding and cooling. Approximately 10⁻³⁷ seconds into the expansion, a phase transition caused a cosmic inflation, during which the Universe grew exponentially. After inflation stopped, the Universe consisted of a quark–gluon plasma, as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present Universe.

The Universe continued to grow in size and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form.[39] After about 10⁻¹¹ seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle physics experiments. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was now no longer high enough to create new proton–antiproton pairs (similarly for neutrons–antineutrons), so a mass annihilation immediately followed, leaving just one in 10¹⁰ of the original protons and neutrons, and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the Universe was dominated by photons (with a minor contribution from neutrinos).

A few minutes into the expansion, when the temperature was about a billion (one thousand million; 10⁹; SI prefix giga-) kelvin and the density was about that of air, neutrons combined with protons to form the Universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis.[40] Most protons remained uncombined as hydrogen nuclei. As the Universe cooled, the rest mass energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years the electrons and nuclei combined into atoms (mostly hydrogen); hence the radiation decoupled from matter and continued through space largely unimpeded. This relic radiation is known as the cosmic microwave background radiation.

The Hubble Ultra Deep Field showcases galaxies from an ancient era when the Universe was younger, denser, and warmer according to the Big Bang theory.

Over a long period of time, the slightly denser regions of the nearly uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the Universe. The four possible types of matter are known as cold dark matter, warm dark matter, hot dark matter and baryonic matter. The best measurements available (from WMAP) show that the data is well fit by a Lambda-CDM model in which dark matter is assumed to be cold (warm dark matter is ruled out by early reionization), and is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. In an "extended model" which includes hot dark matter in the form of neutrinos, the "physical baryon density" Ωbh² is estimated at about 0.023 (this is different from the 'baryon density' Ωb expressed as a fraction of the total matter/energy density, which as noted above is about 0.046), the corresponding cold dark matter density Ωch² is about 0.11, and the corresponding neutrino density Ωνh² is estimated to be less than 0.0062.
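
To relate these "physical" densities to the percentages quoted elsewhere in this article, divide by the square of the reduced Hubble parameter h. The short Python sketch below assumes h ≈ 0.704, consistent with the WMAP Hubble constant cited later; it is an illustrative conversion, not a statement from the source.

    # Converting the physical densities Omega*h^2 quoted above into fractions
    # of the critical density, assuming a reduced Hubble parameter h ~ 0.704.

    h = 0.704
    omega_b_h2 = 0.023     # physical baryon density
    omega_c_h2 = 0.11      # physical cold dark matter density
    omega_nu_h2 = 0.0062   # upper limit on the neutrino density

    omega_b = omega_b_h2 / h**2
    omega_c = omega_c_h2 / h**2
    omega_nu = omega_nu_h2 / h**2

    print(f"baryons          : {omega_b:.3f}")     # ~0.046, i.e. ~4.6%
    print(f"cold dark matter : {omega_c:.3f}")     # ~0.22, close to the ~23% above
    print(f"neutrinos        : < {omega_nu:.4f}")  # < ~0.0125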

Independent lines of evidence from Type Ia supernovae and the CMB imply that the Universe today is dominated by a mysterious form of energy known as dark energy, which apparently permeates all of space. The observations suggest 73% of the total energy density of today's Universe is in this form. When the Universe was very young, it was likely infused with dark energy, but with less space and everything closer together, gravity had the upper hand, and it was slowly braking the expansion. But eventually, after several billion years of expansion, the growing abundance of dark energy caused the expansion of the Universe to slowly begin to accelerate. Dark energy in its simplest formulation takes the form of the cosmological constant term in Einstein's field equations of general relativity, but its composition and mechanism are unknown and, more generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both observationally and theoretically.

All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the ΛCDM model of cosmology, which uses the independent frameworks of quantum mechanics and Einstein's General Relativity. As noted above, there is no well-supported model describing the action prior to 10⁻¹⁵ seconds or so. Apparently a new unified theory of quantum gravitation is needed to break this barrier. Understanding this earliest of eras in the history of the Universe is currently one of the greatest unsolved problems in physics.

Underlying assumptions

The Big Bang theory depends on two major assumptions: the universality of physical laws, and the cosmological principle. The cosmological principle states that on large scales the Universe is homogeneous and isotropic.

These ideas were initially taken as postulates, but today there are efforts to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine structure constant over much of the age of the universe is of order 10⁻⁵. Also, general relativity has passed stringent tests on the scale of the solar system and binary stars, while extrapolation to cosmological scales has been validated by the empirical successes of various aspects of the Big Bang theory.

If the large-scale Universe appears isotropic as viewed from Earth, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10⁻⁵ via observations of the CMB. The Universe has been measured to be homogeneous on the largest scales at the 10% level.

FLRW metric

General relativity describes spacetime by a metric, which determines the distances that separate nearby points. The points, which can be galaxies, stars, or other objects, themselves are specified using a coordinate chart or "grid" that is laid down over all spacetime. The cosmological principle implies that the metric should be homogeneous and isotropic on large scales, which uniquely singles out the Friedmann–Lemaître–Robertson–Walker metric (FLRW metric). This metric contains a scale factor, which describes how the size of the Universe changes with time. This enables a convenient choice of a coordinate system to be made, called comoving coordinates. In this coordinate system, the grid expands along with the Universe, and objects that are moving only due to the expansion of the Universe remain at fixed points on the grid. While their coordinate distance (comoving distance) remains constant, the physical distance between two such comoving points expands proportionally with the scale factor of the Universe.

The Big Bang is not an explosion of matter moving outward to fill an empty universe. Instead, space itself expands with time everywhere and increases the physical distance between two comoving points. Because the FLRW metric assumes a uniform distribution of mass and energy, it applies to our Universe only on large scales—local concentrations of matter such as our galaxy are gravitationally bound and as such do not experience the large-scale expansion of space.
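
A minimal Python sketch (illustrative only, with arbitrary units) shows this bookkeeping: the comoving separation of two objects riding the expansion stays fixed, while their physical separation scales with the scale factor.

    # Comoving vs. physical separation in the FLRW picture. The comoving
    # separation is fixed; the physical separation grows with the scale factor.

    comoving_separation = 100.0   # arbitrary units (say, Mpc), constant in time

    def physical_separation(scale_factor, comoving):
        return scale_factor * comoving

    for a in (0.5, 1.0, 2.0):     # a = 1 is "today"; a = 0.5 is when distances were half
        d = physical_separation(a, comoving_separation)
        print(f"a = {a}: comoving = {comoving_separation}, physical = {d}")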

Horizons

An important feature of the Big Bang spacetime is the presence of horizons. Since the Universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not had time to reach us. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the FLRW model that describes our Universe. Our understanding of the Universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the Universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the Universe continues to accelerate, there is a future horizon as well.

Observational evidence

The earliest and most direct kinds of observational evidence are the Hubble-type expansion seen in the redshifts of galaxies, the detailed measurements of the cosmic microwave background, the abundance of light elements (see Big Bang nucleosynthesis), and today also the large scale distribution and apparent evolution of galaxies which are predicted to occur due to gravitational growth of structure in the standard theory. These are sometimes called "the four pillars of the Big Bang theory".

Hubble's law and the expansion of space

Observations of distant galaxies and quasars show that these objects are redshifted—the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission lines or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed:
v = H0D,

where

  • v is the recessional velocity of the galaxy or other distant object,
  • D is the comoving distance to the object, and
  • H0 is Hubble's constant, measured to be 70.4 +1.3/−1.4 km/s/Mpc by the WMAP probe.
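
As a quick worked example (the 100 Mpc distance is a made-up figure, not taken from the text), Hubble's law can be applied directly in a few lines of Python:

    # v = H0 * D with the WMAP value of H0 quoted above.

    H0 = 70.4          # km/s per Mpc
    D = 100.0          # distance in Mpc (hypothetical example galaxy)

    v = H0 * D         # recessional velocity in km/s
    c = 299_792.458    # speed of light in km/s

    print(f"v = {v:.0f} km/s")      # 7040 km/s
    print(f"v / c = {v / c:.3f}")   # ~0.023, about 2% of the speed of light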

Hubble's law has two possible explanations. Either we are at the center of an explosion of galaxies—which is untenable given the Copernican Principle—or the Universe is uniformly expanding everywhere. This universal expansion was predicted from general relativity by Alexander Friedmann in 1922 and Georges Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang theory as developed by Friedmann, Lemaître, Robertson and Walker.

The theory requires the relation v = HD to hold at all times, where D is the comoving distance, v is the recessional velocity, and v, H, and D vary as the Universe expands (hence we write H0 to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable Universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity v. However, the redshift is not a true Doppler shift, but rather the result of the expansion of the Universe between the time the light was emitted and the time that it was detected.

That space is undergoing metric expansion is shown by direct observational evidence of the Cosmological Principle and the Copernican Principle, which together with Hubble's law have no other explanation. Astronomical redshifts are extremely isotropic and homogeneous, supporting the Cosmological Principle that the Universe looks the same in all directions, along with much other evidence. If the redshifts were the result of an explosion from a center distant from us, they would not be so similar in different directions.

Measurements of the effects of the cosmic microwave background radiation on the dynamics of distant astrophysical systems in 2000 proved the Copernican Principle, that the Earth is not in a central position, on a cosmological scale. Radiation from the Big Bang was demonstrably warmer at earlier times throughout the Universe. Uniform cooling of the cosmic microwave background over billions of years is explainable only if the Universe is experiencing a metric expansion, and excludes the possibility that we are near the unique center of an explosion.

Cosmic microwave background radiation

WMAP image of the cosmic microwave background radiation

During the first few days of the Universe, the Universe was in full thermal equilibrium, with photons being continually emitted and absorbed, giving the radiation a blackbody spectrum. As the Universe expanded, it cooled to a temperature at which photons could no longer be created or destroyed. The temperature was still high enough for electrons and nuclei to remain unbound, however, and photons were constantly "reflected" from these free electrons through a process called Thomson scattering. Because of this repeated scattering, the early Universe was opaque to light.

When the temperature fell to a few thousand Kelvin, electrons and nuclei began to combine to form atoms, a process known as recombination. Since photons scatter infrequently from neutral atoms, radiation decoupled from matter when nearly all the electrons had recombined, at the epoch of last scattering, 379,000 years after the Big Bang. These photons make up the CMB that is observed today, and the observed pattern of fluctuations in the CMB is a direct picture of the Universe at this early epoch. The energy of photons was subsequently redshifted by the expansion of the Universe, which preserved the blackbody spectrum but caused its temperature to fall, meaning that the photons now fall into the microwave region of the electromagnetic spectrum. The radiation is thought to be observable at every point in the Universe, and comes from all directions with (almost) the same intensity.
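
A short Python sketch illustrates this cooling. The last-scattering redshift z ≈ 1090 is a standard value assumed here rather than one quoted above, and Wien's displacement law is used to show why the radiation now peaks in the microwave band.

    # Redshifting a blackbody preserves its shape; only the temperature drops,
    # scaling as T_observed = T_emitted / (1 + z).

    T_today = 2.726            # K, the COBE/FIRAS temperature cited below
    z_last_scattering = 1090   # assumed standard value, not from the text above

    T_at_last_scattering = T_today * (1 + z_last_scattering)
    print(f"T at last scattering ~ {T_at_last_scattering:.0f} K")   # ~3000 K

    # Wien's displacement law gives the wavelength at which the spectrum peaks.
    wien_b = 2.898e-3          # m*K
    peak_now = wien_b / T_today
    print(f"peak wavelength today ~ {peak_now * 1000:.2f} mm")      # ~1.06 mm, microwaves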

In 1964, Arno Penzias and Robert Wilson accidentally discovered the cosmic background radiation while conducting diagnostic observations using a new microwave receiver owned by Bell Laboratories. Their discovery provided substantial confirmation of the general CMB predictions—the radiation was found to be isotropic and consistent with a blackbody spectrum of about 3 K—and it tipped the balance of opinion in favor of the Big Bang hypothesis. Penzias and Wilson were awarded a Nobel Prize for their discovery.

The cosmic microwave background spectrum measured by the FIRAS instrument on the COBE satellite is the most-precisely measured black body spectrum in nature. The data points and error bars on this graph are obscured by the theoretical curve.

In 1989, NASA launched the Cosmic Background Explorer satellite (COBE), and the initial findings, released in 1990, were consistent with the Big Bang's predictions regarding the CMB. COBE found a residual temperature of 2.726 K and in 1992 detected for the first time the fluctuations (anisotropies) in the CMB, at a level of about one part in 10⁵. John C. Mather and George Smoot were awarded the Nobel Prize for their leadership in this work. During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001, several experiments, most notably BOOMERanG, found the Universe to be almost spatially flat by measuring the typical angular size (the size on the sky) of the anisotropies. (See shape of the Universe.)

In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe (WMAP) were released, yielding what were at the time the most accurate values for some of the cosmological parameters. This spacecraft also disproved several specific cosmic inflation models, though the results were consistent with inflation theory in general. WMAP also confirmed that a sea of cosmic neutrinos permeates the Universe and found clear evidence that the first stars took more than half a billion years to create a cosmic fog. A new space probe named Planck, with goals similar to WMAP, was launched in May 2009. It is anticipated to soon provide even more accurate measurements of the CMB anisotropies. Many other ground- and balloon-based experiments are also currently running; see Cosmic microwave background experiments.

The background radiation is exceptionally smooth, which presented a problem in that conventional expansion would mean that photons coming from opposite directions in the sky were coming from regions that had never been in contact with each other. The leading explanation for this far reaching equilibrium is that the Universe had a brief period of rapid exponential expansion, called inflation. This would have the effect of driving apart regions that had been in equilibrium, so that all the observable Universe was from the same equilibrated region.

Abundance of primordial elements

Using the Big Bang model it is possible to calculate the concentration of helium-4, helium-3, deuterium and lithium-7 in the Universe as ratios to the amount of ordinary hydrogen, H. All the abundances depend on a single parameter, the ratio of photons to baryons, which itself can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by number) are about 0.25 for ⁴He/H, about 10⁻³ for ²H/H, about 10⁻⁴ for ³He/H and about 10⁻⁹ for ⁷Li/H.

The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for ⁴He, and a factor of two off for ⁷Li; in the latter two cases there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed there is no obvious reason outside of the Big Bang that, for example, the young Universe (i.e., before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products) should have more helium than deuterium or more deuterium than ³He, and in constant ratios, too.
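
A back-of-the-envelope Python sketch, using only the predicted ratios quoted above, shows how the helium-to-hydrogen ratio translates into the familiar helium mass fraction of roughly 20–25% (trace species are ignored):

    # Converting the predicted 4He/H mass ratio into a mass fraction of the
    # baryonic matter; the other ratios are listed for reference.

    he4_to_h = 0.25            # predicted 4He/H ratio by mass, as quoted above

    helium_mass_fraction = he4_to_h / (1.0 + he4_to_h)
    print(f"helium mass fraction Y ~ {helium_mass_fraction:.2f}")   # ~0.20

    predictions = {"2H/H": 1e-3, "3He/H": 1e-4, "7Li/H": 1e-9}      # by mass
    for species, ratio in predictions.items():
        print(f"{species:6s}: ~{ratio:.0e}")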

Galactic evolution and distribution

This panoramic view of the entire near-infrared sky reveals the distribution of galaxies beyond the Milky Way. The galaxies are color coded by redshift.

Detailed observations of the morphology and distribution of galaxies and quasars provide strong evidence for the Big Bang. A combination of observations and theory suggest that the first quasars and galaxies formed about a billion years after the Big Bang, and since then larger structures have been forming, such as galaxy clusters and superclusters. Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early Universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures agree well with Big Bang simulations of the formation of structure in the Universe and are helping to complete details of the theory.

Other lines of evidence

After some controversy, the age of Universe as estimated from the Hubble expansion and the CMB is now in good agreement with (i.e., slightly larger than) the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars.

The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of temperature-sensitive emission lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift; this seems to be roughly true, but unfortunately the amplitude does depend on cluster properties which do change substantially over cosmic time, so a precise test is impossible.

Motivation and development - Big Bang

Artist's depiction of the WMAP satellite gathering data to help scientists understand the Big Bang

The Big Bang theory developed from observations of the structure of the Universe and from theoretical considerations. In 1912 Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from Albert Einstein's equations of general relativity, showing that the Universe might be expanding in contrast to the static Universe model advocated by Einstein at that time. In 1924, Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the inferred recession of the nebulae was due to the expansion of the Universe.

In 1931 Lemaître went further and suggested that the evident expansion in forward time required that the Universe contracted backwards in time, and would continue to do so until it could contract no further, bringing all the mass of the Universe into a single point, a "primeval atom" where and when the fabric of time and space came into existence.

Starting in 1924, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the 100-inch (2,500 mm) Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recession velocity—now known as Hubble's law. Lemaître had already shown that this was expected, given the Cosmological Principle.

During the 1930s other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model,[22] the oscillatory Universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard Tolman) and Fritz Zwicky's tired light hypothesis.

After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady state model, whereby new matter would be created as the Universe seemed to expand. In this model, the Universe is roughly the same at any point in time. The other was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced big bang nucleosynthesis (BBN) and whose associates, Ralph Alpher and Robert Herman, predicted the cosmic microwave background radiation (CMB). Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over Steady State. The discovery and confirmation of the cosmic microwave background radiation in 1964 secured the Big Bang as the best theory of the origin and evolution of the cosmos. Much of the current work in cosmology includes understanding how galaxies form in the context of the Big Bang, understanding the physics of the Universe at earlier and earlier times, and reconciling observations with the basic theory.

Huge strides in Big Bang cosmology have been made since the late 1990s as a result of major advances in telescope technology as well as the analysis of copious data from satellites such as COBE,[30] the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the Universe appears to be accelerating.

What is the Big Bang?

The Big Bang model, or theory, is the prevailing cosmological theory of the early development of the universe.[1] The theory purports to explain some of the earliest events in the universe (but not the absolute earliest state of things, or where it comes from). Our universe was once in an extremely hot and dense state that expanded rapidly (a "Big Bang"). There is little consensus among physicists about the origins of the universe itself (i.e. just as evolution seeks to explain our past only after the origin of life, the Big Bang theory explains only what happened after the uncertain origin of the universe). What is clear is that the Big Bang caused the young universe to cool and resulted in the present diluted state that continues to expand today. Based on the best available measurements as of 2010, the original state of the universe existed around 13.7 billion years ago, which is often referred to as the time when the Big Bang occurred. The theory is the most comprehensive and accurate explanation supported by scientific evidence and observations.

Georges Lemaître proposed what became known as the Big Bang theory of the origin of the universe, which he called his "hypothesis of the primeval atom". The framework for the model relies on Albert Einstein's general relativity and on simplifying assumptions (such as homogeneity and isotropy of space). The governing equations had been formulated by Alexander Friedmann. In 1929, Edwin Hubble discovered that the distances to far away galaxies were generally proportional to their redshifts—an idea originally suggested by Lemaître in 1927. Hubble's observation was taken to indicate that all very distant galaxies and clusters have an apparent velocity directly away from our vantage point: the farther away, the higher the apparent velocity.

If the distance between galaxy clusters is increasing today, everything must have been closer together in the past. This idea has been considered in detail back in time to extreme densities and temperatures, and large particle accelerators have been built to experiment on and test such conditions, resulting in significant confirmation of this theory. On the other hand, these accelerators have limited capabilities to probe into such high energy regimes. There is little evidence regarding the absolute earliest instant of the expansion. Thus, the Big Bang theory cannot and does not provide any explanation for such an initial condition; rather, it describes and explains the general evolution of the universe going forward from that point on. The observed abundances of the light elements throughout the cosmos closely match the calculated predictions for the formation of these elements from nuclear processes in the rapidly expanding and cooling first minutes of the universe, as logically and quantitatively detailed according to Big Bang nucleosynthesis.

Fred Hoyle is credited with coining the term Big Bang during a 1949 radio broadcast. It is popularly reported that Hoyle, who favored an alternative "steady state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Hoyle later helped considerably in the effort to understand stellar nucleosynthesis, the nuclear pathway for building certain heavier elements from lighter ones. After the discovery of the cosmic microwave background radiation in 1964, and especially when its spectrum (i.e., the amount of radiation measured at each wavelength) was found to match that of thermal radiation from a black body, most scientists were fairly convinced by the evidence that some version of the Big Bang scenario must have occurred.

Rotational mouse

A rotational mouse is a type of computer mouse which attempts to expand traditional mouse functionality. The objective of rotational mice is to facilitate three degrees of freedom (3DOF) for human-computer interaction by adding a third dimensional input, yaw (or Rz), to the existing x and y dimensional inputs. There have been several attempts to develop rotating mice, using a variety of mechanisms to detect rotation.

Mechanisms using relative measures of rotation: These devices are able to detect that the mouse has rotated by so many degrees, but cannot accurately identify where the rotation started or ended, increasing their tendency to lose orientation.

• 2-balls or 2-sensors

The first mention of actually rotating objects on screen by rotating the mouse came in 1989, with a US patent for a cursor display apparatus, US patent number 4,887,230. This led to a succession of refinements of the 2-ball / 2-sensor mouse concept. Notable examples include:

1. Multi-dimensional input device; US 5,298,919

2. Positioning device reporting X, Y and Yaw motion; US 5,477,237

3. Twin mouse digitizer; US 6,081,258

4. Pointing device having rotational sensing mechanisms; US 6,618,038

5. Multiple sensor device and method; US 6,847,353

Unlike the conventional mouse, which senses x-axis and y-axis displacement only, these 2-ball or 2-sensor mice are also able to sense z-axis angular motion (yaw), calculated from the two sets of x-y displacement data.

• Mechanical ring & rotary encoder

Within these devices rotation is detected by a mechanical ring (US 5,936,612). This mechanism was promoted by the Canadian company Handview Inc.; however, it apparently never made it to production.

• Gyroscopes or accelerometers

US Patent 6,130,664, titled "Input Device", was the first known application of gyroscopes to a rotating mouse. However, this technology has yet to appear in a commercial mouse.

Mechanisms using absolute measures of rotation

• Tablet/Digitiser Puck

The patent for an Absolute position controller, US 4,814,533, is the earliest known reference to this type of input device. However, it was the patent for an orientational mouse computer input system, US 5,162,781, which suggested using a tablet with a detectable pattern or grid and sensors in the puck for computer navigation.

The Wacom Intuos 4D Mouse puck was the first commercial rotating “mouse.” The product was not a standalone mouse but rather a tablet accessory.

• Compass

The Orbita mouse is the first commercially released non-tablet rotating mouse. Licensed and commercialized by the Australian company Cyber Sport, the Orbita is equipped with a patented compass mechanism that solved the problems that plagued earlier rotating mechanisms. The inbuilt compass gives the mouse the ability to detect rotation based on the Earth's magnetic field, so that it can accurately maintain orientation once the 'up' direction is specified. The round design makes it completely rotatable, spinning freely on ball bearings, and it is usable at any angle due to the 'push and squeeze' button configuration encased in a soft silicone shell. The mouse reports rotation as scroll-wheel commands, so it is compatible with most applications.

Due to its round shape, the Orbita mouse is commonly assumed to be similar to the original, circular USB iMac mouse. However, the two mice are functionally different, primarily because the iMac's mouse is not a rotating mouse. The Orbita, unlike the Puck mouse, is designed to be ergonomic, with the round shape lending practical aid to the mouse's spinning action rather than being a purely aesthetic trait.

Mouse Rage

Mouse rage is a particular type of computer rage. It is a behavioural response provoked by failure or unexpected results observed when using a computer. In particular, this is expressed by focusing the negative emotion incurred by the event on a computer mouse.

Symptoms of mouse rage include frantic wiping of the mouse on the mousepad, thumping the mouse on the desk or even throwing it at a wall. In severe cases, mouse rage can result in the total destruction or major loss of functionality in the computer mouse.

The reaction can be provoked by hardware or software events. One common cause of mouse rage is in the use of the Internet, where the website being visited may load more slowly than expected, or have a confusing design, causing the user to get impatient and often give up entirely on the site. Other causes include hardware that has ceased working or cannot be configured, through to major data corruption in an ERP system. Another major cause of mouse rage is the mouse ceasing to interact with the computer (i.e., freezing) due to a looming Blue Screen of Death, crash, or other failure. Recently, a major cause of mouse rage, and indeed of all types of computer rage, has been computer gaming. If a player is killed repeatedly in an online game by a particular player, they often become angry, which is frequently expressed as a form of mouse rage. Lag can also induce mouse rage. Mouse rage, and indeed all types of computer rage while gaming, are counterproductive, as the player loses concentration and can damage equipment, leaving them vulnerable to enemy attack.

Mouse rage is sometimes referred to as "Mouse Rage Syndrome" or MRS.

The Footmouse and Its Use

A footmouse is a type of computer mouse that gives users the ability to move the cursor and click the mouse buttons with their feet.

It is primarily used by users with disabilities or with high-back or neck problems. It is also promoted as a way to prevent such problems in the future and as a means to increase productivity by not having to move one's hand between the keyboard and mouse.

There are about ten different kinds of footmice. Not all of them are commercially available. Some specialized companies design their own foot controlled mouse for disabled persons.

The main difference between the various types of footmice is whether they use a flat pedal that slides or a pedal that tilts. A footmouse that uses sliding can be a slipper with a mouse connected to it, or a special frame in which a pedal can move around. A tiny magnet or other location device can also be used on a tablet. A footmouse that uses a tilting pedal has one or two pedals which are able to tilt and turn in one or more directions. Buttons that are pushed by the feet are not considered footmice.


Use

Using a footmouse is slower than using a normal mouse (a hand mouse), since most people have less control over precise movements with their feet and legs than with their hands. If the footmouse is used together with a keyboard, the cursor can be moved around while typing, so no time is wasted moving the hand between the keyboard and the mouse. If a person cannot use a keyboard, a virtual keyboard on the screen can be used to type text by clicking each character on the virtual keyboard. People who have even less control over their foot and leg movement and are unable to operate a footmouse can sometimes use a few switches to move the cursor in a certain direction.

If a footmouse is used incorrectly for a long time, it can cause muscle cramp in the legs, and can have a negative influence on lower back problems.

Mouse Use In Gaming

Mice often function as an interface for PC-based computer games and sometimes for video game consoles.

First-person shooters

Due to the cursor-like nature of the crosshairs in first-person shooters (FPS), a combination of mouse and keyboard provides a popular way to play FPS games. Players use the X-axis of the mouse for looking (or turning) left and right, leaving the Y-axis for looking up and down. Many gamers prefer this primarily in FPS games over a gamepad or joypad because it provides a higher resolution for input. This means they are able to make small, precise motions in the game more easily. The left button usually controls primary fire. If the game supports multiple fire-modes, the right button often provides secondary fire from the selected weapon. The right button may also provide bonus options for a particular weapon, such as allowing access to the scope of a sniper rifle or allowing the mounting of a bayonet or silencer.

Gamers can use a scroll wheel for changing weapons, or for controlling scope-zoom magnification. On most FPS games, programming may also assign more functions to additional buttons on mice with more than three controls. A keyboard usually controls movement (for example, WASD, for moving forward, left, backward and right, respectively) and other functions such as changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag (latency) will give a player an advantage over players with less accurate or slower mice.

An early technique of players, circle strafing, saw a player continuously strafing while aiming and shooting at an opponent by walking in a circle around the opponent, with the opponent at the center of the circle. Players could achieve this by holding down a key for strafing while continuously aiming the mouse towards the opponent.

Games using mice for input have such a degree of popularity that many manufacturers, such as Logitech, Cyber Snipa, Razer USA Ltd and SteelSeries, make peripherals such as mice and keyboards specifically for gaming. Such mice may feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shape, and other features such as adjustable DPI.

Many games, such as first- or third-person shooters, have a setting named "invert mouse" or similar (not to be confused with "button inversion", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control-configuration.
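
The following Python sketch is engine-agnostic and all names in it are illustrative; it shows the typical mapping described above, with the X delta driving yaw, the Y delta driving pitch, a sensitivity factor, and an optional "invert mouse" setting.

    # Minimal mouse-look handling: X delta turns left/right (yaw), Y delta looks
    # up/down (pitch). Function and parameter names are illustrative only.

    def apply_mouse_look(yaw, pitch, dx, dy, sensitivity=0.1, invert_y=False):
        """Return the new (yaw, pitch) after a mouse movement of (dx, dy)."""
        yaw += dx * sensitivity
        if invert_y:
            dy = -dy                 # "invert mouse": push forward to look down
        pitch -= dy * sensitivity    # assuming dy < 0 when the mouse moves forward
        pitch = max(-90.0, min(90.0, pitch))   # clamp so the view cannot flip over
        return yaw, pitch

    # Example: the player sweeps the mouse 40 counts right and 25 counts forward.
    yaw, pitch = apply_mouse_look(0.0, 0.0, dx=40, dy=-25)
    print(yaw, pitch)   # 4.0 2.5 -> turned right and looked up slightly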

After id Software's Doom, the game that popularized FPS games but which did not support vertical aiming with a mouse (the y-axis served for forward/backward movement), competitor 3D Realms' Duke Nukem 3D became one of the first games that supported using the mouse to aim up and down. This and other games using the Build engine had an option to invert the Y-axis. The "invert" feature actually made the mouse behave in a manner that users now regard as non-inverted (by default, moving mouse forward resulted in looking down). Soon after, id Software released Quake, which introduced the invert feature as users now know it. Other games using the Quake engine have come on the market following this standard, likely due to the overall popularity of Quake.

Home consoles

In 1988 the educational video game system, the VTech Socrates, featured a wireless mouse with an attached mouse pad as an optional controller used for some games. In the early 1990s the Super Nintendo Entertainment System video game system featured a mouse in addition to its controllers. The Mario Paint game in particular used the mouse's capabilities, as did its successor on the Nintendo 64. Sega released official mice for their Genesis/Mega Drive, Saturn and Dreamcast consoles. NEC sold official mice for its PC Engine and PC-FX consoles. Sony Computer Entertainment released an official mouse product for the PlayStation console, and included one along with the Linux for PlayStation 2 kit. However, users can attach virtually any USB mouse to the PlayStation 2 console. In addition, the PlayStation 3 also fully supports USB mice. More recently, the Wii gained this capability as well through a software update.

Mouse in the marketplace

Around 1981 Xerox included mice with its Xerox Star, based on the mouse used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use.

The Macintosh design, commercially successful and technically influential, led many other vendors to begin producing mice or including them with their other computer products (by 1986, Atari ST, Commodore Amiga, Windows 1.0, GEOS for the Commodore 64, and the Apple IIGS). The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers.

In November 2008, Logitech built their billionth mouse.

Mousepads

Engelbart's original mouse did not require a mousepad; the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice starting with the steel roller ball mouse have required a mousepad for optimal performance.

The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because in order to roll smoothly, the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist.

Most optical and laser mice do not require a pad. Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or laser tracking, for example, a transparent or reflective surface.

Mouse Buttons And Speed

Buttons

Mouse buttons are microswitches which can be pressed ("clicked") in order to select or interact with an element of a graphical user interface.

The three-button scrollmouse has become the most commonly available design. As of 2007 (and roughly since the late 1990s), users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button is located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software.

Speed

The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed incorrectly as dots per inch (DPI) – the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi). If the default mouse-tracking condition involves moving the cursor by one screen pixel or dot per reported step, then the CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves for a given mouse movement. However, software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI would suggest. Current software can also change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement since the last stop. In most software this setting is named "speed", referring to "cursor precision". Some software instead labels this setting "acceleration", although that term is strictly incorrect: in most mouse software, acceleration refers to the setting that lets the user modify the cursor's acceleration, that is, the change in cursor speed over time while the mouse movement is constant.
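
As a rough illustration of the relationship described above, the sketch below converts physical mouse movement into cursor travel using the CPI rating and a software sensitivity multiplier; the numbers and the assumption of a one-count-per-pixel baseline are illustrative only.

    # Sketch: cursor travel for a given physical mouse movement,
    # assuming a baseline of one pixel of cursor motion per reported count.
    def cursor_travel_pixels(inches_moved, cpi, sensitivity=1.0):
        counts = inches_moved * cpi      # steps reported by the mouse hardware
        return counts * sensitivity      # software "speed" scaling

    # An 800 CPI mouse moved half an inch at sensitivity 1.0:
    print(cursor_travel_pixels(0.5, 800))        # 400.0 pixels
    # The same movement with the software sensitivity halved:
    print(cursor_travel_pixels(0.5, 800, 0.5))   # 200.0 pixels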

For simple software, when the mouse starts to move, the software counts the number of "counts" received from the mouse and moves the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor moves slowly on the screen, with good precision. When the movement of the mouse exceeds the value set for the "threshold", the software starts to move the cursor more quickly, with a greater rate factor. Usually, the user can set the value of this second rate factor by changing the "acceleration" setting.

Operating systems sometimes apply acceleration, referred to as "ballistics", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response.
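
The sketch below implements the two-threshold doubling described above, applied separately to each axis; the threshold values are placeholders, not the actual Windows defaults.

    # Threshold-based pointer "ballistics": deltas above a first threshold are
    # doubled, and deltas above a second threshold are doubled again.
    # Threshold values here are illustrative placeholders.
    def accelerate(delta, threshold1=6, threshold2=10):
        magnitude = abs(delta)
        if magnitude > threshold2:
            return delta * 4        # doubled twice
        if magnitude > threshold1:
            return delta * 2        # doubled once
        return delta

    def accelerate_xy(dx, dy):
        # Applied separately in X and Y, which is what makes the
        # response nonlinear for diagonal movements.
        return accelerate(dx), accelerate(dy)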

Multiple-Mouse Systems

Some systems allow two or more mice to be used at once as input devices. 16-bit era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer. The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around.

Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time generally results in seemingly random movements of the cursor. However, the advantage of this support lies not in simultaneous use, but in simultaneous availability for alternate use: for example, a laptop user editing a complex document might use a handheld mouse for drawing and manipulation of graphics, but when editing a section of text, use a built-in trackpad to allow movement of the cursor while keeping his hands on the keyboard. Windows' multiple-device support means that the second device is available for use without having to disconnect or disable the first.

DirectInput originally allowed access to multiple mice as separate devices, but Windows NT-based systems could not make use of this. Windows XP introduced a feature called "Raw Input" that offers the ability to track multiple mice independently, allowing programs to make use of separate mice. Though a program could, for example, draw multiple cursors if it were a fullscreen application, Windows itself still provides just one cursor and keyboard.

As of 2009, Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support unlimited numbers of cursors and keyboards through Multi-Pointer X.

There have also been proposals for having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications.

Mouse connectivity and communication protocols

A Microsoft wireless Arc mouse

To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses.

While the electrical interface and the format of the data transmitted by commonly available mice are currently standardized on USB, in the past they varied between manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer.

Mouse use in DOS applications became more common after the introduction of the Microsoft mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a Microsoft compatible driver (even if the mouse hardware itself was incompatible with Microsoft's). An interesting footnote is that the Microsoft driver standard communicates mouse movements in standard units called "mickeys".

Serial interface and protocol

Standard PC mice once used the RS-232C serial port via a D-subminiature connector, which provided power to run the mouse's circuits as well as data on mouse movements. The Mouse Systems Corporation version used a five-byte protocol and supported three buttons. The Microsoft version used an incompatible three-byte protocol and only allowed for two buttons. Due to the incompatibility, some manufacturers sold serial mice with a mode switch: "PC" for MSC mode, "MS" for Microsoft mode.
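
As a sketch of how such a packet can be decoded, the function below follows the commonly documented Microsoft three-byte layout (two buttons, 8-bit signed X and Y split across the bytes); treat the exact bit positions as an assumption to verify against the hardware in question.

    # Decoding a Microsoft-protocol serial mouse packet (three 7-bit bytes).
    # Bit layout is the commonly documented one and is assumed, not guaranteed:
    #   byte 1: sync bit set, left/right buttons, high bits of X and Y
    #   bytes 2-3: low six bits of X and Y
    def _signed8(value):
        return value - 256 if value >= 128 else value

    def decode_ms_serial(b1, b2, b3):
        assert b1 & 0x40, "first byte of a packet has its sync bit set"
        left  = bool(b1 & 0x20)
        right = bool(b1 & 0x10)
        dx = _signed8(((b1 & 0x03) << 6) | (b2 & 0x3F))
        dy = _signed8(((b1 & 0x0C) << 4) | (b3 & 0x3F))
        return {"left": left, "right": right, "dx": dx, "dy": dy}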

PS/2 interface and protocol

With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 interface for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN, in lieu of the former 5-pin connector. In default mode (called stream mode) a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets. For any motion, button press or button release event, a PS/2 mouse sends, over a bi-directional serial port, a sequence of three bytes, with the following format:


        Bit 7  Bit 6  Bit 5  Bit 4  Bit 3  Bit 2  Bit 1  Bit 0
Byte 1  YV     XV     YS     XS     1      MB     RB     LB
Byte 2  X movement
Byte 3  Y movement

Here, XS and YS represent the sign bits of the movement vectors, XV and YV indicate an overflow in the respective vector component, and LB, MB and RB indicate the status of the left, middle and right mouse buttons (1 = pressed). PS/2 mice also understand several commands for reset and self-test, switching between different operating modes, and changing the resolution of the reported motion vectors.
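
The sketch below decodes a packet with exactly the layout shown in the table above; the 8-bit movement bytes are combined with the XS/YS sign bits to recover signed deltas.

    # Decoding the 3-byte PS/2 stream-mode packet from the table above.
    def decode_ps2_packet(b1, b2, b3):
        assert b1 & 0x08, "bit 3 of the first byte is always 1 (useful for sync)"
        buttons = {
            "left":   bool(b1 & 0x01),   # LB
            "right":  bool(b1 & 0x02),   # RB
            "middle": bool(b1 & 0x04),   # MB
        }
        dx = b2 - 256 if b1 & 0x10 else b2   # XS extends X movement to a signed value
        dy = b3 - 256 if b1 & 0x20 else b3   # YS extends Y movement to a signed value
        overflow = {"x": bool(b1 & 0x40), "y": bool(b1 & 0x80)}  # XV, YV
        return {"buttons": buttons, "dx": dx, "dy": dy, "overflow": overflow}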

In Linux, a PS/2 mouse is detected as a /dev/psaux device.

A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of "IntelliMouse" and "PS/2"). It initially operates in standard PS/2 format, for backwards compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five).
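
A hedged sketch of reading that extra byte is shown below; the bit positions for the five-button Explorer variant follow the commonly documented layout and should be treated as assumptions rather than a specification.

    # Reading the fourth byte of an IMPS/2 packet (assumed layout).
    def decode_wheel_byte(b4, explorer=False):
        if explorer:
            z = b4 & 0x0F
            z = z - 16 if z >= 8 else z          # signed 4-bit wheel delta
            return {"wheel": z,
                    "button4": bool(b4 & 0x10),
                    "button5": bool(b4 & 0x20)}
        return {"wheel": b4 - 256 if b4 >= 128 else b4}   # signed 8-bit wheel delta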

The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, such that an ordinary PS/2 driver can handle them.

Mouse vendors also use other extended formats, often without providing public documentation.

For 3D (or six-degree-of-freedom) input, vendors have made many extensions to both the hardware and the software. In the late 1990s Logitech created ultrasound-based tracking which gave 3D input to within a few millimetres of accuracy, which worked well as an input device but failed as a profitable product. In 2008, Motion4U introduced its "OptiBurst" system, which uses IR tracking, as a plugin for the Maya graphics software.

Apple Desktop Bus

Apple Macintosh Plus mice, 1986: beige mouse (left) and platinum mouse (right)

In 1986 Apple first implemented the Apple Desktop Bus, allowing the daisy-chaining of up to 16 devices, including arbitrarily many mice and other devices, on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to computer/mouse communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998, when the iMac joined the industry-wide switch to USB. Beginning with the "Bronze Keyboard" PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005.

USB

The industry-standard USB protocol and its connector have become widely used for mice; USB is currently among the most popular connection types.

Cordless or wireless

A wireless mouse made for notebook computers

Cordless or wireless mice transmit data via infrared radiation (see IrDA) or radio (including Bluetooth). The receiver is connected to the computer through a serial or USB port. The newer nano receivers were designed to be small enough to remain connected in a laptop or notebook computer during transport, while still being large enough to easily remove.

Operation

A mouse typically controls the motion of a cursor in two dimensions in a graphical user interface (GUI). Clicking or hovering (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the cursor hovers over this icon might cause a text editing program to open the file in a window.

Users can also employ mice gesturally, meaning that a stylized motion of the mouse cursor itself, called a "gesture", can issue a command or map to a specific action. For example, in a drawing program, moving the mouse in a rapid "x" motion over a shape might delete the shape.

Gestural interfaces occur more rarely than plain pointing and clicking, and people often find them more difficult to use because they require finer motor control from the user. However, a few gestural conventions have become widespread, including the drag-and-drop gesture, in which:

  1. The user presses the mouse button while the mouse cursor hovers over an interface object
  2. The user moves the cursor to a different location while holding the button down
  3. The user releases the mouse button

For example, a user might drag-and-drop a picture representing a file onto a picture of a trash can, thus instructing the system to delete the file.
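
The three steps above amount to a small state machine; the sketch below is one illustrative way to express it, with the event names and the drop_on() callback invented for the example rather than taken from any GUI toolkit.

    # Minimal drag-and-drop state machine following the three steps listed above.
    class DragAndDrop:
        def __init__(self):
            self.dragging = False
            self.payload = None

        def on_button_press(self, obj_under_cursor):
            # 1. Button pressed while the cursor hovers over an interface object.
            self.dragging = True
            self.payload = obj_under_cursor

        def on_move(self, x, y):
            # 2. Cursor moves while the button is held; a real UI would draw
            #    the dragged object following the cursor here.
            pass

        def on_button_release(self, obj_under_cursor, drop_on):
            # 3. Button released: hand the payload to the drop target, e.g. a
            #    trash-can icon whose drop_on() callback deletes the file.
            if self.dragging and obj_under_cursor is not None:
                drop_on(obj_under_cursor, self.payload)
            self.dragging = False
            self.payload = None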

Other uses of the mouse's input occur commonly in special application-domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate, so that all sides can be examined.

When mice have more than one button, software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button.

Different ways of operating the mouse cause specific things to happen in the GUI:

  • Click: pressing and releasing a button.
    • (left) Single-click: clicking the main button.
    • (left) Double-click: clicking the button two times in quick succession; this counts as a different gesture than two separate single clicks (see the timing sketch after this list).
    • (left) Triple-click: clicking the button three times in quick succession.
    • Right-click: clicking the secondary button.
    • Middle-click: clicking the tertiary (middle) button.
  • Drag: pressing and holding a button, then moving the mouse without releasing. (Dragging with the right mouse button rather than the more commonly used left button is normally called out explicitly, e.g. "drag with the right mouse button".)
  • Button chording (a.k.a. Rocker navigation).
    • Combination of right-click then left-click.
    • Combination of left-click then right-click or keyboard letter.
    • Combination of left or right-click and the mouse wheel.
  • Clicking while holding down a modifier key.
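
As referenced in the double-click item above, the sketch below shows one way software can tell a double-click apart from two separate single clicks, using a timing threshold; the 0.5-second value is a placeholder, not a system default.

    # Distinguishing a double-click from two single clicks by timing.
    import time

    DOUBLE_CLICK_INTERVAL = 0.5    # seconds; illustrative, not a system default
    _last_click_time = None

    def classify_click():
        global _last_click_time
        now = time.monotonic()
        if _last_click_time is not None and now - _last_click_time <= DOUBLE_CLICK_INTERVAL:
            _last_click_time = None            # consume the pair
            return "double-click"
        _last_click_time = now
        return "single-click"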

Standard semantic gestures include:

  • Rollover
  • Selection
  • Menu traversal
  • Drag and drop
  • Pointing
  • Goal crossing

Mechanical description of the mouse

Operating an opto-mechanical mouse:

  1. Moving the mouse turns the ball.
  2. X and Y rollers grip the ball and transfer movement.
  3. Optical encoding disks include light holes.
  4. Infrared LEDs shine through the disks.
  5. Sensors gather light pulses to convert to X and Y vectors.

Bill English, builder of Engelbart's original mouse, invented the ball mouse in 1972 while working for Xerox PARC.

The ball-mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thereby detecting the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required.

Mechanical mouse, shown with the top cover removed

The ball mouse has two freely rotating rollers, located 90 degrees apart. One roller detects the forward–backward motion of the mouse and the other the left–right motion. Opposite the two rollers is a third one (white, in the photo, at 45 degrees) that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc, however, has a pair of light beams, located so that a given beam becomes interrupted, or again starts to pass light freely, when the other beam of the pair is about halfway between changes. Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are approximately in quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, and via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the screen.
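
The quadrature scheme can be summarized in a few lines of code; the sketch below accumulates encoder counts from the two sensor signals, with the sign convention (which rotation direction counts as positive) left as an arbitrary assumption.

    # Quadrature decoding: two sensors roughly 90 degrees out of phase; the order
    # of their transitions gives the direction of wheel rotation.
    _TRANSITIONS = {   # (previous A, previous B, current A, current B) -> step
        (0, 0, 1, 0): +1, (1, 0, 1, 1): +1, (1, 1, 0, 1): +1, (0, 1, 0, 0): +1,
        (0, 0, 0, 1): -1, (0, 1, 1, 1): -1, (1, 1, 1, 0): -1, (1, 0, 0, 0): -1,
    }

    class QuadratureDecoder:
        def __init__(self):
            self.state = (0, 0)    # last seen (A, B) sensor levels
            self.count = 0         # accumulated wheel position

        def update(self, a, b):
            step = _TRANSITIONS.get(self.state + (a, b), 0)   # 0 = no change or invalid
            self.count += step
            self.state = (a, b)
            return step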

The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately.

Hawley Mark II Mice from the Mouse House

Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975.

Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse. Instead of a ball, it had two wheels rotating at off axes. Keytronic later produced a similar product.

Modern computer mice took form at the École polytechnique fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s. In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design. Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent", though optical mice from Mouse Systems had incorporated microprocessors by 1984.

Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug-compatible with an analog joystick. The "Color Mouse", originally marketed by Radio Shack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input) was the best-known example.

Optical and Laser mice

A wireless optical mouse on a mouse pad

Optical mice make use of one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, rather than the internal moving parts used by a mechanical mouse. A laser mouse is an optical mouse that uses coherent (laser) light.

Inertial and gyroscopic mice

Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm". Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use. In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture.

3D mice

Also known as bats, flying mice, or wands, these devices generally function through ultrasound and provide at least three degrees of freedom. Probably the best known example would be 3DConnexion/Logitech's SpaceMouse from the early 1990s.

In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station. Despite a certain appeal, it was finally discontinued because it did not provide sufficient resolution.

A recent consumer 3D pointing device is the Wii Remote. While primarily a motion-sensing device (that is, it can determine its orientation and direction of movement), the Wii Remote can also detect its spatial position by comparing the distance and position of the lights from the IR emitter using its integrated IR camera (since the Nunchuk accessory lacks a camera, it can only report its current heading and orientation). The obvious drawback of this approach is that it can only produce spatial coordinates while its camera can see the sensor bar.

A mouse-related controller called the SpaceBall™ has a ball placed above the work surface that can easily be gripped. With spring-loaded centering, it sends both translational and angular displacements on all six axes, in both directions for each.

In November 2010 a German company called Axsotic introduced a new type of 3D mouse called the 3D Spheric Mouse. This new concept for a true six-degree-of-freedom input device uses a ball that can rotate about three axes without any limitation.

Tactile mice

In 2000, Logitech introduced the "tactile mouse", which contained a small actuator that made the mouse vibrate. Such a mouse can augment user-interfaces with haptic feedback, such as giving feedback when crossing a window boundary. To surf by touch requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice but never marketed.