The MDL Times - Science and Tech. News on MDL

Discussion in 'Serious Discussion' started by kldpdas, Jun 30, 2011.

  1. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Hottest temps ever at LHC, and more hints about early Universe

    [h=1]An odd asymmetry that may explain why there's more matter than antimatter.[/h]by Matthew Francis



    Image: Brookhaven National Laboratory
    Washington, DC—This week is the Quark Matter 2012 (QM2012) conference—the preeminent meeting for those studying high-energy collisions between heavy ions. I attended a number of talks on Monday, August 13, during which researchers announced major new results from the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and the Large Hadron Collider (LHC) at CERN. The conference offered fresh insights on the transition between ordinary matter and the soup of quarks that existed in the early Universe—including a tantalizing hint about why the modern cosmos has more matter than antimatter.

    We recently ran a detailed review of heavy-ion physics; here's an executive summary. Heavy nuclei (lead at the LHC; gold, copper, and uranium at RHIC) are completely stripped of electrons, leaving massive, positively charged ions. These are accelerated to well over 99 percent of the speed of light and smashed into each other. If the energy is sufficiently high, the protons and neutrons in the nuclei "melt" into their constituent quarks and gluons. The result is a substance known as the quark-gluon plasma (QGP), which theory predicts existed during the first 10 microseconds after the Big Bang.
    While the hunt for the Higgs boson has dominated press coverage of the LHC, the collider also performs heavy ion experiments using lead (Pb+Pb). In addition to the ATLAS and CMS detectors, which are used both for proton-proton and heavy ion collisions, the LHC has a dedicated heavy ion detector named ALICE (A Large Ion Collider Experiment, pronounced "ahLEES"). The two active detectors at RHIC are PHENIX (Pioneering High Energy Nuclear Interaction eXperiment) and STAR (Solenoidal Tracker at RHIC). These study the products of collisions between gold ions (Au+Au); in the most recent experiments, researchers have added asymmetric gold-copper collisions (Au+Cu) and uranium (U+U). The two major colliders are complementary in many respects: the LHC has a larger temperature range and can reach lower density, while RHIC is able to explore much higher baryon densities.
    [h=2]To form a more perfect fluid[/h]The detectors at RHIC and the LHC measure the particles produced within the QGP, as well as those formed in the first stages of the collision that pass through the plasma. Many of these particles are hadrons (collections of quarks), but photons, electrons, and muons are also produced, some from the decay of exotic species containing strange and charm quarks.
    The distribution of the collision products reveals a lot about the interaction between the ions when they collide—including their positions within the region of overlap. Just as Fourier analysis reveals the harmonics contained in a musical note, researchers use it to determine the shape of the distribution of particles coming out of the QGP. This in turn reveals a lot about the plasma itself: if it had high viscosity, the deformations would be damped out, just as a stiff mixture resists carrying waves over long distances. Because the deformations are carried through, the QGP must have low viscosity.
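    The Fourier-decomposition idea can be sketched in a few lines. The toy below (all values illustrative, not taken from the experiments) injects a pure second-harmonic "elliptic flow" modulation into a simulated azimuthal-angle distribution and then recovers its coefficient, assuming for simplicity that the event plane lies at phi = 0:

```python
import math
import random

def flow_coefficients(phis, nmax=4):
    """Estimate Fourier coefficients v_n = <cos(n*phi)> of an azimuthal
    particle distribution. The event plane is assumed to lie at phi = 0
    (a simplification; the real analysis reconstructs it per event)."""
    n_particles = len(phis)
    return [sum(math.cos(n * p) for p in phis) / n_particles
            for n in range(1, nmax + 1)]

# Toy "event": sample angles from dN/dphi proportional to
# 1 + 2*v2*cos(2*phi), a pure elliptic-flow signal with v2 = 0.10,
# via simple rejection sampling.
random.seed(0)
v2_true = 0.10
phis = []
while len(phis) < 200_000:
    p = random.uniform(-math.pi, math.pi)
    if random.uniform(0.0, 1.0 + 2.0 * v2_true) < 1.0 + 2.0 * v2_true * math.cos(2.0 * p):
        phis.append(p)

v1, v2, v3, v4 = flow_coefficients(phis)
print(f"recovered v2 = {v2:.3f}")  # close to the injected 0.10
```

    In the real analyses the harmonic coefficients are measured relative to a reconstructed event plane; the fact that these deformations survive passage through the plasma is the evidence for low viscosity described above.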
    The "melting" analogy I used above appears to be appropriate for talking about the transition from stable nuclei to the QGP. While we are used to thinking of plasmas as gaseous (as they are in stars), the QGP is actually a liquid that flows with nearly zero viscosity. Viscosity is resistance to flow: water has relatively low viscosity, but the QGP has much less. Theoretical models predict that viscosity can never be exactly zero due to quantum fluctuations, but as Jürgen Schukraft of CERN described it, the QGP's viscosity is very close to that predicted minimum.
    Several speakers emphasized how new this result is: three years ago, nobody suspected the deformations could be studied in such detail, but today they are some of the most powerful data coming from the detectors.
    The phase diagram of QCD matter, showing the possible states quarks have. Ordinary matter lies at the bottom left corner, where temperature and density of matter are both relatively low. RHIC and LHC explore much higher energies; currently RHIC researchers are hunting for the possible existence of a critical point, where stable hadrons may mingle with the quark-gluon plasma.
    Brookhaven National Laboratory
    Much of the recent work at RHIC and the LHC is an attempt to map the phase diagram of the dense, hot matter formed in these collisions (called "QCD matter" in reference to quantum chromodynamics, which describes its behavior). The people running the accelerators are trying to map the transitions between the QGP, ordinary matter, and other phases. Ordinary phases of matter include solid, liquid, and gas; physicists add many more, such as various magnetic and superconducting phases.

    Water provides an analogous situation to QCD matter: at low temperatures and moderate pressures, it exists as ice, while at higher temperatures it may melt into liquid or boil. At still higher temperatures and pressures, water reaches a critical point: a regime where it is a fluid, but there is no longer a distinction between the liquid and vapor states.
    For QCD matter, the relevant quantities are temperature (a proxy for energy) and density of matter. At low temperatures and high density, the result is stable hadrons. At higher temperatures, hadrons melt into the QGP. (At relatively low temperature but extremely high densities, matter forms yet another phase we don't understand very well: the substance that comprises neutron stars.)
    Experiments at RHIC have begun probing the phase transition between stable hadrons and the QGP—and looking for a possible critical point. While this critical point hasn't been found yet, there are good theoretical and experimental reasons to think it exists. At QM2012, Steve Vigdor (associate laboratory director for nuclear and particle physics at Brookhaven) pointed out that STAR has seen signs of a transition between QGP and stable hadrons, where the strange quark production seen at higher energies dropped precipitously.
    [h=2]Golden hints about the early Universe[/h]Earlier low-energy RHIC experiments using gold (Au+Au) found electric polarization—a small separation of positive and negative charges—in the overlap region where the ions collided. This effect was potentially worrying, since quantum chromodynamics requires chiral (or mirror) symmetry: there should be no inherent left- or right-handedness resulting from strong force interactions. However, QCD does allow violations of this symmetry at higher temperatures, such as those in the QGP, as long as they average out over all collisions. These violations may have played a role in the early Universe, when more matter was produced than antimatter, according to BNL's Vigdor.
    Au+Au collisions result in strong magnetic fields, which complicate analysis. The uranium-uranium collisions have effectively ruled out the possibility that the charge separation is due to the specific shape of the overlap region in the Au+Au interaction. This test was possible because uranium nuclei are highly non-spherical, having nearly the shape of an American football. RHIC researchers collided uranium ions both along their long sides and their narrow ends. The end-to-end collisions resulted in very high energy density, said Stony Brook physicist Barbara Jacak, while the side-to-side collisions produced charge separation without magnetic field effects.
    Similarly, the asymmetric gold-copper collisions may hold clues to whether the symmetry violation is a fundamental result, or an artifact of the experimental setup. However, it's too early to draw strong conclusions (Jacak told me the Au+Cu collision results are only a few weeks old), so further analysis will need to happen before we can definitively say if the charge separation has anything important to say about QCD or the QGP.
    [h=2]More exotic quark matter[/h]With much higher energies available at LHC, researchers are using it to examine a high-temperature region of the QGP: the strongly correlated QGP (sQGP). ALICE found that particles passing through the sQGP experienced a great deal of energy loss, suggesting the interactions within it are different from those present at lower energies. The sQGP is still poorly understood from a theoretical point of view, according to CERN theorist Urs Achim Wiedemann.
    Analogous systems exist in condensed matter physics (at very low temperatures), but the particle-like excitations that drive so much of the interesting behavior in materials don't seem to exist in the QGP. However, both the theoretical and experimental studies of the sQGP are in very early stages.
    To give a sense of how preliminary they are: CERN announced yesterday that the LHC has achieved the highest human-made temperature yet, but hasn't been able to determine exactly what that temperature is. It's about 38 percent higher than the previous record from RHIC, which was about 4 trillion degrees Celsius. Such high energies should help clarify the structure of the sQGP.
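    For scale, the two figures quoted above can be combined directly (simple arithmetic on the article's numbers, not a value announced by CERN):

```python
# Arithmetic check on the figures quoted in the article.
rhic_record_C = 4e12                    # previous RHIC record, ~4 trillion deg C
lhc_estimate_C = rhic_record_C * 1.38   # "about 38 percent larger"
print(f"LHC estimate: ~{lhc_estimate_C / 1e12:.1f} trillion degrees Celsius")
```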
    Listing image by Brookhaven National Laboratory

    SOURCE
     
  2. R29k

    The Star That Should Not Exist

    31 August 2011
    A team of European astronomers has used ESO’s Very Large Telescope (VLT) to track down a star in the Milky Way that many thought was impossible. They discovered that this star is composed almost entirely of hydrogen and helium, with only remarkably small amounts of other chemical elements in it. This intriguing composition places it in the “forbidden zone” of a widely accepted theory of star formation, meaning that it should never have come into existence in the first place. The results will appear in the 1 September 2011 issue of the journal Nature.
    A faint star in the constellation of Leo (The Lion), called SDSS J102915+172927 [1], has been found to have the lowest amount of elements heavier than helium (what astronomers call “metals”) of all stars yet studied. It has a mass smaller than that of the Sun and is probably more than 13 billion years old.
    “A widely accepted theory predicts that stars like this, with low mass and extremely low quantities of metals, shouldn’t exist because the clouds of material from which they formed could never have condensed,” [2] said Elisabetta Caffau (Zentrum für Astronomie der Universität Heidelberg, Germany and Observatoire de Paris, France), lead author of the paper. “It was surprising to find, for the first time, a star in this ‘forbidden zone’, and it means we may have to revisit some of the star formation models.”
    The team analysed the properties of the star using the X-shooter and UVES instruments on the VLT [3]. This allowed them to measure how abundant the various chemical elements were in the star. They found that the proportion of metals in SDSS J102915+172927 is more than 20 000 times smaller than that of the Sun [4][5].
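    Astronomers usually express such abundance ratios on a logarithmic scale, [M/H], the base-10 log of the star's metal fraction relative to the Sun's. A quick sketch of what "more than 20 000 times smaller" means on that scale (this is just the conversion, not the paper's exact measured value):

```python
import math

# [M/H] is the base-10 log of the star-to-Sun metal ratio.
ratio = 1 / 20_000          # metals more than 20,000 times scarcer than solar
metallicity = math.log10(ratio)
print(f"[M/H] < {metallicity:.1f}")   # i.e. below about -4.3
```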
    “The star is faint, and so metal-poor that we could only detect the signature of one element heavier than helium — calcium — in our first observations,” said Piercarlo Bonifacio (Observatoire de Paris, France), who supervised the project. “We had to ask for additional telescope time from ESO’s Director General to study the star’s light in even more detail, and with a long exposure time, to try to find other metals.”
    Cosmologists believe that the lightest chemical elements — hydrogen and helium — were created shortly after the Big Bang, together with some lithium [6], while almost all other elements were formed later in stars. Supernova explosions spread the stellar material into the interstellar medium, making it richer in metals. New stars form from this enriched medium so they have higher amounts of metals in their composition than the older stars. Therefore, the proportion of metals in a star tells us how old it is.
    “The star we have studied is extremely metal-poor, meaning it is very primitive. It could be one of the oldest stars ever found,” adds Lorenzo Monaco (ESO, Chile), also involved in the study.
    Also very surprising was the lack of lithium in SDSS J102915+172927. Such an old star should have a composition similar to that of the Universe shortly after the Big Bang, with a few more metals in it. But the team found that the proportion of lithium in the star was at least fifty times less than expected in the material produced by the Big Bang.
    “It is a mystery how the lithium that formed just after the beginning of the Universe was destroyed in this star,” Bonifacio added.
    The researchers also point out that this freakish star is probably not unique. “We have identified several more candidate stars that might have metal levels similar to, or even lower than, those in SDSS J102915+172927. We are now planning to observe them with the VLT to see if this is the case,” concludes Caffau.
    [h=3]Notes[/h][1] The star is catalogued in the Sloan Digital Sky Survey or SDSS. The numbers refer to the object’s position in the sky.
    [2] Widely accepted star formation theories state that stars with a mass as low as SDSS J102915+172927 (about 0.8 solar masses or less) could only have formed after supernova explosions enriched the interstellar medium above a critical value. This is because the heavier elements act as “cooling agents”, helping to radiate away the heat of gas clouds in this medium, which can then collapse to form stars. Without these metals, the pressure due to heating would be too strong, and the gravity of the cloud would be too weak to overcome it and make the cloud collapse. One theory in particular identifies carbon and oxygen as the main cooling agents, and in SDSS J102915+172927 the amount of carbon is lower than the minimum deemed necessary for this cooling to be effective.
    [3] X-shooter and UVES are VLT spectrographs — instruments used to separate the light from celestial objects into its component colours and allow detailed analysis of the chemical composition. X-shooter can capture a very wide range of wavelengths in the spectrum of an object in one shot (from the ultraviolet to the near-infrared). UVES is the Ultraviolet and Visual Echelle Spectrograph, a high-resolution optical instrument.
    [4] The star HE 1327-2326, discovered in 2005, has the lowest known iron abundance, but it is rich in carbon. The star now analysed has the lowest proportion of metals when all chemical elements heavier than helium are considered.
    [5] ESO telescopes have been deeply involved in many of the discoveries of the most metal-poor stars. Some of the earlier results were reported in eso0228 and eso0723 and the new discovery shows that observations with ESO telescopes have let astronomers make a further step closer to finding the first generation of stars.
    [6] Primordial nucleosynthesis refers to the production of chemical elements with more than one proton a few moments after the Big Bang. This production happened in a very short time, allowing only hydrogen, helium and lithium to form, but no heavier elements. The Big Bang theory predicts, and observations confirm, that the primordial matter was composed of about 75% (by mass) of hydrogen, 25% of helium, and trace amounts of lithium.
    [h=3]More information[/h]This research was presented in a paper, “An extremely primitive halo star“, by Caffau et al. to appear in the 1 September 2011 issue of the journal Nature.
    The team is composed of Elisabetta Caffau (Zentrum für Astronomie der Universität Heidelberg [ZAH], Germany and GEPI — Observatoire de Paris, Université Paris Diderot, CNRS, France [GEPI]), Piercarlo Bonifacio (GEPI), Patrick François (GEPI and Université de Picardie Jules Verne, Amiens, France), Luca Sbordone (ZAH, Max-Planck Institut für Astrophysik, Garching, Germany, and GEPI), Lorenzo Monaco (ESO, Chile), Monique Spite (GEPI), François Spite (GEPI), Hans-G. Ludwig (ZAH and GEPI), Roger Cayrel (GEPI), Simone Zaggia (INAF, Osservatorio Astronomico di Padova, Italy), François Hammer (GEPI), Sofia Randich (INAF, Osservatorio Astrofisico di Arcetri, Firenze, Italy), Paolo Molaro (INAF, Osservatorio Astronomico di Trieste, Italy), and Vanessa Hill (Université de Nice-Sophia Antipolis, Observatoire de la Côte d’Azur, CNRS, Laboratoire Cassiopée, Nice, France).
    ESO, the European Southern Observatory, is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive astronomical observatory. It is supported by 15 countries: Austria, Belgium, Brazil, the Czech Republic, Denmark, France, Finland, Germany, Italy, the Netherlands, Portugal, Spain, Sweden, Switzerland and the United Kingdom. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope, the world’s most advanced visible-light astronomical observatory and two survey telescopes. VISTA works in the infrared and is the world’s largest survey telescope and the VLT Survey Telescope is the largest telescope designed to exclusively survey the skies in visible light. ESO is the European partner of a revolutionary astronomical telescope ALMA, the largest astronomical project in existence. ESO is currently planning a 40-metre-class European Extremely Large optical/near-infrared Telescope, the E-ELT, which will become “the world’s biggest eye on the sky”.
    [h=3]Contacts[/h]Dr Elisabetta Caffau
    Zentrum für Astronomie der Universität Heidelberg / Observatoire de Paris, Université Paris Diderot, CNRS
    Heidelberg / Paris, Germany / France
    Tel: +49 6221 54 1787 or +33 1 4507 7873
    Email: Elisabetta.Caffau@obspm.fr

    Dr Piercarlo Bonifacio
    Observatoire de Paris, Université Paris Diderot, CNRS
    Paris, France
    Tel: +33 1 4507 7998 or +33 1 4047 8031
    Cell: +33 645 380 509
    Email: Piercarlo.Bonifacio@obspm.fr

    Dr Lorenzo Monaco
    ESO
    Santiago, Chile
    Tel: +56 2 463 3022
    Email: lmonaco@eso.org

    Richard Hook
    ESO, La Silla, Paranal, E-ELT and Survey Telescopes Public Information Officer
    Garching bei München, Germany
    Tel: +49 89 3200 6655
    Email: rhook@eso.org

    SOURCE
     
  3. R29k

    Trogloraptor Bigfoot Spider Found In Caves In The Pacific Northwest

    Image Caption: This is a male Trogloraptor photographed in the lab. Credit: Griswold CE, Audisio T, Ledford JM
    Michael Harper for redOrbit.com – Your Universe Online
    Just when you thought it was safe to go stomping ‘round in the caves of Oregon, a new family of spider has been discovered, complete with frighteningly large legs and a terrifying name to boot. Meet: Trogloraptor, the arachnid which has already been called the “Spider Version of Bigfoot.”
    The Pacific Northwest is often a place of beauty and exotic species, such as the coastal redwoods or the fabled Sasquatch, but now it may also be an area of sheer fright with the discovery of the “cave robber” spider. This relatively large spider got its name from the place in which it lives and the way researchers believe it feeds on its prey.
    “They live in caves, they make a few strands of silk from which they suspend themselves from the cave ceiling, and we think they simply hang their legs in the air, in the dark, and wait for prey to come by,” said Charles Griswold, Curator of Arachnology at the California Academy of Sciences in an interview with BBC News.
    Though these feeding patterns have never been officially observed and recorded, Griswold and his colleagues suspect that once the Trogloraptor has its prey within its reach, it then snatches it up with its “remarkable claws and feet.” These claws and feet, say Griswold, resemble switchblade-like knives or hooks, and are used to “snap and trap” their prey.
    This discovery is a historic one for the field of Arachnology: While there are many different species of spiders which exist in the world, this is the first time in 12 years that a new family of spiders has been called for. In 2000, a new family was needed to identify a newly discovered arachnid in South America, according to Griswold. As for North America, it’s been well over 100 years since a new family has been warranted to identify a new type of spider. Now, after 122 years of assuming we knew all we needed to know about creepy cave spiders, the Trogloraptoridae has been discovered and introduced into the world.
    A team of “citizen scientists” from the Western Cave Conservancy, together with arachnologists from the California Academy of Sciences and San Diego State University, are credited with finding this new family of spider in the caves of southwestern Oregon, though these spiders have also been found elsewhere in the western United States. Researchers from San Diego State University found even more of these spiders in old-growth redwood forests.
    Together with Griswold, postdoctoral researcher Joel Ledford and graduate student Tracy Audisio, each with the California Academy of Sciences, collected, analyzed and described the new Trogloraptoridae family.
    With its half-dollar size and sharp, raptor-like claws, these scientists believe the Trogloraptor may have evolved in a very different way than most spiders. According to these scientists, the way the Trogloraptor is built suggests that it is a highly refined and specialized predator. In fact, these arachnologists suggest this new family of spider could have been considered part of the Goblin family of spiders. The new family’s set of ancient and evolutionary novelties, however, are what prompted the scientists to give this fierce, predatory spider a brand new family name.
    That this spider has eluded generations upon generations of scientists and researchers is not lost on these arachnology teams. As such, they are still unclear as to how these spiders are distributed across the Pacific Northwest. Currently, they suspect more of these spiders could be lurking in the darkened caves of North America.
    A study of the new family and its evolutionary and conservation significance is published in the open access journal ZooKeys.
    Image 2 (below): These are the remarkable, raptor-like claws of Trogloraptor. Credit: Griswold CE, Audisio T, Ledford JM


    Source: Michael Harper for redOrbit.com – Your Universe Online

    SOURCE

     
  4. R29k

    Have Three Little Photons Broken Theoretical Physics?

    by Natalie Wolchover, Life's Little Mysteries Staff Writer | 31 August 2012, 10:32 AM ET

    Seven billion years ago, three cosmic travelers set out together on an epic journey to Earth. They just arrived, and the trio has a surprising tale to tell about the structure of the universe. Their story could overturn decades of work by theoretical physicists.

    But first, an introduction: Scientists have long wondered about the nature of space and time. Albert Einstein envisioned the two concepts as an interwoven fabric that extends smoothly and continuously throughout the universe, warping under the weight of the matter it contains. The smoothness of this stretchy "space-time" fabric means that no matter how closely one inspects it, no underlying structure emerges. The fabric is completely pure even at infinitesimal scales.
    The snag in this picture of a space-time fabric is that it doesn't jibe with quantum mechanics, the set of laws describing the bizarre behavior of subatomic particles. To explain gravitational interactions between planets and stars, Einstein's theory works beautifully; but try to describe quarks or electrons zipping about on a fabric with no elemental structure, and the equations turn to nonsense.

    Modern "theories of everything" try to reconcile Einstein's big picture view of the universe, built of space-time, with the small-scale picture of the universe described by quantum mechanics. Most of these theories, collectively called "quantum gravity," posit that space-time must not be smooth after all, but must instead be composed of discrete, invisibly small building blocks — sort of like 3D pixels, or what scientists have dubbed a "foam."
    But real or not, such space-time pixels seemed to be permanently out of human reach. For reasons having to do with the uncertainty that exists in the locations of particles, theories suggest the pixels should measure the size of the "Planck length," or about a billionth of a billionth of the diameter of an electron. With the key evidence for quantum gravity buried at such an inaccessible scale, physicists were at a loss for how to confirm or refute their ideas.
    Then, a paper published 15 years ago in the journal Nature proposed an ingenious method of detecting space-time pixels. Giovanni Amelino-Camelia, a theoretical physicist at Sapienza University in Rome, and colleagues said the building blocks of space-time could be discovered indirectly by observing the way light of different colors disperses as it travels through the pixels on its journey across the universe, just as light spreads into its component wavelengths when it passes through the crystalline structure of a prism. As long as one is sure all the photons, or particles of light, left their source at exactly the same time, measuring how much photons of different wavelengths spread out during their commute to Earth would reveal the presence, and size, of the pixels they passed through.
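    The size of the proposed effect is easy to estimate under the simplest ("linear") dispersion model, in which the time lag grows as delta_t ~ (delta_E / E_Planck) * (D / c). The numbers below are illustrative assumptions for a back-of-the-envelope sketch, not values from the Nature paper:

```python
# Back-of-the-envelope lag for linear Planck-scale dispersion:
#   delta_t ~ (delta_E / E_Planck) * (D / c)
E_PLANCK_GEV = 1.22e19       # Planck energy expressed in GeV
SECONDS_PER_YEAR = 3.156e7

def linear_lag_seconds(delta_E_GeV, distance_ly):
    travel_time_s = distance_ly * SECONDS_PER_YEAR  # light travel time D/c
    return (delta_E_GeV / E_PLANCK_GEV) * travel_time_s

# Photons differing by ~10 GeV in energy after a 7-billion-light-year
# journey (illustrative values) would drift apart by a macroscopic amount:
lag = linear_lag_seconds(10.0, 7e9)
print(f"expected lag ~ {lag:.2f} seconds")
```

    A lag of a few tenths of a second dwarfs a millisecond coincidence over such a distance, which is why near-simultaneous arrival is a meaningful constraint on this class of models.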
    Such studies hadn't been feasible, until now.
    "Very few of us were suggesting that the structure of space-time could be detected, and now 15 years later facts are proving us right," Amelino-Camelia told Life's Little Mysteries, a partner site to SPACE.com. [Top 10 Strangest Things in Space]
    Burst of light
    Seven billion years ago, 7 billion light-years away, a gamma-ray burst sent a blitz of photons tearing into space. Some of them headed for Earth.
    Gamma-ray bursts occur when an extremely massive, rotating star collapses in on itself, unleashing in less than a minute as much energy as our sun will radiate in its entire 10-billion-year lifetime. These shockwaves of gamma rays and other energetic photons are the brightest events in the universe. When gamma ray bursts have occurred in the Milky Way galaxy, scientists speculate that they might have altered Earth's climate and induced mass extinctions. Thankfully, the bursts are so rare that they typically occur a safe distance away — far enough that only a light mist of photons reaches our planet. NASA's Fermi Gamma-ray Space Telescope was launched into orbit in 2008 to scan the skies for these mists of shockwaves past.
    Robert Nemiroff, an astrophysicist at Michigan Technological University, and colleagues recently took a look at data from a gamma-ray burst detected by the Fermi telescope in May 2009.
    "Originally we were looking for something else, but were struck when two of the highest energy photons from this detected gamma-ray burst appeared within a single millisecond," Nemiroff told Life's Little Mysteries. When the physicists looked at the data more closely, they found a third gamma ray photon within a millisecond of the other two.
    Computer models showed it was very unlikely that the photons would have been emitted by different gamma ray bursts, or the same burst at different times. Consequently, "it seemed very likely to us that these three photons traveled across much of the universe together without dispersing," Nemiroff said. Despite having slightly different energies (and thus, different wavelengths), the three photons stayed in extremely close company for the duration of their marathon trek to Earth.
    Many things — e.g. stars, interstellar dust — could have dispersed the photons. "But nothing that we know can un-disperse gamma-ray photons," Nemiroff said. "So we then conclude that these photons were not dispersed. So if they were not dispersed, then the universe left them alone. So if the universe was made of Planck-scale quantum foam, according to some theories, it would not have left these photons alone. So those types of Planck-scale quantum foams don't exist."
    In other words, the photons' near-simultaneous arrival indicates that space-time is smooth as Einstein suggested, rather than pixelated as modern theories require — at least down to slightly below the scale of the Planck length, a smaller scale than has ever been probed previously. The finding "comes close to proving [that space-time is smooth] for some range of parameters," Nemiroff said.
    The finding, published in June in the journal Physical Review Letters, threatens to set theoretical physicists back several decades by scrapping a whole class of theories that attempt to reconcile Einstein's theory with quantum mechanics. But not everyone is ready to jettison quantum gravity. [Top 3 Questions People Ask an Astrophysicist (and Answers)]
    Other effects?
    "The analysis Nemiroff et al. are reporting is very nice and a striking confirmation that these studies of Planck-scale structure of space-time can be done, as some of us suggested long ago," said Amelino-Camelia, an originator of the idea that gamma rays could reveal the building blocks of space-time. "But the claim that their analysis is proving that space-time is 'smooth with Planck-scale accuracy' is rather naive."
    To prove that Planck-scale pixels don't exist, the researchers would have to rule out the possibility that the pixels dispersed the photons in ways that don't depend in a straightforward way on the photons' wavelengths, he said. The pixels could exert more subtle "quadratic" influences, for example, or could have an effect called birefringence that depends on the polarization of the light particles. Nemiroff and his colleagues would have to rule out those and other possibilities. To prove the photon trio wasn't a fluke, the results would then require independent confirmation; a second set of simultaneous gamma-ray photons with properties similar to the first must be observed.
    If all this is accomplished, Amelino-Camelia said, "at least for some approaches to the quantum-gravity problem, it will indeed be a case of going back to the drawing board."
    This story was provided by Life's Little Mysteries, a sister site to SPACE.com. Follow Natalie Wolchover on Twitter @nattyover or Life's Little Mysteries @llmysteries. We're also on Facebook & Google+.

    SOURCE
     
  5. R29k

    Elusive Dark Energy Is Real, Study Says

    Dark energy, the mysterious substance thought to be accelerating the expansion of the universe, almost certainly exists despite some astronomers' doubts, a new study says.
    After a two-year study, an international team of researchers concludes that the probability of dark energy being real stands at 99.996 percent. But the scientists still don't know what the stuff is.
    "Dark energy is one of the great scientific mysteries of our time, so it isn’t surprising that so many researchers question its existence," co-author Bob Nichol, of the University of Portsmouth in Engalnd, said in a statement. "But with our new work we’re more confident than ever that this exotic component of the universe is real — even if we still have no idea what it consists of."
    The roots of dark energy
    Scientists have known since the 1920s that the universe is expanding. Most assumed that gravity would slow this expansion gradually, or even cause the universe to begin contracting one day. [8 Baffling Astronomy Mysteries]
    But in 1998, two separate teams of researchers discovered that the universe's expansion is actually speeding up. In the wake of this shocking find — which earned three of the discoverers the Nobel Prize in Physics in 2011 — researchers proposed the existence of dark energy, an enigmatic force pushing the cosmos apart.
    Dark energy is thought to make up 73 percent of the universe, though no one can say exactly what it is. (Twenty-three percent of the universe is similarly strange dark matter, scientists say, while the remaining 4 percent is "normal" matter that we can see and feel.)
    Still, not all astronomers are convinced that dark energy is real, and many have been trying to confirm its existence for the past decade or so.
    Hunting for dark energy
    One of the best lines of evidence for the existence of dark energy comes from something called the Integrated Sachs Wolfe effect, researchers said.
    In 1967, astronomers Rainer Sachs and Arthur Wolfe proposed that light from the cosmic microwave background (CMB) radiation — the thermal imprint left by the Big Bang that created our universe — should become slightly bluer as it passes through the gravitational fields of lumps of matter.
    Three decades later, other researchers ran with the idea, suggesting astronomers could look for these small changes in the light's energy by comparing the temperature of the distant CMB radiation with maps of nearby galaxies.
    If dark energy doesn't exist, there should be no correspondence between the two maps. But if dark energy is real, then, strangely, the CMB light should be seen to gain energy as it moves through large lumps of mass, researchers said.
    This latter scenario is known as the Integrated Sachs Wolfe effect, and it was first detected in 2003. However, the signal is relatively weak, and some astronomers have questioned if it's really strong evidence for dark energy after all.
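The logic of that detection can be sketched with toy maps: if the CMB carries a small ISW imprint of the large-scale structure that galaxies trace, the two maps should be weakly correlated. This is purely illustrative random data, not the survey pipeline the researchers actually used:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # sky pixels in a toy 1-D "map" (not a real CMB analysis)

# Toy model: galaxy density traces large-scale structure; under the ISW
# effect, the CMB temperature picks up a small component correlated with it.
structure = rng.normal(size=n)
galaxy_map = structure + 0.5 * rng.normal(size=n)   # structure plus survey noise
cmb_no_isw = rng.normal(size=n)                     # primordial CMB only
cmb_with_isw = cmb_no_isw + 0.1 * structure         # small ISW imprint added

def cross_corr(a, b):
    """Pearson correlation coefficient between two maps."""
    return float(np.corrcoef(a, b)[0, 1])

print(cross_corr(galaxy_map, cmb_no_isw))    # consistent with zero
print(cross_corr(galaxy_map, cmb_with_isw))  # small but clearly nonzero
```

The real analysis must also rule out other sources of correlation, which is exactly where the disputed systematics enter.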
    Re-examining the data
    In the new study, the researchers re-examine the arguments against the Integrated Sachs Wolfe detection, and they update the maps used in the original work.
    In the end, the team determined that there is a 99.996 percent chance that dark energy is responsible for the hotter parts of the CMB maps, researchers said.
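As a rough translation of that 99.996 percent figure into the "sigma" language physicists usually use (our own conversion for context, not one quoted in the study), the standard library is enough:

```python
from statistics import NormalDist

p = 0.99996  # the study's quoted confidence that dark energy is real

# Two-sided Gaussian equivalent: find z such that P(|X| < z) = p
z = NormalDist().inv_cdf((1 + p) / 2)
print(f"{p:.3%} corresponds to about {z:.1f} sigma")  # ~4.1 sigma
```

That is a strong detection, though short of the 5-sigma threshold particle physicists demand for a discovery.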
    "This work also tells us about possible modifications to Einstein’s theory of general relativity," said lead author Tommaso Giannantonio, of Ludwig-Maximilian University of Munich in Germany.
    "The next generation of cosmic microwave background and galaxy surveys should provide the definitive measurement, either confirming general relativity, including dark energy, or even more intriguingly, demanding a completely new understanding of how gravity works," Giannantonio added.
    The team's findings have been published in the journal Monthly Notices of the Royal Astronomical Society.
    SOURCE

    Dark Energy Mystery Illuminated By Cosmic Lens

    By peering at the distant reaches of the universe through a galactic magnifying lens, astronomers may have found a way to better understand mysterious dark energy, which is thought to be speeding up the expansion of the cosmos.
    Though scientists don't know what dark energy is, nor have they proven definitively that it exists, they think it is the force causing galaxies to stray away from each other at an ever-quickening pace. Dark energy is the name given to whatever stuff is permeating the universe and causing this surprising accelerated expansion.
    In the new study, astronomers used a massive galaxy cluster called Abell 1689 as a giant cosmic lens to study how mass warps space and time around it. When light from even more distant galaxies passes near the cluster on its way to our telescopes on Earth, the light appears magnified and distorted because of this effect. [Photo of the cosmic lens around Abell 1689]
    The researchers examined 34 pictures of these far-away galaxies, taken by the Hubble Space Telescope and ground-based observatories, to study the geometry of space-time. This property is thought to be influenced by dark energy, which makes up about 72 percent of all the mass and energy in the universe, scientists think.
    "The geometry, the content and the fate of the universe are all intricately linked," said researcher Priyamvada Natarajan of Yale University in a statement. "If you know two, you can deduce the third. We already have a pretty good knowledge of the universe's mass-energy content, so if we can get a handle on its geometry then we will be able to work out exactly what the fate of the universe will be."
    The researchers combined their measurements of the bent light (a phenomenon known as gravitational lensing) with previous calculations of the universe's geometry based on observing supernovas, galaxy clusters and other heavenly objects. Together, these clues helped narrow down estimates of dark energy's properties.
    "Using our unique method in conjunction with others, we were able to come up with results that were far more precise than any achieved before," said co-researcher Jean-Paul Kneib of the Laboratoire d'Astrophysique de Marseille in France.
    Ultimately, the researchers were able to refine estimates for dark energy's so-called equation-of-state parameter, called w, which relates to how dark energy shapes the universe. They were able to reduce the uncertainty in this value by about 30 percent.
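The parameter w matters because it controls how a component's energy density scales with the cosmic scale factor a, via the standard relation rho ∝ a^(-3(1+w)). A quick sketch of that textbook relation (not a calculation from the paper):

```python
def density_scaling(a, w):
    """Energy density relative to today (a = 1) for equation of state w:
    rho(a) / rho0 = a ** (-3 * (1 + w))."""
    return a ** (-3 * (1 + w))

# Matter (w = 0) dilutes with volume as the universe expands; a pure
# cosmological constant (w = -1) keeps exactly constant density.
for w in (0.0, -1.0, -0.9):
    print(f"w = {w:+.1f}: density at a=2 is {density_scaling(2.0, w):.3f} of today's")
```

Pinning down w, and whether it differs from -1, is what distinguishes a cosmological constant from more exotic dark-energy models.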
    The new findings are detailed in a paper published in the August 20 issue of the journal Science.
    SOURCE
     
  6. Myrrh

    Myrrh MDL Expert

    Nov 26, 2008
    1,496
    596
    60
    Damn. Sheldon Cooper is gonna be pissed.
     
  7. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    W3C announces plan to deliver HTML 5 by 2014, HTML 5.1 in 2016

    [h=2]Breaking the spec up into smaller pieces will allow swifter standardization.[/h]by Peter Bright - Sept 20 2012, 8:05pm SAWST

    The World Wide Web Consortium (W3C), the group that manages development of the main specifications used by the Web, has proposed a new plan that would see the HTML 5 spec positioned as a Recommendation—which in W3C's lingo represents a complete, finished standard—by the end of 2014. The group plans a follow-up, HTML 5.1, for the end of 2016.
    Under the new plan, the HTML Working Group will produce an HTML 5.0 Candidate Recommendation by the end of 2012 that includes only those features that are specified, stable, and implemented in real browsers. Anything controversial or unstable will be excluded from this specification. The group will also remove anything known to have interoperability problems between existing implementations. This Candidate Recommendation will form the basis of the 5.0 specification.
    In tandem, a draft of HTML 5.1 will be developed. This will include everything from the HTML 5.0 Candidate Recommendation, plus all the unstable features that were excluded. In 2014, this will undergo a similar process. Anything unstable will be taken out, to produce the HTML 5.1 Candidate Recommendation, and an HTML 5.2 draft will emerge, with the unstable parts left in.
    This will then continue, for HTML 5.3, 5.4, and beyond.
    Previously, HTML 5 wasn't due to be completed until 2022 (yes—a decade from now). The Candidate Recommendation was due to be delivered around now, with much of the next ten years spent developing an extensive test suite to allow conformance testing of implementations. The new HTML 5.1 will be smaller as a number of technologies (such as Web Workers and WebSockets) were once under the HTML 5 umbrella but have now been broken out into separate specifications. It will also have less stringent testing requirements. Portions of the specification where interoperability has been demonstrated "in the wild" will not need new tests, and instead testing will focus on new features.
    HTML 5's standardization has been a fractious process, with many arguments and squabbles as different groups with different priorities struggled to find common ground. The new plan notes that the "negative tone of discussion has been an ongoing problem" and says that the Working Group will need to do better at combating anti-social behavior. The proposed plan was, however, not universally welcomed. Some Working Group members were unhappy with the proposed treatment of their particular areas of expertise.
    For Web developers, the impact of the new plan may be limited; developers are already used to working from draft specifications on a day-to-day basis. The most immediate consequence is that those pieces deemed stable enough for inclusion in version 5.0 should acquire a richer test suite. In turn, that will help browser developers track down (and, with luck, remedy) any remaining bugs and incompatibilities.

    SOURCE
     
  8. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Reading and writing quantum bits on a single electron spin

    by Matthew Francis

    [​IMG]LBL.gov
    The spin of an electron is in many ways the archetypical quantum system. If you measure the spin, it will take one of two values: spin up or spin down. It's a perfect binary, which makes it good for quantum computing systems, as well as in the nascent field of spintronics. However, implementing things based on spin has proven to be far more complicated: single electron spins in atoms interact with the environment so that they forget their original state. Measurement of the spin state brings its own difficulties.
    Researchers have now made some progress by manipulating the electronic spin state in a phosphorus atom embedded in silicon. Jarryd J. Pla and colleagues exploited the properties of both atom types to isolate the spin of a single electron in the phosphorus atom. At 0.3 kelvin (0.3°C above absolute zero), the spin state stayed stable for relatively long periods of time. While single electron spins are insufficient to build quantum devices, this experiment is a reasonable proof-of-principle, and shows how multi-spin systems could be developed.
    The spin of an atom is mostly determined by the electrons that orbit furthest from the nucleus, since they have a higher response to external stimuli (spin states of atomic nuclei are more stable but more difficult to work with, though another experiment made some progress along those lines). As a result, if the atom has the proper electronic configuration, the spin state of an atom is often the same as the spin state of a single electron.
    Phosphorus turns out to be useful in that sense, having a highly exposed electron, while silicon's electrons are less easily manipulated. (Phosphorus is sometimes used as a "dopant" atom in semiconductor devices, since it donates its electron to enhance conduction properties.) By embedding a phosphorus atom in silicon, the researchers used this contrast to isolate the properties of a single electron orbiting the atom. They massaged the electron spin into a particular state using microwaves to drive the system. By varying the microwave pulses, the researchers could cause the electron spin to flip in a controlled fashion.
    The entire system was kept at very low temperatures, and subjected to strong magnetic fields. The researchers measured several coherent oscillations of the spin orientation before interactions with the electrons in the silicon damped them out. They found the electron spin maintained its coherent behavior for as much as 0.2 milliseconds—short on human terms, but long enough for electronic devices.
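A toy model of such damped coherent oscillations: the spin-up probability oscillates at the Rabi frequency while decaying toward 1/2 on the coherence timescale. The 0.2 ms figure is from the article; the Rabi frequency below is an arbitrary illustrative choice, not a number from the paper:

```python
import math

T2 = 0.2e-3    # spin coherence time from the article, ~0.2 ms
F_RABI = 50e3  # assumed Rabi (spin-flip) frequency, illustrative only

def p_up(t):
    """Toy damped-Rabi model: spin-up probability decays toward 1/2 as
    coupling to the silicon environment destroys coherence."""
    return 0.5 * (1 + math.cos(2 * math.pi * F_RABI * t) * math.exp(-t / T2))

print(p_up(0.0))      # 1.0: fully polarized at the start
print(p_up(5 * T2))   # ~0.5: oscillations essentially damped out after several T2
```

Counting how many clean oscillations fit inside T2 is what determines how many quantum operations such a qubit could support.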
    One appealing aspect of this system is that it's silicon based, so it bears a strong resemblance to the common semiconductor devices that are the backbone of modern electronics. In other words, quantum spin devices could be constructed from the same materials as familiar transistors and logic circuits.
    The researchers also noted that a particular isotope of silicon, 28Si, has zero nuclear spin. If the substrate for the device contains a higher fraction of this isotope, earlier experiments have shown the phosphorus electron can maintain its spin coherence for longer than a second. They propose that future experiments use "enriched" silicon samples in spin control systems.
    Exciting as these results are, they comprise only one spin—one qubit. Real devices require multiple qubits, and the transfer of quantum states between them in a coherent manner. Based on its simplicity and the relative ease of control, the use of single phosphorus atoms seems as promising as any approach for spin-based quantum computing.
    Nature, 2012. DOI: 10.1038/nature11449 (About DOIs).

    SOURCE
     
  9. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Warp Drive Like That On 'Star Trek' May Be Feasible After All, Physicists Say

    By: Clara Moskowitz
    Published: 09/17/2012 12:13 PM EDT on SPACE.com

    HOUSTON — A warp drive to achieve faster-than-light travel — a concept popularized in television's Star Trek — may not be as unrealistic as once thought, scientists say.
    A warp drive would manipulate space-time itself to move a starship, taking advantage of a loophole in the laws of physics that prevent anything from moving faster than light. A concept for a real-life warp drive was suggested in 1994 by Mexican physicist Miguel Alcubierre; however, subsequent calculations found that such a device would require prohibitive amounts of energy.
    Now physicists say that adjustments can be made to the proposed warp drive that would enable it to run on significantly less energy, potentially bringing the idea back from the realm of science fiction into science.
    "There is hope," Harold "Sonny" White of NASA's Johnson Space Center said here Friday (Sept. 14) at the 100 Year Starship Symposium, a meeting to discuss the challenges of interstellar spaceflight.
    Warping space-time
    An Alcubierre warp drive would involve a football-shaped spacecraft attached to a large ring encircling it. This ring, potentially made of exotic matter, would cause space-time to warp around the starship, creating a region of contracted space in front of it and expanded space behind. [Star Trek's Warp Drive: Are We There Yet? | Video]
    Meanwhile, the starship itself would stay inside a bubble of flat space-time that wasn't being warped at all.
    "Everything within space is restricted by the speed of light," explained Richard Obousy, president of Icarus Interstellar, a non-profit group of scientists and engineers devoted to pursuing interstellar spaceflight. "But the really cool thing is space-time, the fabric of space, is not limited by the speed of light."
    With this concept, the spacecraft would be able to achieve an effective speed of about 10 times the speed of light, all without breaking the cosmic speed limit.
    The only problem is, previous studies estimated the warp drive would require a minimum amount of energy about equal to the mass-energy of the planet Jupiter.
    But recently White calculated what would happen if the shape of the ring encircling the spacecraft was adjusted into more of a rounded donut, as opposed to a flat ring. He found in that case, the warp drive could be powered by a mass about the size of a spacecraft like the Voyager 1 probe NASA launched in 1977.
    Furthermore, if the intensity of the space warps can be oscillated over time, the energy required is reduced even more, White found.
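To get a feel for the scale of that reduction, one can compare the rest-mass energies via E = mc². The Jupiter and Voyager masses below are standard reference values, not figures quoted at the talk:

```python
C = 299_792_458.0    # speed of light, m/s
M_JUPITER = 1.90e27  # kg, standard value for Jupiter's mass
M_VOYAGER = 825.0    # kg, rough launch mass of Voyager 1 (assumed figure)

def mass_energy(m_kg):
    """Rest-mass energy E = m * c**2, in joules."""
    return m_kg * C ** 2

print(f"Jupiter-mass energy: {mass_energy(M_JUPITER):.2e} J")
print(f"Voyager-mass energy: {mass_energy(M_VOYAGER):.2e} J")
print(f"reduction factor:    {M_JUPITER / M_VOYAGER:.1e}")
```

The change in ring shape thus buys roughly twenty-four orders of magnitude in required energy, which is why White calls the idea "plausible" rather than impractical.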
    "The findings I presented today change it from impractical to plausible and worth further investigation," White told SPACE.com. "The additional energy reduction realized by oscillating the bubble intensity is an interesting conjecture that we will enjoy looking at in the lab."
    Laboratory tests
    White and his colleagues have begun experimenting with a mini version of the warp drive in their laboratory.
    They set up what they call the White-Juday Warp Field Interferometer at the Johnson Space Center, essentially creating a laser interferometer that instigates micro versions of space-time warps.
    "We're trying to see if we can generate a very tiny instance of this in a tabletop experiment, to try to perturb space-time by one part in 10 million," White said.
    He called the project a "humble experiment" compared to what would be needed for a real warp drive, but said it represents a promising first step.
    And other scientists stressed that even outlandish-sounding ideas, such as the warp drive, need to be considered if humanity is serious about traveling to other stars.
    "If we're ever going to become a true spacefaring civilization, we're going to have to think outside the box a little bit, were going to have to be a little bit audacious," Obousy said.

    Source
     
  10. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    IBM prepares for end of process shrinks with carbon nanotube transistors

    by John Timmer

    [​IMG]
    Carbon nanotubes sit on top of features etched in silicon. IBM Research

    The shrinking size of features on modern processors is slowly approaching a limit where the wiring on chips will only be a few atoms across. As this point approaches, both making these features and controlling the flow of current through them become serious challenges, ones that bump up against basic limits of materials.
    During my visit to IBM's Watson Research Center, it was clear that people in the company are already thinking about what to do when they run into these limits. For at least some of them, the answer would involve a radical departure from traditional chipmaking approaches, switching from traditional semiconductors to carbon nanotubes. And, while I was there, the team was preparing a paper (now released by Nature Nanotechnology) that would report some significant progress: a chip with 10,000 working transistors made from nanotubes, formed at a density that's two orders of magnitude higher than any previously reported effort.
    During my visit to Watson, I spoke with George Tulevski, who is working on the nanotube project, and is one of the authors of the recent paper. Tulevski described nanotubes as a radical rethinking of how you build a chip. "Silicon is a solid you carve down," he told Ars, "while nanotubes are something you have to build up." In other words, you can't start with a sheet of nanotubes and etch them until you're left with the wiring you want.
    One possible alternative is to use graphene, a sheet of carbon a bit like an unrolled nanotube, which can potentially be etched into distinct features. The problem, according to Tulevski, is that graphene doesn't naturally have a bandgap, the feature of semiconductors that makes them useful for circuitry. It's possible to manipulate graphene so that it develops a bandgap, but that adds to the complexity of manufacturing devices. Some carbon nanotubes, in contrast, are semiconductors without the need for any manipulation (others are metals).
    [​IMG]
    One of the furnaces IBM uses to grow nanotubes. That red is the glow of its heat.

    This still left IBM with a choice, Tulevski said. You could potentially attempt to grow a pure population of carbon nanotubes in place, on your chip. We've gotten much better at controlling the growth of nanotubes, and IBM has equipment in house that has been used to produce them. The problem there is that, if anything goes wrong with just one of the tubes, the whole chip would be lost. We may have gotten much better, but we've not gotten that good.
    So, Tulevski's group is taking a different approach: buy off-the-shelf nanotubes, isolate the ones they want, and then assemble them on a chip.
    [​IMG]
    The present and future? A collection of silicon wafers (case) sits next to a bottle of carbon nanotubes, purchased from a supplier. J. Timmer

    The first couple of steps are easier than they sound. Tulevski showed off a large jar, obtained from a chemical manufacturer, that contained a mixture of carbon nanotubes. As it turns out, the two types of nanotubes, metals and semiconductors, interact differently with a standard column of the type commonly used in chemistry and biochemistry labs. Simply run a mixture down the column, and it's possible to separate out a relatively pure population of one type. To make matters even more convenient, the two populations are slightly different colors.
    [​IMG]
    Two populations of carbon nanotubes are separated on a column. The dark band on top is a population of metallic nanotubes; the reddish band at the bottom contains the semiconductors. J. Timmer

    Once you have a collection of semiconducting nanotubes, you have to build circuits out of them. If we're ever going to make processors out of them, this has to be done quickly, cheaply, and consistently. As Tulevski described it in referring to purifying the right nanotubes, a one part-per-billion error rate just isn't good enough when you consider the number of transistors on a modern chip.
    The team at Watson is working on solution processing, which is where the new paper comes in. The idea is to pre-pattern the needed circuitry onto a chip using IBM's existing foundry experience. Once that's in place, a solution containing carbon nanotubes can be washed across the chip, at which point they'll drop out of solution and attach to the chip based on the pattern.
    In the paper, the pattern was set up by etching away silicon dioxide to reveal an underlying layer of hafnium dioxide. The HfO2 layer could interact with a charged organic molecule (4-(N-hydroxycarboxamido)-1-methylpyridinium iodide), creating a charged surface. The carbon nanotubes could then be floated across this surface while encased in a coating of an organic molecule with the opposite charge. A simple ion exchange reaction locks the nanotube in place above the hafnium layer.
    With the hafnium features at 70nm wide, this process was used to create field-effect transistors (FETs), and it worked with an efficiency of over 90 percent. The density of these FETs was 10⁹ (a billion) per square centimeter, 100 times higher than the previously reported best. And these devices could be made en masse: the researchers were able to test over 10,000 of the FETs on a single chip, and found over 7,000 functional ones (most of the rest ended up with a metallic nanotube instead of a semiconductor).
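The arithmetic behind those yield and purity numbers is easy to check. The billion-transistor chip below is an assumed order of magnitude for a modern processor, not a figure from the paper:

```python
tested = 10_000
functional = 7_000  # "over 7,000" functional FETs in the reported chip

device_yield = functional / tested
print(f"device yield: {device_yield:.0%}")  # roughly 70 percent

# Why even part-per-billion purity is marginal: the expected number of
# stray metallic nanotubes on a chip with a modern transistor count.
transistors = 1_000_000_000  # assumed order of magnitude for a modern chip
impurity = 1e-9              # one metallic tube per billion semiconducting ones
print(f"expected metallic-tube defects per chip: {transistors * impurity:.0f}")
```

With around one expected defect per billion-transistor chip even at that purity, either purification or defect tolerance has to improve before this scales.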
    This still isn't ready for chip manufacture. But it's a lot closer than most previous efforts, and gives IBM's team some obvious things to troubleshoot if they want to boost the efficiency further.

    SOURCE
     
  11. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Going boldly: Behind the scenes at NASA's hallowed Mission Control Center

    HOUSTON, TEXAS—Astronauts have been saying "Houston" into their radios since 1965. The callsign refers in general to the Johnson Space Center in Texas, and the people who answer to it sit in the Mission Control Center, located in Building 30 near the south end of the Lyndon B. Johnson Space Center (JSC) campus. "Mission Control" has been the subject of movies, television shows, and documentaries for decades. It's usually depicted as a bustling room filled with serious folks in short-sleeved white shirts and skinny black ties who shout dramatically about damaged spaceships while frantically pressing buttons on chunky 1960s control consoles. What is it really like, though, to sit at one of those consoles? What do all of those buttons do? .... more (5 page article)
     
  12. ancestor(v)

    ancestor(v) Admin
    Staff Member

    Jun 26, 2007
    2,829
    5,538
    90
    Measurements retroactively force photons to be both wave and particle

    One of the stranger features of the quantum world is that light—even individual photons—can behave as a wave or a particle, depending on how you measure it. But, according to papers released by Science today, the quantum weirdness doesn't end there. Researchers have now found a way to put a photon in a quantum superposition where it is both a wave and a particle at the same time. Worse still, one setup allows them to determine the photon's nature as a wave or particle after it has gone through an apparatus where it must act as one or the other.

    Got that? Didn't think so, so let's go through it in more detail.

    click here for source
     
  13. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Our Souls Are In Our Eyes, Psychologists Claim

    By: Natalie Wolchover, Life's Little Mysteries Staff Writer

    As the cheesy pickup line suggests, your eyes may really be the window to your soul. According to a new study by Yale University psychologists, most people intuitively feel as if their "self" — otherwise known as their soul, or ego — exists in or near their eyes.
    In three experiments, the researchers probed preschoolers' and adults' intuitions about the precise location of the self in the body. The participants were shown pictures of cartoon characters, and in each picture a small object (a buzzing fly or snowflake) was positioned near a different section of the character's body (face or torso or feet, etc.), always at the same distance away.
    The study participants were then asked which pictures showed the object closest to the body, the hypothesis being that people would interpret the object as closest when it was near what they intuitively believed to be the soul's location.
    As reported earlier this month in the journal Cognition, the vast majority of the 4-year-olds and adults in the study thought the object was closest to the character when it was near the character's eyes. This was true even when the cartoon character was a green-skinned alien whose eyes were on its chest rather than in its head – suggesting that it was the eyes, rather than the brain, that seemed most closely tied to the soul.
    According to lead researcher Christina Starmans of the Mind and Development Lab at Yale, she and study co-author Paul Bloom designed their experiment after a conversation in which they discussed intuitively feeling as if their consciousnesses were "located" near their eyes, and that objects seemed closest to them when near their eyes. "We set out to test whether this was a universally shared intuition," Starmans told Life's Little Mysteries.
    As it turned out, it was — even among young children. [Take the test]
    "The indirect nature of our method, and the fact that these judgments are shared by adults and preschoolers, suggests that our results do not reflect a culturally learned understanding … but might instead be rooted in a more intuitive or phenomenological sense of where in our bodies we reside," the authors concluded.
    However, experts disagreed about the implications of the research. Neurologist Robert Burton, author of numerous books and articles on the mind-body connection, thinks the results don't rule out the possibility that Westerners' sense that we exist in our eyes is culturally indoctrinated.
    Burton, former chief of the division of neurology at University of California, San Francisco-Mount Zion Hospital, said the most interesting result of the study seems to have been brushed under the rug by the researchers: It is that the 4-year-olds and adults didn't actually give the same responses during the experiment with the alien cartoon character. Almost as many children thought the buzzing fly was closest to the alien when it was near his eyeless head as when it was near his eye-bearing chest. Meanwhile, the adults almost unanimously selected the chest-eyes. "This suggests that something has transpired during the time between age 4 and adulthood that affects our understanding of the identity of other people," Burton said.
    In other words, it seems we learn to associate identity with eyes, rather than doing it innately from birth. Perhaps, for example, eyes take on more importance as we develop awareness of the social cues that other people convey with their eyes. Or, perhaps it's because adults have learned that it's good etiquette to make eye-contact.
    Furthermore, the study participants may not have interpreted the idea of the buzzing fly and snowflake being "closer" to a cartoon character as meaning that they were closer to its soul or self. Objects look bigger when they are nearer one's eyes, and this may have confused the participants into labeling them as "closer." [Gallery: The Most Amazing Optical Illusions]
    Georg Northoff, a neuropsychiatrist at the University of Ottawa, agrees that the authors' interpretation of their experimental results is "far-fetched." The issues with this particular study aside, Northoff said a large body of evidence suggests most people do have a sense of self that physically manifests itself in their bodies. "We always have the tendency to locate something and materialize it in the body as mind or as soul," he wrote in an email. "That seems to be predisposed by the way our brain works, though the mechanisms remain unclear."
    It is also worth noting that the part of the brain in which self-awareness is thought to arise, called the ventromedial prefrontal cortex, happens to be located behind the eyes. It is possible, Burton said, that we may "feel" as if we are physically located near our eyes because our identity emerges in the neurons there.

    SOURCE
     
  14. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    #114 R29k, Jan 17, 2013
    Last edited: Jan 17, 2013
    Moon Illusion

    The ancients knew the moon looks bigger near the horizon but no theory convincingly explains the illusion. Now a new idea aims to settle the debate once and for all


    [​IMG]
    One of the classic optical illusions involves the Moon, which appears larger near the horizon than overhead. This illusion has been known and discussed for centuries and yet its explanation is still hotly contested.
    Today, the debate is set to reignite thanks to the work of Joseph Antonides and Toshiro Kubota at Susquehanna University in Pennsylvania. These guys have a new theory that the illusion occurs because of a contradiction between the way the brain compares distance cues from its perceptual model of the world and cues from binocular vision.
    That the illusion exists is uncontested. One easily accessible proof comes from photographic evidence that the size of the moon remains constant as it crosses the sky. So the question of why it appears larger near the horizon has been studied by many people from a variety of disciplines.
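As a quick back-of-the-envelope check (my own sketch, using rounded mean values rather than figures from the article), the Moon's true angular size can be computed from its radius and distance; the geometry actually makes the horizon moon marginally smaller, not larger:

```python
import math

# Rounded mean values, assumed for this sketch (not from the article).
MOON_RADIUS_KM = 1737.4      # mean lunar radius
MOON_DISTANCE_KM = 384400    # mean distance, Earth's centre to the Moon
EARTH_RADIUS_KM = 6371       # the observer stands on Earth's surface

def angular_diameter_deg(distance_km):
    """Full angular diameter of the Moon, in degrees, at a given distance."""
    return math.degrees(2 * math.atan(MOON_RADIUS_KM / distance_km))

# Overhead, the observer is roughly one Earth radius closer to the Moon.
overhead = angular_diameter_deg(MOON_DISTANCE_KM - EARTH_RADIUS_KM)
# On the horizon, the observer-to-Moon distance is about the centre distance.
horizon = angular_diameter_deg(MOON_DISTANCE_KM)

print(f"overhead: {overhead:.3f} deg, horizon: {horizon:.3f} deg")
```

Both values come out near half a degree, with the horizon moon smaller by under two per cent, so the reported doubling in apparent size must be perceptual rather than optical, consistent with the photographic evidence.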
    Perhaps the most well known explanation is the Size-Contrast theory. This states that the perceived angular size of the moon is proportional to the perceived angular size of objects around it.
    Near the horizon, the moon is close to objects of a size that we know, such as trees, buildings and so on. And since it is comparable in size to these familiar objects, it appears larger.
    This is related to the famous Ebbinghaus illusion in which the apparent size of a circle depends on the size of circles near by.
    Antonides and Kubota say there are two problems with this theory. The first is that it does not explain the degree of expansion. Some observers report the moon appearing twice as large near the horizon and yet in experiments with the Ebbinghaus illusion, observers typically report an increase of only about 10 per cent.
    The second is that it does not explain why the effect disappears in photographs and videos. By contrast the Ebbinghaus illusion is easy to reproduce.
    The new theory is based on the idea that the brain judges distance in two different ways. The first is with binocular vision. When the image from each eye is the same, an object must be distant.
    The second is our built-in model of the world in which we perceive the sky to be a certain finite distance away and the Sun, moon and stars to be in front of it (rather than appearing through a hole, for example).
    This results in a contradiction. Our perceptual model of the world suggests the moon is closer than the sky, while our binocular vision suggests it is not.
    Antonides and Kubota’s theory is that the illusion is the result of the way the brain handles this contradiction. “We hypothesize that the brain resolves this contradiction by distorting the visual projections of the moon resulting in an increase in angular size,” they say.
    They point out that the distortion depends crucially on the perceived distance to the sky. This is heavily influenced by distance cues on the ground, which make the sky, and therefore also the moon, look closer. Similarly, when these cues are absent, when the moon is high in the sky, both the moon and the sky seem further away.
    That’s an interesting idea that should stimulate debate. Antonides and Kubota say they want to explore the idea further by experimenting with the illusion. For example, they want to measure changes in the apparent expansion of the moon with different distance cues–from an open field, a valley, mountain, inner city landscape and so on.
    It might also be interesting to see how (and even if) the illusion arises in people who lack binocular vision.
    Then there is the question of why the illusion reportedly disappears when the world is viewed upside down, i.e. standing on your head. Having not tried this, I cannot vouch for its veracity. But come the next full moon, I fully intend to test it. So don’t be surprised to see some observers of the next full moon acting rather strangely.
    Ref: arxiv.org/abs/1301.2715: Binocular Disparity as an Explanation for the Moon Illusion

    Source

     
  15. R29k

    R29k MDL GLaDOS

    Stanford Scientists Use Copy/Paste Method to Make Cells HIV Resistant

    The method stopped HIV from entering healthy immune cells

    Stanford scientists have found a way to protect the immune system from HIV by placing resistant genes into T cells.

    Researchers from the Stanford University School of Medicine, led by Matthew Porteus, MD, have used a cut and paste method where HIV-resistant genes were coupled with T cells to deny the virus' entry into healthy immune cells.

    The HIV virus typically enters immune cells by binding to one of two surface proteins: CCR5 and CXCR4. However, some people have a mutation in CCR5 that makes them resistant to HIV.

    Porteus and his team used this idea to create a method for making this protein inactive. They used a protein, called a zinc finger nuclease, that finds and attaches to the CCR5 receptor gene and modifies it to imitate the mutated, inactive versions. It does this by breaking up pieces of DNA.

    In addition to breaking a sequence in the CCR5 receptor's DNA, the team pasted three genes that are resistant to HIV. They help protect the cells via both the CCR5 and CXCR4 receptors. This technique is called stacking, where multiple layers of protection are used to protect the cells.

    In tests, T cells with single, double and triple gene modifications were protected against HIV. However, as expected, the triple modifications were much more resistant to infection. In fact, they had 1,200-fold protection against HIV strains that use the CCR5 receptor and 1,700-fold protection against those that use the CXCR4 receptor.

    T cells without any protection were infected within 25 days.

    "We inactivated one of the receptors that HIV uses to gain entry and added new genes to protect against HIV, so we have multiple layers of protection -- what we call stacking," said Porteus. "We can use this strategy to make cells that are resistant to both major types of HIV."

    There are two issues that the team has to work out, though. First, the zinc finger nuclease could cause a break elsewhere in the DNA and cause cancer. Second, the cells may not accept the genetic change.

    Source
     
  16. Yen

    Yen Admin
    Staff Member

    May 6, 2007
    12,669
    13,337
    340
    We are living in a virtual reality.

    Scientists are coming closer to what 'the wise people' already know and what is already found in Hinduism and Tibetan mysticism. They called it Maya, Sanskrit for illusion. And the 'absolute' Brahman, the reality which is ever now.

    Full article:

    http://www.matrixwissen.de/index.ph...nt&catid=125:quantenphysik&Itemid=105&lang=en

    (If it should be too much physics, you may jump directly to 3. Thomas Campbell's model of a virtual reality.....) :)



    My 2 cents.....
    Finally scientists dare to evaluate results which are usually swept under the table. The movie The Matrix (1999) picks this up as well.....

    And the most impressive thing is this: the truth can be experienced without study. You just need to leave the present as it is and keep the mind out of it.

    Here he's quite good, but still misses the point. This 'additional meta layer information' is actually the only thing that exists and is neither inside nor outside. The mind abstracts everybody's own virtual reality from 'it'.

    There is actually no difference between 'you' and the 'additional meta layer information'.....both are just two separate ideas of the same reality. The idea of time is also a (human) product of it.


     
  17. R29k

    R29k MDL GLaDOS

    Scientists propose destroying asteroids with sun-powered laser array

    This past Friday was not a good day for asteroid-human relations, with asteroid 2012 DA14 passing a mere 27,700 km (17,200 miles) from the Earth just a few hours after a meteor exploded over the Russian city of Chelyabinsk, damaging hundreds of buildings and injuring thousands. Scientists have been quick to point out that both of these events – a meteor exploding over a populated area and a large asteroid passing inside Earth's geosynchronous orbit – are quite rare, but when the worst-case scenario is the complete annihilation of all life on Earth, it's probably best to be prepared. That's why researchers in California recently proposed DE-STAR – a system which could potentially harness the sun's energy to dissolve wayward space rocks up to ten times larger than 2012 DA14 with a vaporizing laser.
    Over the past few years, scientists have been exploring several methods to prevent a cataclysmic asteroid impact on Earth, including launching a spacecraft to study asteroid collisions, arranging a series of satellites to monitor asteroid activity and even deflecting them with paintballs. The looming question remains unanswered though: what's the best way to actually stop an asteroid from striking the Earth?
    Philip M. Lubin from UC Santa Barbara and Gary B. Hughes from California Polytechnic State University may have an answer with DE-STAR (short for "Directed Energy Solar Targeting of Asteroids and exploRation"). According to the researchers, DE-STAR would consist of satellites designed to gather energy from the sun and convert it into an enormous phased array of lasers powerful enough to disintegrate an asteroid.
    It's still all theoretical at this point, but Lubin and Hughes insist the technology for such a system already exists, just not at the correct scale needed to affect a chunk of rock hurtling through space. Their proposal includes rough outlines for DE-STAR models at different diameters, ranging from one about the size of a tabletop to another that would be 10 kilometers (about 6 miles) across. A greater size would mean a more powerful laser.
    If you're imagining a laser blast like the one that took out the Death Star in Star Wars, though, think again. Hughes and Lubin say that a DE-STAR system 100 meters (about 328 feet) across would just be able to slowly push comets and asteroids out of orbit, away from Earth. A system measuring 10 kilometers (6 miles) in diameter could produce 1.4 megatons of energy per day, enough to completely erode an asteroid measuring 500 meters (about 1,640 feet) wide, but it would take about one year.
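To sanity-check that figure, here is a rough order-of-magnitude estimate. The density and vaporization energy below are assumed round numbers for rocky material, not values taken from the DE-STAR proposal:

```python
import math

MEGATON_J = 4.184e15             # one megaton of TNT, in joules
energy_per_day_J = 1.4 * MEGATON_J

radius_m = 250                   # a 500 m wide asteroid
density_kg_m3 = 2000             # ASSUMED rocky / rubble-pile density
vaporization_J_per_kg = 8e6      # ASSUMED energy to vaporize silicate rock

# Spherical mass, then total vaporization energy over the daily laser output.
mass_kg = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
days_needed = mass_kg * vaporization_J_per_kg / energy_per_day_J
print(f"roughly {days_needed:.0f} days to erode the asteroid")
```

With these assumed inputs the result lands within a factor of two of the researchers' "about one year", which is about as close as a back-of-the-envelope estimate can be expected to get.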
    The researchers also claim the laser array could have other uses besides asteroid protection. For example, DE-STAR could help in simply studying an asteroid's composition and possibly provide a new propulsion system for spacecraft, all while simultaneously defending the Earth from asteroids.
    Even if Lubin and Hughes are correct in their calculations, scaling up the proper technology for their proposed DE-STAR system would be no easy task. By their own admission, there are many variables in place that would need to be worked out first, but it's better than having no plan at all. After the events last Friday though, we might start seeing some projects like this receive the support they need to get off the ground.
    Source: UC Santa Barbara

    SOURCE
     
  18. R29k

    R29k MDL GLaDOS

    The building blocks for a quantum internet

    Physicists at the University of Innsbruck in Austria have succeeded in transferring quantum information from an atom to a photon, a process considered vital in the construction of a quantum computer. The discovery would allow quantum computers to exchange data at the speed of light along optical fibres. Lead researcher on the project Tracy Northup told Humans Invent, “What we’ve done is to show that you can map quantum information faithfully from an ion onto a photon.” Northup’s team used an “ion trap” with mirrors and lasers to produce a single photon, with its quantum state intact, from a trapped calcium ion.
    Humans Invent spoke with Tracy Northup about her work in quantum computing and the possibility of a quantum internet.
    What is a quantum computer, and why would we need one?
    A classical computer encodes information in one of two states, 0 or 1. A quantum computer replaces this bit with a quantum bit, or qubit; in a qubit, 0 and 1 are states of a quantum-mechanical system, such as the energy of an atom’s outermost electron, or the polarization of light. A quantum bit can be either 0 or 1, but it can also be in a superposition of 0 and 1 at the same time, so it offers a very different way to store and process information.
    A quantum computer replaces the bit with a qubit
    There are certain types of problems that are very hard on a classical computer that could be solved much more efficiently on a quantum computer. One famous example is factoring large numbers because the difficulty of this problem is the basis for RSA encryption (an algorithm for public-key cryptography).
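A toy sketch of that idea (my illustration, not from the interview): a qubit can be modelled as a normalized pair of amplitudes, and measuring it collapses the superposition to 0 or 1 with probabilities given by the squared amplitudes:

```python
import math
import random

def make_qubit(a, b):
    """Normalize amplitudes so that |a|^2 + |b|^2 = 1."""
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return (a / norm, b / norm)

def measure(qubit):
    """Collapse to 0 with probability |a|^2, otherwise to 1."""
    a, _ = qubit
    return 0 if random.random() < abs(a) ** 2 else 1

# An equal superposition of 0 and 1, as described above.
plus = make_qubit(1, 1)
a, b = plus
assert abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < 1e-9

random.seed(42)
counts = [0, 0]
for _ in range(10000):
    counts[measure(plus)] += 1
print(counts)  # each outcome appears roughly half the time
```

Note that this classical sampling captures only the measurement statistics; real qubits can also interfere with one another, which is where the computational power comes from.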
    Why does quantum information have to be transmitted by optical channels?
    It doesn’t have to be, but that’s the idea that seems to make the most sense. Photons travel at the speed of light and they are also well isolated from their environment, so it’s highly likely that the information you put in will come out the other end and not get lost along the way. Ions are great for quantum computation but no one wants to think about transporting ions over long distances because that would be very slow and cumbersome.
    What can quantum computers do that today’s computers can’t?
    If we had quantum computers with thousands of quantum bits, they could do specific tasks, like factoring large numbers or a database search, faster. Since factoring is the basis for today’s encryption, that generates a lot of interest.
    But currently in the lab, the state of the art is tens of quantum bits, not thousands. So what many people are excited about as an intermediate goal is quantum simulation, that is, building not an all-purpose quantum computer but a very specific device that is tailored to simulate another quantum system, such as molecular structure, or superconductivity. It’s quite computationally intensive to simulate quantum systems on classical computers and so it may be possible to do simulations that are beyond the reach of classical computers.
    What would be the benefits of a network of quantum computers or a quantum internet?
    One benefit is quantum cryptography. Over a quantum network, you could distribute secure information. In fact, quantum key distribution systems are already available commercially.
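As a concrete illustration of key distribution, here is a toy simulation in the spirit of the BB84 protocol (an assumed example; the interview does not name a specific protocol). Alice encodes random bits in randomly chosen bases, Bob measures in his own random bases, and the two keep only the positions where the bases happened to match:

```python
import random

random.seed(1)
N = 64
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("ZX") for _ in range(N)]
bob_bases   = [random.choice("ZX") for _ in range(N)]

# Without an eavesdropper, Bob reads Alice's bit whenever his basis matches
# hers; a mismatched basis yields a random bit, and those slots are discarded.
bob_bits = [bit if ab == bb else random.randint(0, 1)
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

alice_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
             if ab == bb]
bob_key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases)
           if ab == bb]

assert alice_key == bob_key  # identical secret keys when nobody listens in
print(len(alice_key), "shared key bits from", N, "transmissions")
```

In the real protocol an eavesdropper measuring in the wrong basis disturbs the quantum state, so Alice and Bob can detect the intrusion by comparing a sample of their keys; that physical guarantee is what this classical toy cannot show.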
    You could use quantum teleportation protocols to transmit data from one site to the other
    Another benefit would be distributed quantum computing. By linking together smaller computers, you could carry out larger, more powerful computations.
    Could you also transmit data using quantum entanglement?
    Yes. Instead of mapping information from atom to photon to atom, you could also entangle each of two remote atoms with a photon and then, based on measurements of those photons, generate entanglement between the atoms. Finally, you could use quantum teleportation protocols to transmit data from one site to the other.
    How far are we from developing quantum computers for everyday use?
    Well, on one hand, quantum computers would probably be for very specialized tasks rather than everyday use, although I know they said that about classical computers!
    In a nutshell, no one really knows. Of course, we’d love to have quantum computers, but I think what drives most researchers in the field is not necessarily this far-off goal but the kinds of strange things you learn along the way. Scott Aaronson expressed this really nicely in an essay he wrote for the NY Times a few years ago.

    Source
     
  19. R29k

    R29k MDL GLaDOS

    How Self-Healing Microchips Recover

    Caltech engineers have constructed a new kind of microchip that can learn to heal its own information pathways
    By Marshall Honorof and TechNewsDaily

    Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry. Image: Creative Commons Attribution-Share Alike 3.0 Unported | Zephyris
    One of the reasons why robots and artificial intelligence programs — even very sophisticated ones — do not qualify as living things is that they lack the capacity for self-repair. Fry a machine's circuits, and it can do nothing except wait for a human to repair it. A team of researchers at the California Institute of Technology (Caltech) has taken some of the first steps toward making a self-healing machine by creating a computer chip that can learn to heal its own information pathways.
    The chip comes from the High-Speed Integrated Circuits laboratory, which specializes in microchip technology. There are thousands of pathways by which information can travel through a microchip, but because each one is very specialized, a single fault traditionally renders the whole system inoperative.
    Each chip contains more than 100,000 transistors (the most basic component in a microchip), which don't all function simultaneously. Rather, the researchers burned vast swaths of transistors out of the chip with a laser, then allowed the systems to recalibrate. As long as the blast did not catch any data caches in its crosshairs, the chip could seek out alternate routes and continue to function. With the help of an application-specific integrated-circuit (ASIC) processor on each chip, the system could "learn" which pathways were broken and adjust accordingly.
    If a traditional microchip is comparable to an electric circuit (remove one piece and the entire system collapses), this new technology is more similar to a human brain. If one pathway becomes inaccessible, the brain will discover novel ways to relay information. Of course, it is possible to inflict catastrophic damage on a system (be it brain or microchip) from which it cannot recover, but with more than 100,000 methods of delivery, these microchips could prove to be extremely robust.
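The rerouting idea can be sketched as a graph search. This is a loose analogy of my own, not the actual Caltech design (which uses on-chip sensors and a dedicated ASIC): if one link is burned out, a search over the remaining links finds an alternate route:

```python
from collections import deque

def find_path(edges, start, goal):
    """Breadth-first search over undirected links; returns a path or None."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, []).append(u)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical pathways between a chip's input and output blocks.
links = [("in", "a"), ("a", "out"), ("in", "b"), ("b", "c"), ("c", "out")]
assert find_path(links, "in", "out") == ["in", "a", "out"]

# "Burn out" the direct link, as the laser did to the transistors...
damaged = [e for e in links if e != ("a", "out")]
# ...and the search discovers an alternate route, as in the brain analogy.
assert find_path(damaged, "in", "out") == ["in", "b", "c", "out"]
print("rerouted successfully")
```

A traditional chip corresponds to the case where only a single path exists: remove one link and `find_path` returns None, and the whole system is dead.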
    The self-healing chips are an intriguing step in machine evolution, but they do lack one crucial feature of actual living things: the ability to regenerate over time. While the Caltech microchips can withstand extensive damage and figure out ways to work around it, a section fried by lasers will still be fried years later. Unlike biological tissue, which repairs itself over time, each chip has a limited shelf life.
    Still, the fact that the microchip is not yet completely analogous to living things should not take away from the novelty of the invention or its potential usefulness. Right now, if a microchip in your computer or cell phone burns out, the whole system is essentially out of commission until you can replace the chip, to say nothing of any data you had stored on it. Implementing these devices in consumer tech could save countless hours and dollars in tech support.
    "Bringing this type of electronic immune system to integrated-circuit chips opens up a world of possibilities," said Ali Hajimiri, a Caltech engineering professor. "[Microchips] can now both diagnose and fix their own problems without any human intervention, moving one step closer to indestructible circuits."
    That's all well and good when a flood leaves your next PC intact, but might be less desirable when the robot armies rise up to take vengeance upon humanity. Hopefully, the former is a likelier option.

    Source

     
  20. R29k

    R29k MDL GLaDOS

    Study: early birds had four wings

    These birds were from the Cretaceous period and probably did not catch the worm.
    by Philippa Warr, Wired.co.uk

    The ancestors of modern birds probably had four wings rather than two, according to a study of fossils found in a Chinese museum.
    The four-winged early birds had been identified from fossilised remains a number of years ago, but it was unclear whether the creatures were precursors to modern birds or whether they represented an evolutionary cul-de-sac and had simply died out.
    However, eleven skeletons of primitive birds discovered at the Shandong Tianyu Museum of Nature feature evidence of having large feathers on their hind limbs. The remains date from the early Cretaceous period (around 120 million years ago) and, according to the study, "provide solid evidence for the existence of enlarged leg feathers on a variety of basal birds".
    Today's two-winged situation could then be the result of a gradual reduction in feathering of these hind limbs, probably as a result of the birds living on the ground and needing to walk around unencumbered.
    "If an animal has big feathers on its legs and feet, it's definitely something that's not good for fast running," said Xing Xu from Linyi University in Shandong province in an interview with New Scientist.
    The fossil finds help bolster the case for four-winged early birds; however, the evidence is not definitive. As a result, Xu and his fellow researchers intend to look at other remains in the museum's collection, as well as investigating whether the feathers and wings would have been capable of flight.

    Source
     