The MDL Times - Science and Tech. News on MDL

Discussion in 'Serious Discussion' started by kldpdas, Jun 30, 2011.

  1. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    #81 R29k, Jan 15, 2012
    Last edited by a moderator: Apr 20, 2017
    IBM smashes Moore's Law, cuts bit size to 12 atoms

    Article Link here

     
  2. bludgard

    bludgard MDL Member

    Jan 4, 2011
    211
    54
    10
It's all well and good, but the way things are going we won't need all this storage space: What will we fill it with?;)
    SOPA/PITA bull****!:weeping:

    :biggrin5:
     
  3. R29k

    R29k MDL GLaDOS

    The Great Disk Drive in the Sky: How Web giants store big—and we mean big—data

    By Sean Gallagher
    Google technicians test hard drives at their data center in Moncks Corner, South Carolina
    Consider the tech it takes to back the search box on Google's home page: behind the algorithms, the cached search terms, and the other features that spring to life as you type in a query sits a data store that essentially contains a full-text snapshot of most of the Web. While you and thousands of other people are simultaneously submitting searches, that snapshot is constantly being updated with a firehose of changes. At the same time, the data is being processed by thousands of individual server processes, each doing everything from figuring out which contextual ads you will be served to determining in what order to cough up search results.
    The storage system backing Google's search engine has to be able to serve millions of data reads and writes daily from thousands of individual processes running on thousands of servers, can almost never be down for a backup or maintenance, and has to perpetually grow to accommodate the ever-expanding number of pages added by Google's Web-crawling robots. In total, Google processes over 20 petabytes of data per day.
    That's not something that Google could pull off with an off-the-shelf storage architecture. And the same goes for other Web and cloud computing giants running hyper-scale data centers, such as Amazon and Facebook. While most data centers have addressed scaling up storage by adding more disk capacity on a storage area network, more storage servers, and often more database servers, these approaches fail to scale because of performance constraints in a cloud environment. In the cloud, there can be potentially thousands of active users of data at any moment, and the data being read and written at any given moment reaches into the thousands of terabytes.
    The problem isn't simply an issue of disk read and write speeds. With data flows at these volumes, the main problem is storage network throughput; even with the best of switches and storage servers, traditional SAN architectures can become a performance bottleneck for data processing.
    Then there's the cost of scaling up storage conventionally. Given the rate that hyper-scale web companies add capacity (Amazon, for example, adds as much capacity to its data centers each day as the whole company ran on in 2001, according to Amazon Vice President James Hamilton), the cost required to properly roll out needed storage in the same way most data centers do would be huge in terms of required management, hardware, and software costs. That cost goes up even higher when relational databases are added to the mix, depending on how an organization approaches segmenting and replicating them.
    The need for this kind of perpetually scalable, durable storage has driven the giants of the Web—Google, Amazon, Facebook, Microsoft, and others—to adopt a different sort of storage solution: distributed file systems based on object-based storage. These systems were at least in part inspired by other distributed and clustered filesystems such as Red Hat's Global File System and IBM's General Parallel Filesystem.
    The architecture of distributed file systems separates the metadata about content from the data itself, allowing for high volumes of parallel reading and writing of data across multiple replicas, and tossing concepts like "file locking" out the window.
    The impact of these distributed file systems extends far beyond the walls of the hyper-scale data centers they were built for— they have a direct impact on how those who use public cloud services such as Amazon's EC2, Google's AppEngine, and Microsoft's Azure develop and deploy applications. And companies, universities, and government agencies looking for a way to rapidly store and provide access to huge volumes of data are increasingly turning to a whole new class of data storage systems inspired by the systems built by cloud giants. So it's worth understanding the history of their development, and the engineering compromises that were made in the process.
Google File System

Google was among the first of the major Web players to face the storage scalability problem head-on. And the answer arrived at by Google's engineers in 2003 was to build a distributed file system custom-fit to Google's data center strategy—Google File System (GFS).
    GFS is the basis for nearly all of the company's cloud services. It handles data storage, including the company's BigTable database and the data store for Google's AppEngine platform-as-a-service, and it provides the data feed for Google's search engine and other applications. The design decisions Google made in creating GFS have driven much of the software engineering behind its cloud architecture, and vice-versa. Google tends to store data for applications in enormous files, and it uses files as "producer-consumer queues," where hundreds of machines collecting data may all be writing to the same file. That file might be processed by another application that merges or analyzes the data—perhaps even while the data is still being written.
    "Some of those servers are bound to fail—so GFS is designed to be tolerant of that without losing (too much) data"
Google keeps most technical details of GFS to itself, for obvious reasons. But as described by Google research fellow Sanjay Ghemawat, principal engineer Howard Gobioff, and senior staff engineer Shun-Tak Leung in a paper first published in 2003, GFS was designed with some very specific priorities in mind: Google wanted to turn large numbers of cheap servers and hard drives into a reliable data store for hundreds of terabytes of data that could manage itself around failures and errors. And it needed to be designed for Google's way of gathering and reading data, allowing multiple applications to append data to the system simultaneously in large volumes and to access it at high speeds.
    Much in the way that a RAID 5 storage array "stripes" data across multiple disks to gain protection from failures, GFS distributes files in fixed-size chunks which are replicated across a cluster of servers. Because they're cheap computers using cheap hard drives, some of those servers are bound to fail at one point or another—so GFS is designed to be tolerant of that without losing (too much) data.
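The chunk-and-replicate scheme described above can be sketched in a few lines of Python. This is purely illustrative: the 64 MB chunk size comes from the GFS paper, but the three-way replication factor and the round-robin placement are assumptions made for the example, not Google's actual placement policy.

```python
# Illustrative sketch of GFS-style chunking and replication (not Google's code).
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB chunks, as described in the GFS paper
REPLICAS = 3                   # assumed replication factor for this example

def chunkify(file_size):
    """Return the number of fixed-size chunks needed to store a file."""
    return (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE

def place_replicas(chunk_id, servers, replicas=REPLICAS):
    """Pick distinct chunk servers for a chunk's replicas (round-robin for clarity)."""
    return [servers[(chunk_id + i) % len(servers)] for i in range(replicas)]

servers = [f"chunkserver-{n}" for n in range(5)]
n_chunks = chunkify(200 * 1024 * 1024)  # a 200 MB file needs 4 chunks
placement = {c: place_replicas(c, servers) for c in range(n_chunks)}
```

Losing any one server then costs at most one replica of each chunk it held, and the data survives on the others.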
But the similarities between RAID and GFS end there, because those servers can be distributed across the network—either within a single physical data center or spread over different data centers, depending on the purpose of the data. GFS is designed primarily for bulk processing of lots of data. Reading data at high speed is what's important, not the speed of access to a particular section of a file, or the speed at which data is written to the file system. GFS provides that high throughput at the expense of more fine-grained reads and writes to files and more rapid writing of data to disk. As Ghemawat and company put it in their paper, "small writes at arbitrary positions in a file are supported, but do not have to be efficient."
This distributed nature, along with the sheer volume of data GFS handles—millions of files, most of them larger than 100 megabytes and generally ranging into gigabytes—requires some trade-offs that make GFS very much unlike the sort of file system you'd normally mount on a single server. Because hundreds of individual processes might be writing to or reading from a file simultaneously, GFS needs to support "atomicity" of data—rolling back writes that fail without impacting other applications. And it needs to maintain data integrity with a very low synchronization overhead to avoid dragging down performance.
    GFS consists of three layers: a GFS client, which handles requests for data from applications; a master, which uses an in-memory index to track the names of data files and the location of their chunks; and the "chunk servers" themselves. Originally, for the sake of simplicity, GFS used a single master for each cluster, so the system was designed to get the master out of the way of data access as much as possible. Google has since developed a distributed master system that can handle hundreds of masters, each of which can handle about 100 million files.
    When the GFS client gets a request for a specific data file, it requests the location of the data from the master server. The master server provides the location of one of the replicas, and the client then communicates directly with that chunk server for reads and writes during the rest of that particular session. The master doesn't get involved again unless there's a failure.
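The read path just described can be sketched as a toy in Python. All class, method, path, and server names here are invented for illustration; Google's real GFS interfaces have never been made public.

```python
# Toy sketch of the GFS three-layer read path: client -> master -> chunk server.
# Names and structures are hypothetical, for illustration only.

class Master:
    """Holds the in-memory index: file name -> list of (chunk handle, replicas)."""
    def __init__(self):
        self.index = {"/logs/web-00001": [("chunk-a1", ["cs-1", "cs-4", "cs-7"])]}

    def lookup(self, path, chunk_index):
        handle, replicas = self.index[path][chunk_index]
        return handle, replicas  # the client caches this and bypasses the master

class Client:
    """Asks the master where a chunk lives, then reads from a chunk server directly."""
    def __init__(self, master):
        self.master = master

    def read(self, path, chunk_index):
        handle, replicas = self.master.lookup(path, chunk_index)
        # From here on the client talks to the chunk server; the master is
        # not involved again unless something fails.
        return f"read {handle} from {replicas[0]}"
```

The key design point is visible even in this sketch: the master hands out metadata once per session, so the bulk data traffic never flows through it.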
    To ensure that the data firehose is highly available, GFS trades off some other things—like consistency across replicas. GFS does enforce data's atomicity—it will return an error if a write fails, then rolls the write back in metadata and promotes a replica of the old data, for example. But the master's lack of involvement in data writes means that as data gets written to the system, it doesn't immediately get replicated across the whole GFS cluster. The system follows what Google calls a "relaxed consistency model" out of the necessities of dealing with simultaneous access to data and the limits of the network.
This means that GFS is entirely okay with serving up stale data from an old replica if that's what's most available at the moment—so long as the data eventually gets updated. The master tracks changes, or "mutations," of data within chunks using version numbers to indicate when the changes happened. As some of the replicas get left behind (or grow "stale"), the GFS master makes sure those chunks aren't served up to clients until they're first brought up-to-date.
    But that doesn't necessarily happen with sessions already connected to those chunks. The metadata about changes doesn't become visible until the master has processed changes and reflected them in its metadata. That metadata also needs to be replicated in multiple locations in case the master fails—because otherwise the whole file system is lost. And if there's a failure at the master in the middle of a write, the changes are effectively lost as well. This isn't a big problem because of the way that Google deals with data: the vast majority of data used by its applications rarely changes, and when it does data is usually appended rather than modified in place.
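The version-number bookkeeping behind that staleness check can be illustrated with a small function (again, invented names, not Google's code): the master bumps a chunk's version on each mutation, and any replica reporting an older version is excluded from serving until it has been re-replicated.

```python
# Sketch of stale-replica detection via per-chunk version numbers (illustrative).

def serveable_replicas(master_version, replica_versions):
    """Return the servers whose replica matches the master's current version."""
    return [s for s, v in replica_versions.items() if v == master_version]

# cs-7 missed a mutation, so its replica is stale and won't be handed to clients.
replicas = {"cs-1": 7, "cs-4": 7, "cs-7": 6}
current = serveable_replicas(7, replicas)  # only cs-1 and cs-4 qualify
```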
    While GFS was designed for the apps Google ran in 2003, it wasn't long before Google started running into scalability issues. Even before the company bought YouTube, GFS was starting to hit the wall—largely because the new applications Google was adding didn't work well with the ideal 64-megabyte file size. To get around that, Google turned to Bigtable, a table-based data store that vaguely resembles a database and sits atop GFS. Like GFS below it, Bigtable is mostly write-once, so changes are stored as appends to the table—which Google uses in applications like Google Docs to handle versioning, for example.
    The foregoing is mostly academic if you don't work at Google (though it may help users of AppEngine, Google Cloud Storage and other Google services to understand what's going on under the hood a bit better). While Google Cloud Storage provides a public way to store and access objects stored in GFS through a Web interface, the exact interfaces and tools used to drive GFS within Google haven't been made public. But the paper describing GFS led to the development of a more widely used distributed file system that behaves a lot like it: the Hadoop Distributed File System.
    Image courtesy of Google Datacenter Video

    Source

     
  4. R29k

    R29k MDL GLaDOS

    Converting light to sound in cold quantum systems

    Toroidal optical whispering gallery, containing a tiny optical cavity
Oscillators lie at the core of many precision quantum experiments. The oscillations can exist in atomic clocks used for accurate timing, lasers and masers, or a variety of other devices, but the regular cycling of quantum oscillators plays an essential role in modern science and engineering. However, most uses have been confined to the electromagnetic regime, where the vibrations manifest as photons; the quantum states of mechanical oscillators, where the vibrations are sound waves, have proven more difficult to control.
However, researchers in Switzerland and Germany have built a special cavity where the electromagnetic quantum states resonate with the natural vibrations of the atoms. In doing so, E. Verhagen, S. Deléglise, S. Weis, A. Schliesser, and T. J. Kippenberg managed to couple a photon-based oscillator to a mechanical oscillator, controlling the mechanical quantum states with visible light. The result is a prototype of a quantum transducer, a device that converts light energy into mechanical energy.
    Just as electromagnetic energy travels in discrete bundles known as photons, mechanical energy in solids and other dense systems is carried in packets known as phonons. Mechanical oscillators have the advantage of low energy dissipation: phonons in a given physical system don't readily disperse their energy into their environment.
    Because of the typical natural frequencies of mechanical vibration, quantum control of these oscillators has only been achieved using microwave-frequency light and at very cold temperatures, much colder than can be achieved in most laboratories. For higher temperatures (though still less than 1 Kelvin), obtaining coherent mechanical oscillations has proven difficult. These factors limit the usefulness of controlled mechanical systems.
    To generate the controlling mechanism, the researchers constructed a toroidal cavity known as an optical whispering gallery. The name comes by analogy to certain large buildings (such as Statuary Hall in the United States Capitol) where the acoustic properties of the room ensure that even a low-amplitude sound like a whisper can be heard at certain distant points. In an optical whispering gallery, it's possible to produce low-intensity coherent standing waves of light. In this particular experiment, the researchers used optical-wavelength photons from a laser.
The standing-wave mode has a wavelength different from that of the laser light, and was tuned to produce resonant mechanical vibrations in the cavity. The phonons in these vibrations are induced by radiation pressure: the mechanical push that results from the photons in the standing wave. This process also works in reverse, so that the sound waves transfer energy back to the light.
The researchers carefully constructed their system to minimize dissipation of the energy from both sources into the environment, partly by keeping the system at 0.65 kelvin (0.65 degrees above absolute zero) to reduce thermal excitations.
Optical systems are highly tunable and easily controlled. Now that we have a quantum transducer that converts electromagnetic energy into mechanical oscillations, we can start thinking about optical control of solid-state devices or spin systems. This in turn opens up a potential new class of hybrid optical/mechanical quantum devices, with potential applications in quantum computing.
    Nature, 2012. DOI: 10.1038/nature10787 (About DOIs).
    Photograph by Ewold Verhagen and Tobias Kippenberg, used by kind permission

    Source

     
  5. R29k

    R29k MDL GLaDOS

    #85 R29k, Feb 19, 2012
    Last edited by a moderator: Apr 20, 2017
    A Brief History of Time


    Link :plane:
     
  6. R29k

    R29k MDL GLaDOS

    400-Plus Foot Asteroid Could Hit Earth In 2040

NASA officials have identified a large asteroid that is currently on course to hit Earth in approximately three decades' time, leading experts to begin discussing possible ways to change its course, various media outlets reported on Tuesday.
    The asteroid is 460 feet wide and could hit our planet on February 5, 2040, Rob Waugh of the Daily Mail wrote on Tuesday. Scientists believe there is a one in 625 chance that it will hit Earth.
    According to UPI reports, scientists at the 49th meeting of the Scientific and Technical Subcommittee of the United Nations (UN) Committee on the Peaceful Uses of Outer Space in Vienna said that they would be closely following the asteroid identified as 2011 AG5.
    The asteroid was discovered last January, and researchers at the session said that the odds of an impact are high enough that they should begin working on possible ways to deflect it.
    “2011 AG5 is the object which currently has the highest chance of impacting the Earth … in 2040. However, we have only observed it for about half an orbit, thus the confidence in these calculations is still not very high,” Detlef Koschny of the European Space Agency (ESA) Solar System Missions Division told SPACE.com, according to the UPI report.
    “We thus concluded that it not necessarily can be called a ‘real’ threat,” he added. “To do that, ideally, we should have at least one, if not two, full orbits observed.”
    Members of the UN Action Team on Near-Earth Objects (NEOs) are hoping to learn more about the asteroid’s course between 2013 and 2016, when they will be able to monitor it from the ground, said Telegraph reporter Rosa Prince.
    Despite the fact that they haven’t been able to learn a whole lot about 2011 AG5, they are nonetheless considering various ways to combat the potential threat, including the use of nuclear weapons to break it into smaller, less threatening rocks or sending a probe to the asteroid to alter its course.
    Waugh claims that the asteroid — one of approximately 19,000 mid-sized ones within 120 million miles of Earth, according to NASA — “has the potential to wipe out millions of lives if it landed on a city,” but notes that it is “far smaller than the nine mile wide asteroid which is believed to have led to the extinction of the dinosaurs 65 million years ago.”

    Source
    Impact Risk
     
  7. ancestor(v)

    ancestor(v) Admin
    Staff Member

    Jun 26, 2007
    2,829
    5,538
    90
    Thank you R29K for all the articles/sources - always interesting to read :cool:
     
  8. R29k

    R29k MDL GLaDOS

    Multiverse = Many Worlds, Say Physicists

    Two of the most bizarre ideas in modern physics are different sides of the same coin, say string theorists
    The many worlds interpretation of quantum mechanics is the idea that all possible alternate histories of the universe actually exist. At every point in time, the universe splits into a multitude of existences in which every possible outcome of each quantum process actually happens.
    So in this universe you are sitting in front of your computer reading this story, in another you are reading a different story, in yet another you are about to be run over by a truck. In many, you don't exist at all.
    This implies that there are an infinite number of universes, or at least a very large number of them.
    That's weird but it is a small price to pay, say quantum physicists, for the sanity the many worlds interpretation brings to the otherwise crazy notion of quantum mechanics. The reason many physicists love the many worlds idea is that it explains away all the strange paradoxes of quantum mechanics.
For example, the paradox of Schrödinger's cat--trapped in a box in which a quantum process may or may not have killed it--is that an observer can only tell whether the cat is alive or dead by opening the box.
    But before this, the quantum process that may or may not kill it is in a superposition of states, so the cat must be in a superposition too: both alive and dead at the same time.
    That's clearly bizarre but in the many worlds interpretation, the paradox disappears: the cat dies in one universe and lives in another.
Let's put the many worlds interpretation aside for a moment and look at another strange idea in modern physics. This is the idea that our universe was born along with a large, possibly infinite, number of other universes. So our cosmos is just one tiny corner of a much larger multiverse.
    Today, Leonard Susskind at Stanford University in Palo Alto and Raphael Bousso at the University of California, Berkeley, put forward the idea that the multiverse and the many worlds interpretation of quantum mechanics are formally equivalent.
    But there is a caveat. The equivalence only holds if both quantum mechanics and the multiverse take special forms.
    Let's take quantum mechanics first. Susskind and Bousso propose that it is possible to verify the predictions of quantum mechanics exactly.
    At one time, such an idea would have been heresy. But in theory, it could be done if an observer could perform an infinite number of experiments and observe the outcome of them all.
    But that's impossible, right? Nobody can do an infinite number of experiments. Relativity places an important practical limit on this because some experiments would fall outside the causal horizon of others. And that would mean that they couldn't all be observed.
    But Susskind and Bousso say there is a special formulation of the universe in which this is possible. This is known as the supersymmetric multiverse with vanishing cosmological constant.
    If the universe takes this form, then it is possible to carry out an infinite number of experiments within the causal horizon of each other.
    Now here's the key point: this is exactly what happens in the many worlds interpretation. At each instant in time, an infinite (or very large) number of experiments take place within the causal horizon of each other. As observers, we are capable of seeing the outcome of any of these experiments but we actually follow only one.
    Bousso and Susskind argue that since the many worlds interpretation is possible only in their supersymmetric multiverse, they must be equivalent. "We argue that the global multiverse is a representation of the many-worlds in a single geometry," they say.
    They call this new idea the multiverse interpretation of quantum mechanics.
    That's something worth pondering for a moment. Bousso and Susskind are two of the world's leading string theorists (Susskind is credited as the father of the field), so their ideas have an impeccable pedigree.
    But what this idea lacks is a testable prediction that would help physicists distinguish it experimentally from other theories of the universe. And without this crucial element, the multiverse interpretation of quantum mechanics is little more than philosophy.
    That may not worry too many physicists, since few of the other interpretations of quantum mechanics have testable predictions either (that's why they're called interpretations).
    Still, what this new approach does have is a satisfying simplicity-- it's neat and elegant that the many worlds and the multiverse are equivalent. William of Ockham would certainly be pleased and no doubt, many modern physicists will be too.
    Ref: arxiv.org/abs/1105.3796: The Multiverse Interpretation of Quantum Mechanics
    Source
     
  9. R29k

    R29k MDL GLaDOS

    Holey chip! IBM drills holes into optical chip for terabit-per-second speed

    By Jon Brodkin
    IBM researchers have built a prototype optical chip that can transfer a terabit of data per second, using an innovative design requiring 48 tiny holes drilled into a standard CMOS chip, facilitating the movement of light. Much faster and more power-efficient than today's optics, the so-called "Holey Optochip" technology could enhance the power of supercomputers.
    Optical chips, which move data with light instead of electrons, are commonly used for interconnects in today's supercomputers and can be found in IBM systems such as Power 775 and Blue Gene. Optical technology is favored over electrical for transmitting high-bandwidth data over longer distances, which is why it's used for telecommunications networks, said IBM Optical Links Group manager Clint Schow.
    As speed and efficiency improve, optical technology has become more viable in smaller settings. "I think the number one supercomputer ten years ago had no optics in it whatsoever, and now you're seeing large scale deployments, mostly for rack-to-rack interconnects within supercomputers," Schow told Ars. "It's making its way deeper into the system and getting closer and closer to the actual processor."
    With the Holey Optochip, Schow said "our target is the bandwidth that interconnects different processors in the system—not the processor talking to its memory, but a processor talking to another processor in a large parallel system."
    The Holey Optochip uses 4.7 watts in delivering nearly one trillion bits per second, enough to download 500 HD movies. At 5.2 mm by 5.8 mm, it's about one-eighth the size of a dime.
    IBM built the chip using standard parts so it can make its way to market relatively quickly. "The heart of the chip is a single CMOS, plain-Jane unmodified process chip," Schow said. "That base chip has all the electronic circuit functions to complete the optical link. So it's got drivers that modulate vertical cavity lasers and receiver circuits that convert photocurrent from a detector into a usable electrical signal."
    Drilling holes into the chip lets IBM use industry-standard, 850-nanometer vertical cavity surface emitting lasers (VCSEL), and photodiode arrays, both soldered on to the chip. The holes allow optical access through the back of the chip to the transmitter and receiver channels, making it more compact.
    "You need the holes because if you have the silicon substrate the chip is made out of, the light can't go through it," Schow said. "You need to make a hole to let the light pass through." An IBM spokesperson further explains that "the optical devices are directly soldered to the front of the CMOS IC (integrated circuit) and the emission/detection of the optical signals is pointed toward the back of the chip. The holes are etched through the chip, one under each laser and detector to allow the optical signals to pass through the chip itself."
    Photomicrograph of the back of the Holey Optochip with lasers and photodetectors visible through substrate holes.
A standard optical chip today includes 12 channels (the links between transmitters and receivers), each moving 10 gigabits per second, he said. The IBM Holey Optochip has 48 channels, each moving 20 gigabits per second, for a total of 960 gigabits, just below a terabit. IBM is unveiling the prototype chip today at the Optical Fiber Communication Conference in Los Angeles, calling it "the first parallel optical transceiver to transfer one trillion bits of information per second."
    Photomicrograph of Holey Optochip, with 48 holes allowing optical access through the back of the chip to receiver and transmitter channels.
    "That's four times as many channels running twice as fast, and the power efficiency is better by at least a factor of four," Schow said. The whole chip uses more power than current ones, but transmits much more data, resulting in better efficiency as measured by watts per bit.
    The speed of each channel itself isn't breaking any records, given that IBM built the prototype chips using standard components. Schow noted that "there's development now that will push channel data rates to 25 gigabits per second in the near future." What's impressive about the Holey Optochip is the design, allowing optimization of density, power, and bandwidth all in one little package.
    "You can go really fast if you don't care about power, and you can be really power-efficient if you don't care about speed," Schow said. Getting both facets right can bring an order-of-magnitude improvement to overall performance, he said. This is the second generation of the holey prototype—the first produced speeds of 300 gigabits per second 2010. Back in 2007, Ars reported on a previous, 160Gbps optical networking chip from Big Blue.
    Although IBM itself won't be mass-producing the chips, Schow said they could become commercially available within a year or two. Price points could be in the $100 to $200 range, he speculated.
    "We're in a group within IBM Research, looking at communications technologies we'll need for future computers, particularly for crunching big data, and analytics applications when you have to have tons of bandwidth in the system," he said. "Our mission is to prototype technologies and show what's possible, to drive the industry to commercial solutions that we can then procure and put into our systems."
    IBM researchers also recently made a breakthrough in quantum computing, which could eventually lead to computers exponentially more powerful than today's, as our friends at Wired reported.

    Source
     
  10. R29k

    R29k MDL GLaDOS

    Runaway Planets At 30 Million Miles Per Hour Possible

    Harvard-Smithsonian Center for Astrophysics researchers have determined that some planets are flying around in space at 30 million miles per hour.
    These hypervelocity planets are produced in the same way as the hypervelocity star that was found seven years ago traveling around the Milky Way Galaxy at 1.5 million miles per hour.
    “These warp-speed planets would be some of the fastest objects in our Galaxy,” astrophysicist Avi Loeb of the Harvard-Smithsonian Center for Astrophysics said in a recent statement. “If you lived on one of them, you’d be in for a wild ride from the center of the galaxy to the Universe at large.”
    A hypervelocity star forms as a double-star system wanders too close to the supermassive black hole at the center of the galaxy. Strong gravitational forces rip the stars apart from each other, sending one away at high speeds while the other orbits around the black hole.
    The researchers in the study simulated what would happen if each of the stars had a planet or two orbiting it.
    They found that the star that is ejected outward could carry its planets along for the ride, and the star sucked in to the black hole’s orbit could have its planets torn away and tossed into interstellar space at tremendous speeds.
    A typical hypervelocity planet would shoot outward at 7 to 10 million miles per hour, but the researchers found that a small fraction of them could gain speeds of up to 30 million miles per hour.
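    For scale, the speeds quoted above can be converted to kilometers per second and compared with the speed of light. This is a quick sketch; the mile-per-hour figures are the article's, and the conversion constants are standard values:

```python
# Unit-conversion sketch for the speeds quoted above. The speeds come from
# the article; the constants are standard reference values.
MPH_TO_KM_S = 1.609344 / 3600      # statute miles/hour -> km/s
C_KM_S = 299_792.458               # speed of light in km/s

def mph_to_km_s(mph):
    return mph * MPH_TO_KM_S

# Hypervelocity star, typical runaway planet, fastest runaway planet.
for mph in (1.5e6, 7e6, 30e6):
    km_s = mph_to_km_s(mph)
    print(f"{mph:>12,.0f} mph  ~ {km_s:8.1f} km/s  ~ {km_s / C_KM_S:.2%} of c")
```

    The fastest case works out to roughly 13,400 km/s, or about 4.5 percent of the speed of light.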
    “Other than subatomic particles, I don’t know of anything leaving our galaxy as fast as these runaway planets,” lead author Idan Ginsburg of Dartmouth College said.
    Astronomers do not currently have the instruments to detect a lone hypervelocity planet because they are so dim, distant and rare. However, they do have a chance to spot a hypervelocity planet still orbiting around its hypervelocity star.
    The researchers found that the chances of spotting a hypervelocity planet orbiting a star would be around 50 percent.
    “With one-in-two odds of seeing a transit, if a hypervelocity star had a planet, it makes a lot of sense to watch for them,” said Ginsburg.
    The researchers will be publishing their findings in the Monthly Notices of the Royal Astronomical Society.

    Image Caption: In this artist’s conception, a runaway planet zooms through interstellar space. New research suggests that the supermassive black hole at our galaxy’s center can fling planets outward at relativistic speeds. Eventually, such worlds will escape the Milky Way and travel through the lonely intergalactic void. In this illustration, a glowing volcano on the planet’s surface hints at active plate tectonics that may keep the planet warm. Credit: David A. Aguilar (CfA) [ High-res Image ]

    Source: redOrbit (http://s.tt/17WUe)
     
  11. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Study Claims Human Predecessors Used Fire 1 Million Years Ago

    An international team of researchers say that they have identified one-million-year-old archaeological evidence that human ancestors used and controlled fire, suggesting that our predecessors may have mastered flame approximately 300,000 years earlier than previously thought.
    The study, which was led by researchers from the University of Toronto and Hebrew University of Jerusalem and published in the journal Proceedings of the National Academy of Sciences (PNAS), resulted in the discovery of microscopic traces of wood ash alongside animal bones and stone tools at a cave on the edge of the Kalahari in South Africa, the Canadian university said in an April 2 press release.
    In an interview with the CBC’s Emily Chung, University of Toronto archeologist and project co-leader Michael Chazan said that in their research, he and his colleagues used materials excavated from Wonderwerk Cave.
    The materials were encased in plastic, cut into thin slices, and then examined with microscopes. During their examination, Chazan told Chung that they discovered ash from grass, leaves, and brush, as well as charred bone fragments and stone tools which showed signs of exposure to flame.
    He also told the CBC that the formation of the ash showed them that it had to have originated from within the cave, because the edges were angular, and had they blown in from outside, those edges would have been rounded and showed signs of wear from the environmental conditions.
    In order to date the materials, the researchers were forced to use a pair of different geological methods, because according to Chung, they were too old to use radiocarbon dating. The results, she said, “were consistent with the type of stone tools found with the ash, which were known to be made by Homo erectus.”
    According to Carolyn Y. Johnson of the Boston Globe, the materials were analyzed by researchers at a Boston University laboratory, including research assistant professor Francesco Berna.
    Johnson said that Berna and his associates “weren’t looking for evidence of fire” while investigating the find, and that the discovery “was so unexpected” that Berna “found himself trying to poke holes in his provocative observation.”
    Further study allowed them to rule out other causes, such as the fire being created by spontaneously combusting bat droppings, and ultimately Berna said that he and his team realized that “scientifically speaking,” it became obvious that “there was fire burning inside the cave of plant material… while humans were dropping tools and bones. It’s not one episode,” according to what he told the Boston Globe.
    Along with Chazan and Berna, other individuals involved with the research include Paul Goldberg of Boston University; James Brink and Sharon Holt of the National Museum, Bloemfontein; Marion Bamford of the University of Witwatersrand; and Liora Kolska Horwitz, Ari Matmon, and Hagai Ron of Hebrew University. Funding for the study was provided by the Social Sciences and Humanities Research Council of Canada, the National Science Foundation (NSF) and The Wenner-Gren Foundation.
    “The control of fire would have been a major turning point in human evolution,” Chazan said in a statement. “The impact of cooking food is well documented, but the impact of control over fire would have touched all elements of human society. Socializing around a camp fire might actually be an essential aspect of what makes us human.”

    Source: RedOrbit Staff & Wire Reports
    Source: redOrbit (http://s.tt/18Jug)
     
  12. WIKIMACK

    WIKIMACK MDL Expert

    Nov 10, 2011
    1,535
    1,006
    60
    Hi, thanks
    For sure, very interesting :vertag:
     
  13. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Elusive Majorana fermions may be lurking in a cold nanowire

    By Matthew Francis
    [​IMG]
    A nanowire (silver color) is attached to a gold electrode and rests against a superconductor (blue). The combination produces quasiparticles that may be Majorana fermions.
    Inside materials, the interactions between groups of electrons and atoms in the crystal lattice can give rise to a variety of interesting phenomena. Their collective behavior, especially at low temperatures, can give rise to quasiparticles: particle-like excitations that have strikingly different properties than the electrons that form them. Quasiparticles have been discovered that have behaviors predicted by particle physics, but have not been observed in particle colliders.
    Researchers in the Netherlands have now produced quasiparticles that act like Majorana fermions: electrically-neutral particles that are their own antiparticles, such that if two collide, they annihilate. The existence of Majorana fermions was first predicted in the 1930s, but no individual particles are known to behave that way. V. Mourik et al. found a quasiparticle version by constructing a very thin wire—a nanowire—of semiconductor material and connecting it to a superconductor. The specific electronic properties of the hybrid system gave rise to a pair of zero-velocity quasiparticles at two positions in the nanowire, and these showed behavior consistent with Majorana fermions. Some researchers suggest that quasiparticles of this type would be very useful in quantum computing applications.
    Fermions vs. Bosons
    Particles and quasiparticles come in two basic types, fermions and bosons, depending on the type of spin they have. The elementary particles of matter (electrons, quarks, and neutrinos) are fermions, while photons and other force carriers are bosons. Particles are paired with antiparticles—antimatter electrons are positrons, etc.—but photons are their own antiparticles. To annihilate, particles and antiparticles must have opposite charge, so Majorana fermions, which are their own antiparticles, need to be electrically-neutral. At present, no fermion is known to be its own antiparticle, although neutrinos may have this property (we don't yet know).
    Theorists predicted the existence of Majorana fermion quasiparticles in materials known as topological superconductors, in which the interior of the material has zero electrical resistance, but the outside behaves like an ordinary conductor. To create a topological superconductor, Mourik et al. connected a semiconducting indium antimonide (InSb) nanowire between a gold electrode and the edge of a superconductor (NbTiN). They deposited the whole system onto a silicon substrate, which itself was printed with a set of logic circuits that read the electronic properties of the wire.
    By measuring the relationship between current and voltage at various positions along the nanowire, the researchers found a strong response at two points where the Majorana fermions are expected to appear. These quasiparticles didn't move under the influence of either a magnetic field or an additional current, indicating that they are electrically neutral and trapped in place.
    This effect was strongest at 60 millikelvins (60 mK, which is 0.06 degrees above absolute zero) and vanished entirely at temperatures higher than 300 mK. Additionally, Mourik et al. confirmed that these Majorana quasiparticles failed to appear when the superconductor was replaced with another gold electrode, showing that the combination of the nanowire with the superconductor was necessary to create the fermions.
    As the researchers themselves note, these results are consistent with Majorana fermions, but they have not been able to test for the presence of some of the predicted properties. Specifically, while the quasiparticles in the nanowire are electrically neutral and trapped at the expected positions, they should also behave in a certain way if their positions are swapped. While that can't be directly tested in this device, this fundamental property of Majorana fermions can be tested using a superconducting device known as a Josephson junction, a standard technique.
    Since the quantum states of Majorana quasiparticles in topological superconductors are not independent of each other, the total system represents a qubit (quantum bit), which has been proposed as another way to achieve working quantum computers (although that may be overselling them). Apart from that, from a pure physics point of view, this result is very important: if these quasiparticles indeed turn out to be Majorana fermions, that will be the first confirmed detection in any physical system.
    Science, 2012. DOI: 10.1126/science.1222360 (About DOIs).
    Photograph by kouwenhovenlab.tudelft.nl

    Source
     
  14. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Plans afoot to tap Iceland's geothermal energy with 745-mile cable

    By James Holloway

    [​IMG]

    Nesjavellir Geothermal Power Station: Iceland's second largest geothermal power station

    A proposed high voltage electrical cable running across the floor of the North Atlantic Ocean to tap Iceland's surplus volcanic geothermal energy would become the world's longest underwater electrical cable, if it goes ahead. The cable would be a significant step towards a pan-European super grid, which may one day tap renewable sources as far afield as Scandinavia, North Africa and the Middle East. It's argued that such a grid would be able to widely transmit energy surpluses from active renewable sources, thereby alleviating the need for countries to use (or build) back-up fossil fuel power stations to cater for peaks in demand when more local renewable sources aren't particularly productive.
    If a European super grid comes to fruition, energy surpluses will be big business. So it's hardly surprising that both Germany and the United Kingdom are jostling for position at the other end of the Icelandic cable, with Norway and the Netherlands also having been mooted as potential connectees. That would necessitate a cable at least 745 miles (1198 km) in length, making it easily the longest electrical cable in the world.
    The scheme, first proposed in March of last year by Iceland's largest energy producer Landsvirkjun, would aim to export five billion kilowatt-hours of energy per year for an estimated $350 million to $448 million return. A subsequent feasibility study found no fatal flaws in the idea, and UK energy minister Charles Hendry is set to fly to Iceland in May to woo the relevant authorities.
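    Dividing the projected revenue by the projected exports gives a rough implied price per kilowatt-hour. This is a back-of-the-envelope sketch using only the article's figures:

```python
# Back-of-the-envelope: implied revenue per exported kWh.
# The figures (5 billion kWh/year, $350-448 million/year) come from the article.
kwh_per_year = 5e9
revenue_low, revenue_high = 350e6, 448e6

price_low = revenue_low / kwh_per_year    # dollars per kWh
price_high = revenue_high / kwh_per_year
print(round(price_low, 3), round(price_high, 3))  # → 0.07 0.09
```

    So the proposal implies wholesale revenue on the order of 7 to 9 cents per kilowatt-hour.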
    An electrical link to Iceland is one of several international interconnectors either proposed or in progress in Europe, in addition to the fifteen or so routes that exist already (existing and planned connections can be seen on this map). Norway is a focal point for many of the confirmed forthcoming interconnectors which, unlike the proposed Iceland link, would see a two-way exchange of energy designed to further boost its energy security and that of its neighbors. The country is already linked via four North Sea interconnectors to Denmark, Germany, the UK, and the Netherlands—the latter being the current world record holder for longest submarine power link at 360 miles (580 km).
    More ambitious are the proposed DESERTEC and Medgrid schemes to interconnect countries and renewable energy sources on both the European and African sides of the Mediterranean Sea. German in origin, DESERTEC would involve the investment of more than $500 billion by 2050 in 6500 square miles (nearly 17,000 sq km) of solar thermal collectors (plus a bit of wind) around the edge of the Sahara Desert. The scheme could, it's suggested, supply 15 percent of mainland Europe's energy needs. Facts and figures for the French Medgrid scheme (conceptually very similar to DESERTEC) are rather more elusive, and interpretation varies as to whether the two schemes are complementary or in competition.
    [​IMG]
    Conceptual sketch of the proposed DESERTEC energy system.
    Credit: the Trans-Mediterranean Renewable Energy Cooperation (TREC)
    A problem inherent to all long-distance electrical transmission is energy loss due to the resistance of the cables. Thanks to Joule's first law, the problem is minimized by stepping up voltage, with a ten-fold increase resulting in a 100-fold loss reduction. The Norway-Netherlands link transmits AC at 300,000 and 400,000 volts.
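    The voltage/loss relationship follows from Joule's first law: resistive loss is P_loss = I²R, and for a fixed power delivered, I = P/V, so losses scale as 1/V². A quick sketch with illustrative numbers (the power and resistance values below are made up, not actual project figures):

```python
# Illustrative sketch of resistive line loss: P_loss = I^2 * R, with I = P / V.
def line_loss_watts(power_w, voltage_v, resistance_ohm):
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

p, r = 700e6, 3.0                         # 700 MW through 3 ohms of cable
loss_lo_v = line_loss_watts(p, 40e3, r)   # transmitted at 40 kV
loss_hi_v = line_loss_watts(p, 400e3, r)  # transmitted at 400 kV (ten-fold step-up)
print(loss_lo_v / loss_hi_v)  # → 100.0 (ten times the voltage, 1/100th the loss)
```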
    Even the proposed Iceland interconnector, accounting for the worst case scenario of a 930-mile (1500-km) cable, falls well within the bounds of profitability according to the findings of a 1980s study which calculated the longest cost-effective distances for electrical transmission to be 2500 miles (4000 km) for AC and 4300 miles (7000 km) for DC. Official costs are yet to be tabled for the project.
    The exportation of renewable energy is a logical next step for Iceland, which has done a grand job of getting its own house in order. The country currently meets 81 percent of its energy needs with domestic renewable sources—thanks in no small part to the country's tremendous geothermal assets, sitting as it does on the Mid-Atlantic Ridge (which can have occasional less welcome consequences). The country plans to be free of fossil fuels in the near future.
    Photograph by ThinkGeoEnergy

    Source

     
  15. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Superoscillatory lens captures evanescent waves for super images

    By Chris Lee

    [​IMG]
    Details like the cellular components of this Tetrahymena may come into sharper focus with a superoscillatory lens
    People think I'm compensating, but I'm not. I just happen to like seeing tiny objects in exquisite detail. So my obsession—one that I inflict on others as often as possible—continues to grow. My microscopy obsession isn't all personal, though. The truth is that images are powerful. They explain, they inspire, and they help us cope with scales that would otherwise be incomprehensible. In short, images and imaging devices are awesome.
    Making images better is perhaps the only thing more awesome than the awesomeness of images themselves. When a paper on the first functioning superoscillatory lens was published in Nature Materials, it proved irresistible to me.
    I can see that perfectly, why can't you?
    Before we get to the actual point of the article, let me entertain you by going on at length about the relationship between a lens and the smallest features we can see with that lens. The concept we need here is spatial frequencies. I think most of us are familiar with the idea of temporal frequencies. The notes on a piano are all at a well-defined set of temporal frequencies. The frequency corresponds to the time it takes for the pressure to go from high to low and back to high again. Higher frequencies correspond to a shorter time to complete a cycle.
    We can do the same thing in space as well. If we freeze a light wave in time, then the distance between the peaks of the electric field of the wave provides a consistent spatial period, which we can turn into a spatial frequency. Unlike time, however, we are able to perceive three spatial dimensions—depending on how we look at it, a light wave can have three spatial frequencies. These frequencies will change depending on our point of view, but no matter which way we look at the wave, the frequencies must add up to the same maximum value.
    To get an idea of how this limits the detail in an image, imagine a beam of light hitting an object, and collecting the scattered light with a lens. The scattered light carries the details of the object in its spatial frequencies. But from the point of view of the lens, the light waves with the very highest spatial frequencies are those that don't travel towards the lens; they travel parallel to it instead. The very highest spatial frequency information that is transmitted by the lens is given by those light waves that just barely pass through its edges.
    Unfortunately, the very finest details of the object you're imaging are the ones that possess high spatial frequencies. In other words, to see detail, we need to collect high frequency information, but the lens cuts off the highest frequencies, blurring the image.
    Even if we had a magic lens that collected all the scattered light, that maximum value—a value given by the wavelength of the light used to illuminate the object—would still limit the details we could perceive. Or at least under ordinary circumstances.
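    The "lens as a spatial-frequency cutoff" idea described above can be sketched numerically: low-pass filtering a signal's spectrum discards its fine detail. This is a toy one-dimensional example, not the paper's method:

```python
import numpy as np

# Toy 1-D illustration: treat the lens aperture as a low-pass filter on
# spatial frequencies. Cutting the high frequencies removes fine detail
# and leaves only the coarse shape.
n = 256
x = np.arange(n)
coarse = np.sin(2 * np.pi * 4 * x / n)    # 4 cycles across the field: coarse shape
fine = np.sin(2 * np.pi * 40 * x / n)     # 40 cycles: fine detail
scene = coarse + fine

spectrum = np.fft.fft(scene)
freqs = np.fft.fftfreq(n, d=1.0 / n)      # frequencies in cycles per field
spectrum[np.abs(freqs) > 20] = 0          # the "lens" passes only |f| <= 20
image = np.fft.ifft(spectrum).real        # what the lens lets us see

print(np.allclose(image, coarse, atol=1e-9))  # → True: the fine detail is lost
```

    The reconstructed "image" matches only the coarse component; everything above the cutoff frequency is simply gone, which is exactly the blurring described above.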
    Effervescent evanescent waves
    I need to confess something at this point: I lied. The maximum value for the spatial frequencies? That doesn't exist. Yet everything I said above is also true. So what happens to those light waves with spatial frequencies higher than the maximum value? These waves, called evanescent waves, simply don't propagate. Instead, their amplitude falls off exponentially with distance from the object. If you stuck your lens so close to the object (a distance of about one wavelength of light), then you would collect the evanescent waves and be able to perceive far more detail.
    This isn't a very convenient way to image. Yet, collecting the contributions of evanescent waves is what a superoscillatory lens does.
    It can do this because, although the amplitude of the evanescent wave drops very rapidly, it never quite reaches zero. For any normal lens, the contribution from evanescent waves is swamped by everything else. The job of the superoscillatory lens is to separate the contribution from these high frequency components so that they can be detected separately.
    Do you remember how to superoscillate?
    The potential for superoscillations to provide a high resolution imaging tool was pointed out by theoreticians Berry and Popescu. The trick, it seems, is to create a lens that gets all the contributions from the evanescent waves to add up in phase so that they produce an arbitrarily small spot.
    But this spot will be surrounded by a very intense halo that corresponds to the light we normally image with. To make matters worse, the smaller that central spot, the weaker it becomes, making it harder and harder to implement a useful lens. But if the halo and the central spot can be separated, it may not matter how weak the central spot is.
    And, in a sense, that is what the new paper is about. The superoscillatory lens consists of a series of rings milled into a piece of glass coated with aluminum. The light passing through the rings is scattered, and the interference between the light from the different rings produces a bright spot at a central focus, and a broad halo around the central focus. This focus is 10 micrometers from the lens, so the individual evanescent waves that make up that spot have an amplitude that is some 10 million times weaker than all the rest of the light. By adding them all in phase, the central spot isn't much weaker than the surrounding halo.
    The researchers demonstrated the lens' imaging capabilities by snapping pictures of features that they couldn't resolve with conventional light-based microscopy. In the end, their lens topped out at about 100nm, which is a factor of three better than a normal microscope.
    The best part is that this is just the beginning. Superoscillations were first proposed in 2006. The first evidence for the existence of superoscillations turned up in 2007. It has only taken six years to go from theory to the first lens that could conceivably be used in a device. I imagine that smaller and more detailed things are on the way.
    Nature Materials, DOI: 10.1038/nmat3280
    Photograph by ucdenver.edu


    Source :eek:
     
  16. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Flame Virus ‘Most Sophisticated’ Cyber Weapon Ever Used

    Researchers at Kaspersky Lab announced on Monday that they had uncovered the “most sophisticated cyber weapon” ever unleashed.
    The malware, dubbed Flame, is a highly complex malicious program with vast espionage capabilities that are actively targeting sensitive information across the Middle East.
    The sophistication and functionality of the virus exceed those of all other cyber weapons known to date, Kaspersky said.
    “Flame can easily be described as one of the most complex threats ever discovered. It’s big and incredibly sophisticated,” wrote Alexander Gostev, Kaspersky Lab’s head of global research and analysis, in a blog post describing the cyber weapon.
    “It pretty much redefines the notion of cyberwar and cyberespionage.”
    Flame came to the attention of Kaspersky Lab after the UN’s International Telecommunication Union sought the company’s help in finding an unknown piece of malware that was deleting sensitive information across the Middle East. While searching for that code, nicknamed Wiper, Kaspersky uncovered the new malware, codenamed Worm.Win32.Flame.
    The researchers at Kaspersky describe Flame as a sophisticated attack toolkit — a backdoor Trojan with worm-like features that allows the virus to replicate in a local network and on removable media when commanded by its master.
    Once deployed, Flame begins “a complex set of operations,” and can sniff network traffic, gather data files, obtain screenshots, record audio conversations, remotely change settings on computers, copy instant messaging chats, intercept a keyboard and much more, Kaspersky said. This data is then available via Flame’s command-and-control servers.
    Operators can also choose to upload further modules that expand Flame’s functionality.
    There are about 20 modules in total, Kaspersky said, and the purpose of most of them is still being explored.
    Flame differs from other backdoor Trojans by its use of the Lua programming language, which is uncommon in malware. It is also remarkable for its large size — about 100 times that of most malicious software. Modern malware is typically small, and written in compact programming languages that make it easy to conceal. In fact, the practice of concealment through large amounts of code is one of the specific new features in Flame, Kaspersky said.
    The completeness of Flame’s audio data recording capabilities, which allow the virus to steal data in many different ways, is also fairly new, Kaspersky said.
    Experts said the worm is 20 times more powerful than any other known cyber warfare program — including the Stuxnet virus that attacked Iranian nuclear systems in 2010 — and could only have been created by a state.
    Kaspersky made the 20-gigabyte virus available to other researchers, saying it did not fully understand its scope.
    Flame is the third cyber attack weapon targeting systems in the Middle East to be exposed in recent years. The Russian security firm said the program appeared to have been released five years ago, and had infected machines in Iran, Israel, Sudan, Syria, Lebanon, Saudi Arabia and Egypt.
    “If Flame went on undiscovered for five years, the only logical conclusion is that there are other operations ongoing that we don’t know about,” Kaspersky senior security researcher Roel Schouwenberg told The Telegraph’s Damien McElroy and Christopher Williams.
    Iran ordered an emergency review of its official computer systems upon news of Flame’s discovery.
    Mr. Schouwenberg said there was evidence to suggest the malware was commissioned by the same nation or nations that were behind Stuxnet.
    Iran’s Computer Emergency Response Team said Flame was “a close relation” of Stuxnet, and that organizations had been given software to detect and remove the malware earlier this month.
    Flame does not spread itself automatically, but only when hidden controllers permit it to do so. The malware’s unprecedented layers of software allow it to penetrate remote computer networks undetected.
    The virus infects Microsoft Windows machines, has five encryption algorithms and sophisticated data storage formats.
    Components of Flame enable those behind it, who use a network of rapidly-shifting “command and control” servers, to direct the virus to turn microphones into listening devices, steal documents and log keystrokes.
    Once a machine is infected, additional modules can be added to the system allowing the machine to undertake specific tracking projects.
    “It took us 6 months to analyze Stuxnet. [This] is 20 times more complicated,” said Eugene Kaspersky, the founder of Kaspersky Lab.
    Researchers at Kaspersky Lab said they would share a full list of the files and traces with technology professionals in the coming weeks.


    Source: RedOrbit Staff & Wire Reports
    Source: redOrbit (http://s.tt/1cUm9)
     
  17. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Apple 1 motherboard auctioned off for $374,500

    A letter from Steve Jobs also found a buyer at $27,000.

    by Megan Geuss - June 15 2012, 3:27pm AST


    [​IMG]
    The exposed Apple 1 motherboard sold today.

    Today Sotheby’s auctioned off one of the world’s last functioning Apple 1 motherboards for $374,500, more than double the high estimate—$180,000—that the auction house had made for the early Apple computer.
    Apple only made about 200 of its first computers, which sold in 1976 for $666.66. Each motherboard was assembled by hand by Steve Wozniak and the handful of original Apple employees, and did not include a monitor or keyboard. Sotheby’s estimated that only 50 Apple 1s survive, and only six are known to be in working condition.
    The Apple 1 came with 8KB of RAM, and a MOS 6502 8-bit, 1MHz CPU. It also included the cassette interface, the operation manual for the Apple 1, as well as a preliminary Apple BASIC user manual, according to Tom’s Hardware. A similar Apple 1 was sold in 2010 for $213,600, but that unit came with the original invoice, so Sotheby’s estimated today’s motherboard would sell for less. They clearly underestimated the value of a relic from a company that has grown from a guy's garage to be one of the most powerful companies in the world.
    The BBC reported that “Sotheby's said there was a battle between two parties for the item which also included the original manuals. A set of bids were executed by the auctioneer on behalf of an absentee collector, but a telephone bidder proved more persistent and eventually clinched the sale.”
    Sotheby’s also auctioned off a four-page note hand-written by Steve Jobs when he was at Atari in 1974. The auction house estimated it would go for $15,000, but a buyer picked it up for $27,000 today. In it, Jobs discussed how to improve Atari’s World Cup soccer game, and ended the letter with a Buddhist mantra that translates as, "Going, going, going on beyond, always going on beyond, always becoming Buddha."
    Sotheby’s has not revealed the identities of the winners of either auction.

    Source :eek:
     
  18. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    Dinosaur bones reveal evidence against cold blooded creatures

    Modern mammals show similar growth patterns in their bones.

    by Scott K. Johnson - June 28 2012, 4:20pm AST

    [​IMG]
    Alpine red deer bone showing dark lines of arrested growth.
    Meike Köhler

    Who can get enough of dinosaurs? We’re curious about what color they were, how fast they moved, and whether they could really spit venom at Newman from Seinfeld. One of the most fundamental questions is whether they were cold-blooded or warm-blooded. Just because some of them looked like fearsome, giant lizards doesn’t mean they had to bask to raise their body temperature. After all, birds, likely their lone surviving descendants, are warm-blooded.
    Researchers have looked at this question from many different angles (including temperature measurements from teeth, as we reported last year). One intriguing line of evidence has come from the microscale structure of their bones. Cross sections through fossils from most groups of dinosaurs (except sauropods) reveal cycles in growth, including dark lines where growth temporarily ceased.
    This has long been cited as strong evidence in favor of cold-bloodedness, as the bones of modern cold-blooded species also show annual cycles. Since their body temperature is at the whim of the seasons, their growth slows during non-ideal conditions. Warm-blooded animals, on the other hand, keep their body temperature constant, and so their bone growth, too, remains constant. Or so the story went.
    Some argued that the dinosaur bones actually showed signs of very high rates of growth in between the cyclical lulls. The high metabolism required to do so is more characteristic of warm-blooded animals, they said. But the cold-blooded camp maintained that only cold-blooded animals showed the alternating growth patterns.
    As it turns out, that well-ordered house was built on sand. The evidence for constant bone growth in warm-blooded organisms was lacking. A paper published in the journal Nature describes a large review of ruminants (mammals that chew cud) and comes to the opposite conclusion—dinosaur bone growth looks more like warm-blooded organisms than cold-blooded ones.
    The researchers examined femurs from over 100 African and European ruminants spanning climate zones from the tropics to the arctic. They found patterns of high bone growth rates that correlated with the growing season and hiatuses in growth during the dry or cold season.
    To dig into the mechanisms driving this pattern, they used physiological data collected from Svalbard reindeer and alpine red deer. These studies measured changes in things like hormones and body temperature throughout the year.
    The data showed that these species save energy by slowing their metabolism (and growth) when food is scarce, even reducing body temperature by a small amount (less than 1°C). During the best part of the growing season, metabolic activity kicks into high gear. The progress comes in tying that physiological strategy to the fine-scale bone structure, and showing that it’s pervasive across such a large group of warm-blooded animals.
    The researchers argue this work not only "debunks the key argument from bone histology in support of" cold-bloodedness, it also corrals the bone patterns under the umbrella of evidence for warm-bloodedness. They think there’s a good chance that dinosaurs had similar metabolic schemes as these modern ruminants, staying in tune with the seasonal availability of food.
    We’ll see if the rest of the paleontology community agrees. Odds are, some will have a bone to pick.
    Nature, 2012. DOI: 10.1038/nature11264 (About DOIs).

    Source
     
  19. R29k

    R29k MDL GLaDOS

    Feb 13, 2011
    5,042
    4,654
    180
    CERN celebrates as Higgs signal reaches significance

    A strong signal emerges as the collisions pile up.

    by John Timmer - July 4 2012, 8:36am AST


    [​IMG]
    A four-lepton decay, a possible sign of the Higgs, seen by the ATLAS detector.
    CERN
    Today, in two seminars held at CERN, the European center for particle physics, researchers announced evidence that the elusive Higgs particle has finally been discovered.
    Physics' Standard Model describes the fundamental particles that make up all matter, like quarks and electrons, as well as the particles that mediate their interactions through forces like electromagnetism and the weak force. Back in the 1960s, theorists extended the model to incorporate what has become known as the Higgs mechanism, which provides many of the particles with mass. One consequence of the Standard Model's version of the Higgs is that there should be a force-carrying particle, called a boson, associated with the Higgs field.
    For decades, physicists have been sifting through the output of colliders like the Tevatron and LEP, looking for an indication that the Higgs was present in the spray of exotic particles they detected. The closest they got was a hint of a signal that didn't rise far enough above the background. Now, in less than two years of operation, the Large Hadron Collider's detectors have found clear evidence of a particle that looks a lot like the Higgs.
    Finding the Higgs was always a matter of probability. We can't detect the particle directly, but the Standard Model tells us what its decay pathways will look like, provided we feed the equations a specific mass. So, for example, we can calculate that a Higgs boson weighing in at between 115 and 135GeV (the range suggested by the Tevatron data) should decay into two photons with some frequency, into two Z bosons with a different frequency, and into other combinations of particles with additional probabilities.
    The challenge comes from the fact that the Standard Model also predicts that processes that don't involve a Higgs will produce similar-looking patterns of particles. So, we're left with probabilities. Do we see an excess of these events that can't be accounted for by non-Higgs decays? How statistically significant is that excess?
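    The usual back-of-envelope answer to "how big an excess" is that, for a counting experiment, seeing n events over an expected background of b is roughly (n − b)/√b standard deviations. This is only a crude Gaussian approximation (the real LHC analyses use full likelihood fits), and the event counts below are invented purely for illustration:

```python
from math import sqrt

def excess_significance(n_observed: float, n_background: float) -> float:
    """Rough Gaussian significance of an excess over expected background.

    Only valid for large counts; real analyses use likelihood ratios.
    """
    return (n_observed - n_background) / sqrt(n_background)

# Hypothetical numbers: 130 candidate events where background predicts 100.
print(excess_significance(130, 100))  # 3.0 sigma
```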
    Particle physicists have settled on a specific measure of significance called five sigma (or five standard deviations) before they're willing to accept that we've spotted a new particle. When the LHC wrapped up last year, its detectors both saw a signal near 125GeV that reached nearly three sigma—tantalizing, but not enough to claim discovery. At the time, CERN's director basically said "wait until next year," when the hardware would gather far more collisions, enough to provide a greater degree of statistical certainty. To make sure that next year was worth waiting for, the LHC operators planned on running the machine both with a high number of proton bunches (which increases the total number of collisions) and at a slightly higher energy (which increases the probability that a collision will produce a heavy particle).
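    Those sigma thresholds map onto the probability that a background fluctuation alone would produce a signal at least this strong. A minimal sketch of the conversion, using only the standard library (one-sided tail of a normal distribution):

```python
from math import erfc, sqrt

def one_sided_p_value(n_sigma: float) -> float:
    """Probability of a standard normal fluctuation beyond n_sigma (one-sided)."""
    return 0.5 * erfc(n_sigma / sqrt(2))

# Three sigma is roughly a 1-in-740 chance; five sigma is
# roughly 1 in 3.5 million, hence the discovery threshold.
for n in (3, 5):
    print(f"{n} sigma -> p = {one_sided_p_value(n):.2e}")
```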
    The hardware performed brilliantly, as the LHC reached its planned luminosity quickly and started pumping out data. By June, it had already produced as many collisions as it had in all of last year, and it should double the available data again before this year's run is over.
    But the huge number of collisions created its own problems. At times, up to 30 collisions were taking place nearly simultaneously, and the computer systems had to reconstruct which signals came from which collisions and trigger the system to save the data if something looked interesting—all within a fraction of a second. According to the presentations at CERN, the software triggers were improved, the code reconstructed events faster, and the computing grid was given more sophisticated analysis tools to identify events that could come from a Higgs decay. The net result was today's announcement (and yesterday's accidental pre-announcement).
    Where do we now stand? There are a lot of ways to look at it. One is basically the probability of finding the Higgs at a specific mass. If we assume the Higgs is 125GeV, we see a signal that's a specific sigma above background. But there's no particular reason to assume 125GeV and not, say, 135GeV, and the statistics need to compensate for this (called the "look elsewhere effect"). Then there are multiple channels thanks to the different decay pathways, and two different detectors. So, for the CMS detector, the two-photon channel produces a local Higgs signal that's 4.5 sigma, but that drops to 2.5 sigma when the look elsewhere effect is considered. It's only by combining all its channels that CMS reaches 4.9 sigma, and the data from both detectors had to be combined to push things over five sigma and declare discovery.
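    Why pooling channels and detectors raises the significance can be illustrated with Stouffer's method for combining independent z-scores. To be clear, this simple formula is not what CERN actually did (their result comes from a full joint likelihood fit), and the channel values below are hypothetical:

```python
from math import sqrt

def stouffer_combine(z_scores):
    """Pool independent, equally weighted z-scores into one combined z-score."""
    return sum(z_scores) / sqrt(len(z_scores))

# Two hypothetical independent channels, each just short of discovery,
# together clear the five-sigma bar:
print(round(stouffer_combine([4.0, 4.0]), 2))  # 5.66
```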
    Using the standard way of displaying the data where green indicates one sigma and yellow two (hence the nickname "Brazil plots"), the peak looks both clean and enormous.
    [​IMG]
    That sure looks like a significant signal to me.
    CERN
    There are a number of reasons to be confident in this result. As we mentioned above, the Higgs at this mass has several different pathways that it might use to decay (two photons, two Z particles, etc.). A signal was seen in several of these channels, indicating it's not just an artifact of a specific analysis. In addition, this mass is consistent with a weaker signal seen in the Tevatron data, which not only has distinct detectors, but also collides different particles (protons and their antimatter equivalent instead of the LHC's two protons).
    The other nice thing about the expanded data is that they got rid of something that was a bit awkward in last year's data. The two detectors, ATLAS and CMS, both saw signals near 125GeV, but the peaks were on opposite sides: CMS at 124GeV, ATLAS at 127GeV. With more data, that apparent discrepancy seems to have gone away, and everyone is now saying 126GeV. (Someone noted that's roughly comparable in mass to an iodine atom.)
    So, what's next? We know we have a boson thanks to its decay pathways, and it's behaving largely as the Standard Model would predict if it were the Higgs. But the LHC should be able to produce many more of these, which will push the individual decay channels up into five-sigma territory. At that point, the numbers should tell us if there's something odd about individual decay pathways—do we see an excess of two-photon decays? Fewer four-lepton results than predicted? This will provide fine-scale tests of the Standard Model.
    In addition, we'll get a better grip on the particle's mass. Some of the decay channels we're using involve the production of neutrinos and, since we don't know how much they weigh, we can't tell how much mass and energy they carry away when a Higgs decays. That helps broaden out the mass peak. More data, particularly from those channels that don't involve neutrinos, will narrow that down.
    Further into the future, the LHC will go into a long shutdown at the end of this year, so that its hardware can be upgraded to operate at its full potential, reaching energies of 14TeV. When it comes back online in a few years, the focus will shift to seeing if there's anything out there that the Standard Model doesn't predict.
    UPDATE: CERN has indicated it will extend this year's LHC run by several months in order to get enough data to know more things about the newly discovered boson. This is the last chance they'll get before the extended shutdown for upgrades, and they probably have some sense of what it will take to push key measurements into statistical significance now.

    SOURCE
     
  20. dabits

    dabits Guest

    #100 dabits, Aug 12, 2012
    Last edited by a moderator: Apr 20, 2017
    Ramesh Raskar: Imaging at a trillion frames per second