Saturday, 24 January 2015

Chandra Celebrates the International Year of Light: image of SNR 0519-69.0




The year of 2015 has been declared the International Year of Light (IYL) by the United Nations. Organizations, institutions, and individuals involved in the science and applications of light will be joining together for this yearlong celebration to help spread the word about the wonders of light.

NASA’s Chandra X-ray Observatory explores the universe in X-rays, a high-energy form of light. By studying X-ray data and comparing them with observations in other types of light, scientists can develop a better understanding of objects like stars and galaxies that generate temperatures of millions of degrees and produce X-rays.

To recognize the start of IYL, the Chandra X-ray Center is releasing a set of images that combine data from telescopes tuned to different wavelengths of light. From a distant galaxy to the relatively nearby debris field of an exploded star, these images demonstrate the myriad ways that information about the universe is communicated to us through light.
In this image, an expanding shell of debris called SNR 0519-69.0 is left behind by the explosion of a white dwarf star in the Large Magellanic Cloud, a satellite galaxy of the Milky Way. Multimillion-degree gas is seen in X-rays from Chandra, in blue. The outer edge of the explosion (red) and stars in the field of view are seen in visible light from the Hubble Space Telescope.

Image Credit: NASA/CXC/SAO

Tuesday, 20 January 2015

Unexplained cosmic burst of radio waves deep in space


For the first time, astronomers have observed a strange phenomenon as it was happening: a ‘fast radio burst’, an extremely short, sharp flash of radio waves from an unknown source in the universe. The results have been published in the Monthly Notices of the Royal Astronomical Society.
Parkes Radio Telescope in Eastern Australia
Over the past few years, astronomers have observed a new phenomenon, a brief burst of radio waves, lasting only a few milliseconds.
It was first seen by chance in 2007, when astronomers went through archival data from the Parkes Radio Telescope in Eastern Australia.
Since then, six more such bursts have been found in the Parkes telescope’s data, and a seventh was found in data from the Arecibo telescope in Puerto Rico. Almost all of them were discovered long after they had occurred, which prompted astronomers to start looking for bursts as they happen.

Radio, X-ray and visible light

A team of astronomers in Australia developed a technique to search for these ‘Fast Radio Bursts’, so they could look for the bursts in real time. The technique worked and now a group of astronomers, led by Emily Petroff (Swinburne University of Technology), have succeeded in observing the first ‘live’ burst with the Parkes telescope. The characteristics of the event indicated that the source of the burst was up to 5.5 billion light years from Earth.
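That distance figure comes from a standard radio-astronomy measurement: lower radio frequencies arrive slightly later than higher ones after travelling through ionized gas, and the size of that delay (the dispersion measure, or DM) tracks how much material the burst crossed. Below is a minimal sketch of the cold-plasma dispersion relation in Python; the delay and band values are illustrative, not the ones measured for this burst.

```python
# Cold-plasma dispersion: lower frequencies arrive later.
# delay(ms) ~= 4.149 * DM * (nu_lo**-2 - nu_hi**-2), nu in GHz, DM in pc cm^-3.

K_DM = 4.149  # ms GHz^2 pc^-1 cm^3, standard dispersion constant

def dispersion_delay_ms(dm, nu_lo_ghz, nu_hi_ghz):
    """Arrival-time delay of nu_lo relative to nu_hi for a given DM."""
    return K_DM * dm * (nu_lo_ghz**-2 - nu_hi_ghz**-2)

def dm_from_delay(delay_ms, nu_lo_ghz, nu_hi_ghz):
    """Invert the relation: infer DM from a measured delay."""
    return delay_ms / (K_DM * (nu_lo_ghz**-2 - nu_hi_ghz**-2))

# Example: a burst sweeping across a band of roughly 1.2-1.5 GHz
# (similar to the Parkes observing band); the delay is made up.
dm = dm_from_delay(delay_ms=1200.0, nu_lo_ghz=1.2, nu_hi_ghz=1.5)
print(f"implied DM ~ {dm:.0f} pc cm^-3")
```

Roughly speaking, subtracting the Milky Way’s expected contribution leaves the extragalactic part of the DM, and that excess is what sets the “up to 5.5 billion light years” estimate.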
The electromagnetic spectrum. There are many more colors than our eyes can see: electromagnetic radiation is defined by its wavelength, and visible light is only a small part of the full spectrum. For the study of fast radio bursts, scientists looked for radio waves with the Parkes telescope, then X-rays with the Swift satellite, and then optical light with the Nordic Optical Telescope, among others.
As soon as the burst was observed and its location determined, a number of other telescopes around the world, both on the ground and in space, were alerted to make follow-up observations at other wavelengths.
“Using the Swift space telescope we can observe light in the X-ray region and we saw two X-ray sources at that position,” explains Daniele Malesani, astrophysicist at the Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen.
Then the two X-ray sources were observed using the Nordic Optical Telescope on La Palma. “We observed in visible light and we could see that there were two quasars, that is to say, active black holes. They had nothing to do with the radio wave bursts, but just happen to be located in the same direction,” explains astrophysicist Giorgos Leloudas, Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen and Weizmann Institute, Israel.
The intensity profile of the fast radio burst, showing how quickly it evolved in time, lasting only a few milliseconds. Before and after the burst, only noise from the sky was detected. (Credit: Swinburne Astronomy Productions)

Further investigation

So now what? Even though they captured the radio burst while it was happening and could immediately make follow-up observations at wavelengths ranging from infrared through visible and ultraviolet light to X-rays, they found nothing. But did they discover anything?
“We found out what it wasn’t. The burst could have hurled out as much energy in a few milliseconds as the Sun does in an entire day. But the fact that we did not see light in other wavelengths eliminates a number of astronomical phenomena that are associated with violent events such as gamma-ray bursts from exploding stars and supernovae, which were otherwise candidates for the burst,” explains Daniele Malesani.
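That Sun comparison is easy to check on the back of an envelope: the Sun’s luminosity is about 3.8 × 10^26 watts, so a day of sunshine amounts to roughly 3 × 10^31 joules, and releasing that much energy in a few milliseconds implies an enormous instantaneous luminosity. A quick sanity check, using the standard solar luminosity; the millisecond duration is the only burst-specific input:

```python
# Back-of-envelope check of "a day's solar output in a few milliseconds".
L_SUN = 3.8e26               # solar luminosity, watts
day_energy = L_SUN * 86400   # joules radiated by the Sun in one day

burst_duration = 5e-3        # seconds, "a few milliseconds"
burst_luminosity = day_energy / burst_duration

print(f"energy: {day_energy:.2e} J")
print(f"implied luminosity: {burst_luminosity:.2e} W "
      f"(~{burst_luminosity / L_SUN:.0e} x the Sun)")
```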
But the burst left another clue. The Parkes detection system captured the polarisation of the light. Polarisation is the direction in which electromagnetic waves oscillate; they can be linearly or circularly polarised. The signal from the radio burst was more than 20 percent circularly polarised, which suggests that there is a magnetic field in the vicinity of the source.
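In radio astronomy, polarisation is usually quantified with the Stokes parameters I, Q, U and V, where Q and U describe the linear component and V the circular component; “more than 20 percent circularly polarised” means |V|/I exceeded 0.2. A minimal sketch with made-up Stokes values, not the measured ones:

```python
import math

# Stokes parameters: I = total intensity, Q/U = linear, V = circular.
def polarisation_fractions(I, Q, U, V):
    linear = math.hypot(Q, U) / I   # sqrt(Q^2 + U^2) / I
    circular = abs(V) / I
    return linear, circular

# Illustrative values only (not the burst's actual Stokes parameters).
lin, circ = polarisation_fractions(I=1.0, Q=0.05, U=0.03, V=0.21)
print(f"linear: {lin:.0%}, circular: {circ:.0%}")
```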

Sunday, 18 January 2015

The Beagle Has Landed!


On Christmas Day 2003, a kitchen-table-size lander descended to the surface of the Red Planet on a mission to study the Martian surface and search for potential clues to life. The probe never called home, and no one knew what had happened to it. Until now.
Upon landing, Beagle 2 was designed to unfold like a pocket watch into its main components, a petal-like array of solar panels and a robotic arm bristling with scientific instruments. (Artist's impression: Beagle2.com)
HiRISE image of the Beagle 2 landing site.
Zoomed-in view of the landing site shows Beagle 2's petal-like solar panels.
The UK-led Beagle 2 Mars Lander, thought lost on Mars since 2003, has been found partially deployed on the surface of the planet, ending the mystery of what happened to the mission more than a decade ago.
Images taken by the HiRISE camera on NASA’s Mars Reconnaissance Orbiter (MRO) were initially searched by Michael Croon of Trier, Germany, a former member of the European Space Agency’s Mars Express operations team at the European Space Operations Centre. They have yielded clear evidence for the lander and convincing evidence for key entry and descent components on the surface of Mars, within the expected landing area of Isidis Planitia, an impact basin close to the equator.
This finding shows that the Entry, Descent and Landing, or EDL, sequence for Beagle 2 worked and that the lander did successfully touch down on Mars on Christmas Day 2003.
"We've been looking for all the past landers with HiRISE, this is the first time we found one that didn't send a signal after it landed," said Alfred McEwen, principal investigator of the HiRISE mission and professor in the UA's Lunar and Planetary Lab. "If the landing sequence works correctly, the probe sends a radio signal, and you can use that to pinpoint where it is coming from, even if it broadcasts only very briefly. But in the case of Beagle 2, we didn't get anything. All we had to go by was the target landing area."
Since the loss of Beagle 2 following its landing timed for Dec. 25, 2003, a search for it has been underway using images taken by the HiRISE camera on the MRO. HiRISE has been taking occasional pictures of the landing site in addition to pursuing its scientific studies of the surface of Mars. The planned landing area for Beagle 2 at the time of launch was approximately 170 x 100 kilometers (105 x 62 miles) within Isidis Planitia. With a fully deployed Beagle 2 being less than a few meters across and a camera image scale of about 0.3 meters (12 inches) per pixel, detection is a very difficult and painstaking task. The initial detection came from HiRISE images taken on Feb. 28, 2013, and June 29, 2014 (images ESP_030908_1915 and ESP_037145_1915). Croon had submitted a request through the HiWISH program, which allows anyone to submit suggestions for HiRISE imaging targets.
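Some rough numbers make clear why the search was painstaking: at about 0.3 meters per pixel, a lander under a few meters across spans only a handful of pixels, while the nominal landing area contains on the order of a hundred billion of them. A quick illustrative calculation (the 2-meter lander size is an assumption for the sake of the arithmetic):

```python
# How hard is it to spot Beagle 2 in HiRISE images? Rough numbers only.
pixel_scale = 0.3      # meters per pixel, approximate HiRISE image scale
lander_size = 2.0      # meters; "less than a few meters across"
area_km = (170, 100)   # nominal landing area, kilometers

pixels_across = lander_size / pixel_scale
total_pixels = (area_km[0] * 1e3 / pixel_scale) * (area_km[1] * 1e3 / pixel_scale)

print(f"lander spans ~{pixels_across:.0f} pixels across")
print(f"landing area covers ~{total_pixels:.1e} pixels")
# -> a roughly 7-pixel-wide target hidden somewhere in ~10^11 pixels.
```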
"He found something that would be a good candidate at the edge of the frame," McEwen said. "But contrast was low in the first image, and it was difficult to convince yourself something special was there."
The team acquired several more images, which showed a bright spot that seemed to move around.
"That was consistent with Beagle 2," McEwen said. "Because its solar panels were arranged in petals, each one would reflect light differently depending on the angles of the sun and MRO, especially if the lander was resting on sloping ground."
The imaging data may be consistent with only a partial deployment of Beagle 2 after landing. That would explain why no signal or data were received from the lander: full deployment of all the solar panels was needed to expose the RF antenna, which would have transmitted data and received commands from Earth via orbiting Mars spacecraft.
The HiRISE images reveal only two or three of the motorized solar panels, though that may simply reflect which panels are tilted favorably for sun glints. According to the UK Space Agency, if some panels failed to deploy, possible reasons include obstruction by an airbag that remained near the lander because of a gas leak, or a damaged mechanism, structure or electrical connection, perhaps caused by unexpected shock loads during landing. The scenario of local terrain, such as rocks, blocking the deployment is considered unlikely given images of the landing area, which show few rocks, but it cannot be ruled out. Further imaging and analysis are planned to narrow down what happened. Slope and height derived from the HiRISE images show that Beagle 2 landed on comparatively flat terrain with no major hazards.
The discovery benefited from an additional image clean-up step that the HiRISE team has been testing, which removes very subtle electronic noise patterns that have to do with the way the instruments work on the MRO. Sarah Sutton, a HiRISE image processing scientist at LPL who was involved in processing the images that revealed the marooned lander, pointed out that this process is an additional step to make the images "just a little bit clearer."
"We have to be really careful not to modify the science data," said Sutton, who received her bachelor's degree in mathematics from the UA. "We do not make any enhancements or modify the images. All we do is eliminate subtle artifacts from high-frequency electronic noise. The untrained eye would not see it, but I see it.
"When we look at objects that are at the limit of the resolution of HiRISE, like Beagle 2, every bit of image clean-up helps." 
Beagle 2 was part of the ESA Mars Express mission launched in June 2003. Mars Express is still orbiting Mars and returning scientific data on the planet. Beagle 2 was successfully ejected from the Mars Express spacecraft on Dec. 19, 2003, 5.75 days before Mars Express’s engine firing and orbital injection at Mars.
Beagle 2 inspired many in the general public and led indirectly to the UK becoming a leading member of ESA’s Aurora program and the UK-led ESA ExoMars mission. The ExoMars rover will explore Mars in 2019, drilling up to 2 meters (about 6.5 feet) beneath the soil to explore the geochemistry and mineralogy of Mars and search for potential evidence of past life.
Animated overlay of HiRISE images showing the Beagle 2 landing site at different times. Lander components show up differently depending on the angle of sunlight.


Saturday, 17 January 2015

Three Nearly Earth-Size Planets Found Orbiting Nearby Star



This artistic impression shows NASA's planet-hunting Kepler spacecraft operating in a new mission profile called K2. By analyzing data captured by the Kepler spacecraft, a UA-led team of researchers has discovered three new Earth-size planets orbiting a star only 150 light-years from our sun. (Credit: NASA Ames/JPL-Caltech/T Pyle)


Researchers led by UA scientist Ian Crossfield have found a star with three planets, one of which may have temperatures moderate enough for liquid water — and maybe even life — to exist.
UA astronomer Ian Crossfield: "Nature is full of surprises."

NASA’s Kepler Space Telescope, despite being hobbled by the loss of critical guidance systems, has discovered a star with three planets only slightly larger than Earth. The outermost planet orbits in the "Goldilocks" zone, a region where surface temperatures could be moderate enough for liquid water — and perhaps life — to exist.
The star, EPIC 201367065, is a cool red M-dwarf about half the size and mass of our own sun. At a distance of 150 light-years, the star ranks among the top 10 nearest stars known to have transiting planets. The star’s proximity means it is bright enough for astronomers to study the planets’ atmospheres, to determine whether they are like Earth’s atmosphere and possibly conducive to life.
"A thin atmosphere made of nitrogen and oxygen has allowed life to thrive on Earth. But nature is full of surprises. Many exoplanets discovered by the Kepler mission are enveloped by thick, hydrogen-rich atmospheres that are probably incompatible with life as we know it," said Ian Crossfield, the University of Arizona astronomer who led the study.
A paper describing the find by astronomers at the UA, the University of California, Berkeley, the University of Hawaii at Manoa, and other institutions has been submitted to The Astrophysical Journal and is freely available on the arXiv website. NASA and the National Science Foundation funded the research.
Co-authors of the paper include Travis Barman, a UA associate professor of planetary sciences, and Joshua Schlieder of the NASA Ames Research Center and colleagues from Germany, the United Kingdom and the U.S.
The three planets are 2.1, 1.7 and 1.5 times the size of Earth. The smallest and outermost planet, at 1.5 Earth radii, orbits far enough from its host star that it receives levels of light similar to those received by Earth from the sun, said UC Berkeley graduate student Erik Petigura. He discovered the planets Jan. 6 while conducting a computer analysis of the Kepler data NASA has made available to astronomers. In order from closest to farthest from their star, the three planets receive 10.5, 3.2 and 1.4 times the light intensity of Earth, Petigura calculated.
"Most planets we have found to date are scorched. This system is the closest star with lukewarm transiting planets," Petigura said. "There is a very real possibility that the outermost planet is rocky like Earth, which means this planet could have the right temperature to support liquid water oceans."
University of Hawaii astronomer Andrew Howard noted that extrasolar planets are discovered by the hundreds these days, although many astronomers are left wondering if any of the newfound worlds are really like Earth. The newly discovered planetary system will help resolve this question, he said.
"We’ve learned in the past year that planets the size and temperature of Earth are common in our Milky Way galaxy," Howard said. "We also discovered some Earth-size planets that appear to be made of the same materials as our Earth, mostly rock and iron."
Kepler’s K2 Mission
After Petigura found the planets in the Kepler light curves, the team quickly employed telescopes in Chile, Hawaii and California to characterize the star’s mass, radius, temperature and age. Two of the telescopes involved — the Automated Planet Finder on Mount Hamilton near San Jose, California, and the Keck Telescope on Mauna Kea, Hawaii — are University of California facilities.
The next step will be observations with other telescopes, including the Hubble Space Telescope, to take the spectroscopic fingerprint of the molecules in the planetary atmospheres. If these warm, nearly Earth-size planets have puffy, hydrogen-rich atmospheres, Hubble will see the telltale signal, Petigura said.
The discovery is all the more remarkable, he said, because the Kepler telescope lost two reaction wheels that kept it pointing at a fixed spot in space.
Kepler was reborn in 2014 as "K2" with a clever strategy of pointing the telescope in the plane of Earth’s orbit, the ecliptic, to stabilize the spacecraft. Kepler is now back to mining the cosmos for planets by searching for eclipses or "transits," as planets pass in front of their host stars and periodically block some of the starlight.
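The idea behind a transit search is easy to demonstrate: a planet on a fixed orbital period dims its star by the same tiny fraction once per orbit, so folding the light curve at the correct trial period stacks the dips on top of one another until they stand out from the noise. A toy phase-folding sketch, illustrative only and not the team’s detection pipeline:

```python
import numpy as np

def fold(time, flux, period, n_bins=100):
    """Fold a light curve at a trial period and bin it in phase.

    A transit shows up as a dip in the binned, folded curve when the
    trial period matches the planet's orbital period.
    """
    phase = (time % period) / period          # 0..1 orbital phase
    bins = np.linspace(0, 1, n_bins + 1)
    idx = np.digitize(phase, bins) - 1
    return np.array([flux[idx == i].mean() for i in range(n_bins)])

# Toy light curve: a flat star with a 0.1% dip every 10 days.
rng = np.random.default_rng(0)
t = np.arange(0, 80, 0.02)                    # 80 days of observations
f = 1 + 1e-4 * rng.standard_normal(t.size)    # photometric noise
f[(t % 10.0) < 0.15] -= 1e-3                  # transits: 0.1% deep

print(fold(t, f, period=10.0).min())          # dip visible near phase 0
```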
"This discovery proves that K2, despite being somewhat compromised, can still find exciting and scientifically compelling planets," Petigura said. "This ingenious new use of Kepler is a testament to the ingenuity of the scientists and engineers at NASA. This discovery shows that Kepler can still do great science."
Kepler sees only a small fraction of the planetary systems in its gaze: only those with orbital planes aligned edge-on to our view from Earth. Planets with large orbital tilts are missed by Kepler. A census of Kepler planets the team conducted in 2013 corrected statistically for these random orbital orientations and concluded that one in five sunlike stars in the Milky Way galaxy has Earth-size planets in the habitable zone. Accounting for other types of stars as well, there may be 40 billion such planets galaxywide.
The original Kepler mission found thousands of small planets, but most of them were too faint and far away for astronomers to assess their density and composition and thus determine whether they were high-density, rocky planets like Earth or puffy, low-density planets like Uranus and Neptune. Because the star EPIC-201 is nearby, these mass measurements are possible. The host star, an M-dwarf, is intrinsically less bright than the sun, which means that its planets can reside close to the host star and still enjoy lukewarm temperatures.
According to Howard, the system most like that of EPIC-201 is Kepler-138, an M-dwarf star with three planets of similar size, though none are in the habitable zone. 

CONTACTS

Researcher contacts:
Erik Petigura, epetigura@berkeley.edu, 650-804-1379 (cell)
Ian Crossfield, ianc@lpl.arizona.edu, 520-626-2083
Andrew Howard, howard@ifa.hawaii.edu, 808-956-8637 (office), 808-208-1224 (cell)

Media contact:
Doug Carroll
520-621-9017

Wednesday, 14 January 2015

Robots Learn to Use Kitchen Tools by Watching YouTube Videos


Autonomous robots can learn and perform complex actions via observation

Imagine having a personal robot prepare your breakfast every morning. Now, imagine that this robot didn’t need any help figuring out how to make the perfect omelet, because it learned all the necessary steps by watching videos on YouTube. It might sound like science fiction, but a team at the University of Maryland has just made a significant breakthrough that will bring this scenario one step closer to reality.
Researchers at the University of Maryland Institute for Advanced Computer Studies (UMIACS) partnered with a scientist at the National Information Communications Technology Research Centre of Excellence (NICTA) in Australia to develop robotic systems that are able to teach themselves. Specifically, these robots are able to learn the intricate grasping and manipulation movements required for cooking by watching online cooking videos. The key breakthrough is that the robots can “think” for themselves, determining the best combination of observed motions that will allow them to efficiently accomplish a given task.
The work will be presented on Jan. 29, 2015, at the Association for the Advancement of Artificial Intelligence Conference in Austin, Texas. The researchers achieved this milestone by combining approaches from three distinct research areas: artificial intelligence, or the design of computers that can make their own decisions; computer vision, or the engineering of systems that can accurately identify shapes and movements; and natural language processing, or the development of robust systems that can understand spoken commands. Although the underlying work is complex, the team wanted the results to reflect something practical and relatable to people’s daily lives.
“We chose cooking videos because everyone has done it and understands it,” said Yiannis Aloimonos, UMD professor of computer science and director of the Computer Vision Lab, one of 16 labs and centers in UMIACS. “But cooking is complex in terms of manipulation, the steps involved and the tools you use. If you want to cut a cucumber, for example, you need to grab the knife, move it into place, make the cut and observe the results to make sure you did it properly.”
One key challenge was devising a way for the robots to parse individual steps appropriately, while gathering information from videos that varied in quality and consistency. The robots needed to be able to recognize each distinct step, assign it to a “rule” that dictates a certain behavior, and then string together these behaviors in the proper order.
“We are trying to create a technology so that robots eventually can interact with humans,” said Cornelia Fermüller, an associate research scientist at UMIACS. “So they need to understand what humans are doing. For that, we need tools so that the robots can pick up a human’s actions and track them in real time. We are interested in understanding all of these components. How is an action performed by humans? How is it perceived by humans? What are the cognitive processes behind it?”
Aloimonos and Fermüller compare these individual actions to words in a sentence. Once a robot has learned a “vocabulary” of actions, it can then string them together in a way that achieves a given goal. In fact, this is precisely what distinguishes their work from previous efforts.
“Others have tried to copy the movements. Instead, we try to copy the goals. This is the breakthrough,” Aloimonos explained. This approach allows the robots to decide for themselves how best to combine various actions, rather than reproducing a predetermined series of actions.
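The sentence analogy maps naturally onto code: each learned manipulation is a “word,” and a goal is satisfied by choosing and ordering words from the vocabulary. Here is a toy sketch of that idea, with hypothetical names throughout; the team’s actual system derives such plans from parsed video, which this does not attempt:

```python
# Toy "action grammar": a goal is satisfied by stringing together
# primitives from a learned vocabulary. All names are illustrative only.
VOCABULARY = {
    "grasp":   lambda tool, obj: f"grasp {tool}",
    "move_to": lambda tool, obj: f"move {tool} to {obj}",
    "cut":     lambda tool, obj: f"cut {obj} with {tool}",
    "observe": lambda tool, obj: f"check result on {obj}",
}

# A "rule" maps a recognized goal to an ordered choice of primitives,
# mirroring the cucumber example quoted above.
RULES = {"slice": ["grasp", "move_to", "cut", "observe"]}

def plan(goal, tool, obj):
    """Assemble a concrete action sequence for a goal."""
    return [VOCABULARY[step](tool, obj) for step in RULES[goal]]

print(plan("slice", tool="knife", obj="cucumber"))
# ['grasp knife', 'move knife to cucumber',
#  'cut cucumber with knife', 'check result on cucumber']
```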
The work also relies on a specialized software architecture known as deep-learning neural networks. While this approach is not new, it requires lots of processing power to work well, and it took a while for computing technology to catch up. Similar versions of neural networks are responsible for the voice recognition capabilities in smartphones and the facial recognition software used by Facebook and other websites.
While robots have been used to carry out complicated tasks for decades—think automobile assembly lines—these must be carefully programmed and calibrated by human technicians. Self-learning robots could gather the necessary information by watching others, which is the same way humans learn. Aloimonos and Fermüller envision a future in which robots tend to the mundane chores of daily life while humans are freed to pursue more stimulating tasks.
“By having flexible robots, we’re contributing to the next phase of automation. This will be the next industrial revolution,” said Aloimonos. “We will have smart manufacturing environments and completely automated warehouses. It would be great to use autonomous robots for dangerous work—to defuse bombs and clean up nuclear disasters such as the Fukushima event. We have demonstrated that it is possible for humanoid robots to do our human jobs.”
In addition to Aloimonos and Fermüller, study authors included Yezhou Yang, a UMD computer science doctoral student, and Yi Li, a former doctoral student of Aloimonos and Fermüller from NICTA.
UMIACS video "Autonomy in Robotics" featuring researchers Yiannis Aloimonos and Cornelia Fermüller  
The study, “Robot Learning Manipulation Action Plans by ‘Watching’ Unconstrained Videos from the World Wide Web,” by Yezhou Yang, Yi Li, Cornelia Fermüller and Yiannis Aloimonos, will be presented on Jan. 29, 2015, at the Association for the Advancement of Artificial Intelligence Conference in Austin, Texas.