Friday, 26 September 2014

Planetary Scientists Debate: Is Pluto a Planet?

Pluto (left) and Charon (right) dominate this view of the outer solar system. Charon is about half the size of Pluto. Pluto also hosts four tiny moons – Nix, Hydra, Kerberos, and Styx – two of which are seen as small crescents at top left and right. In the distance, a faint Sun illuminates dust within the Kuiper Belt. David A. Aguilar (CfA)
The Harvard-Smithsonian Center for Astrophysics recently hosted a debate on the planetary status of Pluto.

Cambridge, Massachusetts – What is a planet? For generations of kids, the answer was easy: a big ball of rock or gas that orbited our Sun, and there were nine of them in our solar system. But then astronomers started finding more Pluto-sized objects orbiting beyond Neptune. Then they found Jupiter-sized objects circling distant stars, first by the handful and then by the hundreds. Suddenly the answer wasn’t so easy. Were all these newfound things planets?

Since the International Astronomical Union (IAU) is in charge of naming these newly discovered worlds, they tackled the question at their 2006 meeting. They tried to come up with a definition of a planet that everyone could agree on. But the astronomers couldn’t agree. In the end, they voted and picked a definition that they thought would work.

The current, official definition says that a planet is a celestial body that:

is in orbit around the Sun,
is round or nearly round, and
has “cleared the neighborhood” around its orbit.
But this definition baffled the public and classrooms around the country. For one thing, it only applied to planets in our solar system. What about all those exoplanets orbiting other stars? Are they planets? And Pluto was booted from the planet club and called a dwarf planet. Is a dwarf planet a small planet? Not according to the IAU. Even though a dwarf fruit tree is still a small fruit tree, and a dwarf hamster is still a small hamster.

Eight years later, the Harvard-Smithsonian Center for Astrophysics decided to revisit the question of “what is a planet?” On September 18th, we hosted a debate among three leading experts in planetary science, each of whom presented their case as to what a planet is or isn’t. The goal: to find a definition that the eager public audience could agree on!

Science historian Dr. Owen Gingerich, who chaired the IAU planet definition committee, presented the historical viewpoint. Dr. Gareth Williams, associate director of the Minor Planet Center, presented the IAU’s viewpoint. And Dr. Dimitar Sasselov, director of the Harvard Origins of Life Initiative, presented the exoplanet scientist’s viewpoint.

Gingerich argued that “a planet is a culturally defined word that changes over time,” and that Pluto is a planet. Williams defended the IAU definition, which declares that Pluto is not a planet. And Sasselov defined a planet as “the smallest spherical lump of matter that formed around stars or stellar remnants,” which means Pluto is a planet.

After these experts made their best case, the audience got to vote on what a planet is or isn’t and whether Pluto is in or out. The results are in, with no hanging chads in sight.

According to the audience, Sasselov’s definition won the day, and Pluto IS a planet.

Headquartered in Cambridge, Massachusetts, the Harvard-Smithsonian Center for Astrophysics (CfA) is a joint collaboration between the Smithsonian Astrophysical Observatory and the Harvard College Observatory. CfA scientists, organized into six research divisions, study the origin, evolution and ultimate fate of the universe.

Carbon Nanotube Patches Improve Heart Function

Using carbon nanotubes, researchers at Rice University and Texas Children’s Hospital have created heart-defect patches that improve electrical signaling between immature heart cells.
Living heart cells called ventricular myocytes cultured in nanotube-infused hydrogel beat in an experiment by Rice University and Texas Children’s scientists, who are creating patches to repair pediatric heart defects. Courtesy of the Jacot Lab/Rice University
Carbon nanotubes serve as bridges that allow electrical signals to pass unhindered through new pediatric heart-defect patches invented at Rice University and Texas Children’s Hospital.
A team led by bioengineer Jeffrey Jacot and chemical engineer and chemist Matteo Pasquali created the patches infused with conductive single-walled carbon nanotubes. The patches are made of a sponge-like bioscaffold that contains microscopic pores and mimics the body’s extracellular matrix.
The nanotubes overcome a limitation of current patches in which pore walls hinder the transfer of electrical signals between cardiomyocytes, the heart muscle’s beating cells, which take up residence in the patch and eventually replace it with new muscle.
The work appears this month in the American Chemical Society journal ACS Nano. The researchers said their invention could serve as a full-thickness patch to repair defects due to Tetralogy of Fallot, atrial and ventricular septal defects and other defects without the risk of inducing abnormal cardiac rhythms.
Carbon Nanotubes Improve Electrical Signaling Between Immature Heart Cells
Three images reveal the details of heart-defect patches created at Rice University and Texas Children’s Hospital. At top, three otherwise identical patches darken with greater concentrations of carbon nanotubes, which improve electrical signaling between immature heart cells. At center, a scanning electron microscope image shows a patch’s bioscaffold, with pores big enough for heart cells to invade. At bottom, a near-infrared microscopy image shows the presence of individually dispersed single-walled nanotubes. (Credit: Jacot Lab/Rice University)
The original patches created by Jacot’s lab consist primarily of hydrogel and chitosan, a widely used material made from the shells of shrimp and other crustaceans. The patch is attached to a polymer backbone that can hold a stitch and keep it in place to cover a hole in the heart. The pores allow natural cells to invade the patch, which degrades as the cells form networks of their own. The patch, including the backbone, degrades in weeks or months as it is replaced by natural tissue.
Researchers at Rice and elsewhere have found that once cells take their place in the patches, they have difficulty synchronizing with the rest of the beating heart because the scaffold mutes electrical signals that pass from cell to cell. That temporary loss of signal transduction results in arrhythmias.
Nanotubes can fix that, and Jacot, who has a joint appointment at Rice and Texas Children’s, took advantage of the surrounding collaborative research environment.
“This stemmed from talking with Dr. Pasquali’s lab as well as interventional cardiologists in the Texas Medical Center,” Jacot said. “We’ve been looking for a way to get better cell-to-cell communications and were concentrating on the speed of electrical conduction through the patch. We thought nanotubes could be easily integrated.”
Nanotubes enhance the electrical coupling between cells that invade the patch, helping them keep up with the heart’s steady beat. “When cells first populate a patch, their connections are immature compared with native tissue,” Jacot said. The insulating scaffold can delay the cell-to-cell signal further, but the nanotubes forge a path around the obstacles.
Jacot said the relatively low concentration of nanotubes — 67 parts per million in the patches that tested best — is key. Earlier attempts to use nanotubes in heart patches employed much higher quantities and different methods of dispersing them.
Jacot’s lab found that a component they were already using in their patches – chitosan – keeps the nanotubes spread out. “Chitosan is amphiphilic, meaning it has hydrophobic and hydrophilic portions, so it can associate with nanotubes (which are hydrophobic) and keep them from clumping. That’s what allows us to use much lower concentrations than others have tried.”
Because the toxicity of carbon nanotubes in biological applications remains an open question, Pasquali said, the fewer one uses, the better. “We want to stay at the percolation threshold, and get to it with the fewest nanotubes possible,” he said. “We can do this if we control dispersion well and use high-quality nanotubes.”
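Pasquali’s notion of a percolation threshold – the conductor density at which a connected path first spans the material – can be illustrated with a toy Monte Carlo sketch. The code below is a 2D site-percolation model for intuition only, not the 3D nanotube network in the patches: sites are filled at random with probability p, and we ask how often a filled cluster connects top to bottom.

```python
import random
from collections import deque

def spans(grid, n):
    """BFS from occupied top-row sites; True if any bottom-row site is reached."""
    seen = set((0, c) for c in range(n) if grid[0][c])
    q = deque(seen)
    while q:
        r, c = q.popleft()
        if r == n - 1:
            return True  # a connected path crosses the whole sample
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                q.append((nr, nc))
    return False

def spanning_probability(p, n=30, trials=40, seed=0):
    """Fraction of random n-by-n grids (sites filled with probability p)
    that contain a top-to-bottom connected cluster."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid, n)
    return hits / trials
```

Sweeping p shows the sharp onset: well below the square-lattice threshold (about 0.59) almost no grid conducts, and well above it almost every grid does – which is why a small, well-dispersed conductive loading near the threshold can suffice.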
The patches start as a liquid. When nanotubes are added, the mixture is sonicated to disperse the tubes, which would otherwise clump due to van der Waals attraction. Clumping may have been an issue for experiments that used higher nanotube concentrations, Pasquali said.
The material is spun in a centrifuge to eliminate stray clumps and formed into thin, fingernail-sized discs with a biodegradable polycaprolactone backbone that allows the patch to be sutured into place. Freeze-drying sets the size of the discs’ pores, which are large enough for natural heart cells to infiltrate and for nutrients and waste to pass through.
As a side benefit, nanotubes also make the patches stronger and lower their tendency to swell while providing a handle to precisely tune their rate of degradation, giving hearts enough time to replace them with natural tissue, Jacot said.
“If there’s a hole in the heart, a patch has to take the full mechanical stress,” he said. “It can’t degrade too fast, but it also can’t degrade too slow, because it would end up becoming scar tissue. We want to avoid that.”
Pasquali noted that Rice’s nanotechnology expertise and Texas Medical Center membership offers great synergy. “This is a good example of how it’s much better for an application person like Dr. Jacot to work with experts who know how to handle nanotubes, rather than trying to go solo, as many do,” he said. “We end up with a much better control of the material. The converse is also true, of course, and working with leaders in the biomedical field can really accelerate the path to adoption for these new materials.”
Seokwon Pok, a Rice research scientist in Jacot’s lab, is lead author of the paper. Co-authors are research scientist Flavia Vitale, graduate student Omar Benavides and former postdoctoral researcher Shannon Eichmann, all of Rice. Pasquali is chair of Rice’s Department of Chemistry and a professor of chemical and biomolecular engineering, of materials science and nanoengineering and of chemistry. Jacot is an assistant professor of bioengineering at Rice, director of the Pediatric Cardiac Bioengineering Laboratory at the Congenital Heart Surgery Service at Texas Children’s and an adjunct assistant professor at Baylor College of Medicine.
The National Institutes of Health, the Welch Foundation and Texas Children’s Hospital supported the research.
Publication: Seokwon Pok, et al., “Biocompatible Carbon Nanotube – Chitosan Cardiac Scaffold Matching the Electrical Conductivity of the Heart,” ACS Nano; DOI: 10.1021/nn503693h

New Two-Step Strategy for Weakening Cancer

New Study Reveals Two-Step Strategy for Weakening Cancer
A cancer cell under attack by lymphocytes. Credit: thinkstockphotos.com/Rice University
Researchers from Rice University and the University of Texas MD Anderson Cancer Center reveal a new two-step strategy for weakening cancer.
Research by Rice University scientists who are fighting a cyberwar against cancer finds that the immune system may be a clinician’s most powerful ally.
“Recent research has found that cancer is already adept at using cyberwarfare against the immune system, and we studied the interplay between cancer and the immune system to see how we might turn the tables on cancer,” said Rice University’s Eshel Ben-Jacob, co-author of a new study this week in the Early Edition of the Proceedings of the National Academy of Sciences.
Ben-Jacob and colleagues at Rice’s Center for Theoretical Biological Physics (CTBP) and the University of Texas MD Anderson Cancer Center developed a computer program that modeled a specific channel of cell-to-cell communication involving exosomes. Exosomes are tiny packets of proteins, messenger RNA and other information-coding segments that both cancer and immune cells make and use to send information to other cells.
“Basically, exosomes are small cassettes of information that are packed and sealed inside small nanoscale vesicles,” Ben-Jacob said. “These nanocarriers are addressed with special markers so they can be delivered to specific types of cells, and they contain a good deal of specific information in the form of signaling proteins, snippets of RNA, microRNAs and other data. Once taken by the target cells, these nanocarriers can order cells to change what they are doing and in some cases even change their identity.”
Ben-Jacob said recent research showed that dendritic cells use exosomal communications to carry out their specialized role as moderators of and mediators between the innate and adaptive immune systems. The innate and adaptive immune systems use different strategies to protect the body from disease. The innate system guards against all threats at all times and is the first to act even against unrecognized invaders. In contrast, the adaptive immune system acts more efficiently, and in a specific way, against recognized, established threats. Dendritic cells, which are part of both the innate and adaptive systems, share information and help “coach” the adaptive system’s hunter-killer cells about which cells to target and how best to destroy them.
“We were inspired to do this research by two papers — one that showed how the dendritic cells use the exosome to fight cancer and another that showed how cancer cells co-opt the exosomal system both to prevent the bone marrow from making dendritic cells and disable dendritic cells’ coaching abilities,” Ben-Jacob said. “This is cyberwarfare, pure and simple. Cancer uses the immune systems’ own communications network to attack not the soldiers but the generals that are coordinating the body’s defense.”
To examine the role of exosome-mediated cell-to-cell communication in the battle between cancer and the immune system, Ben-Jacob and postdoctoral fellow Mingyang Lu, the study’s first author, worked with CTBP colleagues to create a computer model that captured the special aspects of the exosomal exchange between cancer cells, dendritic cells and the other cells in the immune system.
“You should imagine there is a tug-of-war between the cancer and the immune system,” said study co-author and CTBP co-director José Onuchic, Rice’s Harry C. and Olga K. Wiess Professor of Physics and Astronomy. “Sometimes one side wins and sometimes the other. The question is whether we can understand this battle enough to use radiotherapy or chemotherapy in such a way as to change the balance of the tug-of-war in favor of the immune system.”
Based on their findings, Ben-Jacob and Onuchic say the answer is likely yes. In particular, the CTBP model found that the presence of exosomes creates a situation where three possible cancer states can exist, and one of the states — an intermediate state in which cancer is neither strong nor weak but the immune system is on high alert — could be the key to a new therapeutic approach with reduced side effects.
“When exosomes are not included, there are only two possible states — one where cancer is strong and the immune system is weak and the other where cancer is weak and the immune system is strong,” Ben-Jacob said.
Although the state where cancer is weakened is preferable, there is a growing body of clinical evidence that suggests it is very difficult to force cancer directly from the strong to the weak position, in part because radiation and chemotherapeutic treatments also weaken the immune system as they weaken cancer.
“It is fairly common that a cancer recedes following treatment only to return stronger than ever in just a few months or weeks,” said study co-author Sam Hanash, professor of clinical cancer prevention and director of the Red and Charline McCombs Institute for the Early Detection and Treatment of Cancer at MD Anderson. “The new model captures this dynamic and suggests alternative scenarios whereby the immune system does its job fighting the cancer.”
Ben-Jacob said the team showed that it was possible to force cancer from the strong to the moderate state by alternating cycles of radiation or chemotherapy with immune-boosting treatments.
“Our model shows that just a few of these treatment-boosting cycles can alter the cancer-immune balance to help the immune system bring the cancer to the moderate state,” Ben-Jacob said. “Once in the intermediate state, cancer can be brought further down to the weak state by a few short pulses of immune boosting.
“It is much more effective to use a two-step process and drive cancer from the strong to the intermediate state and then from the intermediate to the weak state,” he said. “Without the exosome — the cancer-immune cyberwar nanocarriers — and the third state, this two-step approach wouldn’t be possible.”
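The two-step logic can be caricatured with a toy one-variable model. This is an illustration of multistability, not the CTBP model itself: a state x has stable points at 1 (cancer strong), 0 (intermediate), and -1 (cancer weak), and each short treatment pulse is strong enough to push the system over one barrier but not two.

```python
def f(x):
    # Toy force field with stable states at x = 1 (cancer strong),
    # x = 0 (intermediate), and x = -1 (cancer weak); the points
    # x = +/-0.5 are the unstable barriers between them.
    return -x * (x**2 - 1) * (x**2 - 0.25)

def simulate(pulses, x0=1.0, dt=0.01, pulse_len=280, gap=700, strength=0.3):
    """Euler-integrate dx/dt = f(x) - u(t), where u(t) is a square
    'treatment' pulse of the given strength, repeated `pulses` times
    with a recovery gap between pulses, then a long relaxation tail."""
    x = x0
    windows = [(k * (pulse_len + gap), k * (pulse_len + gap) + pulse_len)
               for k in range(pulses)]
    total = pulses * (pulse_len + gap) + 2000
    for step in range(total):
        u = strength if any(a <= step < b for a, b in windows) else 0.0
        x += dt * (f(x) - u)
    return x
```

One pulse leaves the system near the intermediate state, and a second pulse is needed to reach the weak state; without the middle stable state, a single sub-threshold pulse would simply relax back to where it started.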
Ben-Jacob is a senior investigator at CTBP, adjunct professor of biochemistry and cell biology at Rice and the Maguy-Glass Chair in Physics of Complex Systems and professor of physics and astronomy at Tel Aviv University.
In addition to Ben-Jacob, Onuchic and Lu, study co-authors include Rice graduate student Bin Huang and Sam Hanash, director of the Red and Charline McCombs Institute for the Early Detection and Treatment of Cancer at the University of Texas MD Anderson Cancer Center. The research was supported by the Cancer Prevention and Research Institute of Texas, the National Science Foundation and the Tauber Family Funds.
Publication: Mingyang Lu, et al., “Modeling putative therapeutic implications of exosome exchange between tumor and immune cells,” PNAS, 2014; doi: 10.1073/pnas.1416745111

ALMA Sees Signs of Windy Weather Around an Infant Solar System

ALMA Data Suggest Evidence of a Possible Wind in Infant Solar System
Artist’s impression of the T Tauri star AS 205 N and its companion. New ALMA data suggest that the disk around the star may be expelling gas via a wind. Credit: P. Marenfeld (NOAO/AURA/NSF)
Using ALMA data, astronomers have observed what may be the first-ever signs of windy weather around a T Tauri star.
Astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) have observed what may be the first-ever signs of windy weather around a T Tauri star, an infant analog of our own Sun. This may help explain why some T Tauri stars have disks that glow weirdly in infrared light while others shine in a more expected fashion.
T Tauri stars are the infant versions of stars like our Sun. They are relatively normal, medium-size stars that are surrounded by the raw materials to build both rocky and gaseous planets. Though nearly invisible in optical light, these disks shine in both infrared and millimeter-wavelength light.
“The material in the disk of a T Tauri star usually, but not always, emits infrared radiation with a predictable energy distribution,” said Colette Salyk, an astronomer with the National Optical Astronomy Observatory (NOAO) in Tucson, Arizona, and lead author on a paper published in the Astrophysical Journal. “Some T Tauri stars, however, like to act up by emitting infrared radiation in unexpected ways.”
To account for the different infrared signature around such similar stars, astronomers propose that winds may be emanating from within some T Tauri stars’ protoplanetary disks. These winds could have important implications for planet formation, potentially robbing the disk of some of the gas required for the formation of giant Jupiter-like planets, or stirring up the disk and causing the building blocks of planets to change location entirely. These winds have been predicted by astronomers, but have never been clearly detected.
Using ALMA, Salyk and her colleagues looked for evidence of a possible wind in AS 205 N – a T Tauri star located 407 light-years away at the edge of a star-forming region in the constellation Ophiuchus, the Snake Bearer. This star seems to exhibit the strange infrared signature that has intrigued astronomers.
With ALMA’s exceptional resolution and sensitivity, the researchers were able to study the distribution of carbon monoxide around the star. Carbon monoxide is an excellent tracer for the molecular gas that makes up stars and their planet-forming disks. These studies confirmed that there was indeed gas leaving the disk’s surface, as would be expected if a wind were present. The properties of the wind, however, did not exactly match expectations.
This difference between observations and expectations could be due to the fact that AS 205 N is actually part of a multiple star system – with a companion, dubbed AS 205 S, that is itself a binary star.
This multiple star arrangement may suggest that the gas is leaving the disk’s surface because it’s being pulled away by the binary companion star rather than ejected by a wind.
“We are hoping these new ALMA observations help us better understand winds, but they have also left us with a new mystery,” said Salyk. “Are we seeing winds, or interactions with the companion star?”
The study’s authors are not pessimistic, however. They plan to continue their research with more ALMA observations, targeting other unusual T Tauri stars, with and without companions, to see whether they show these same features.
T Tauri stars are named after their prototype, discovered in 1852 – the third variable star cataloged in the constellation Taurus, whose brightness varies erratically. At one point, some 4.5 billion years ago, our Sun was a T Tauri star.
Other authors include Klaus Pontoppidan, Space Telescope Science Institute; Stuartt Corder, Joint ALMA Observatory; Diego Muñoz, Center for Space Research, Department of Astronomy, Cornell University; and Ke Zhang and Geoffrey Blake, Division of Geological & Planetary Sciences, California Institute of Technology.
Publication: Colette Salyk, et al., “ALMA Observations of the T Tauri Binary System AS 205: Evidence for Molecular Winds and/or Binary Interactions,” 2014, ApJ, 792, 68; doi:10.1088/0004-637X/792/1/68

MIT Engineers Develop New Technologies to Battle Superbugs

New Technologies Could Enable Novel Strategies for Combating Drug-Resistant Bacteria
A scanning electron micrograph depicts numerous clumps of methicillin-resistant Staphylococcus aureus bacteria, commonly referred to by the acronym MRSA. Image: Janice Haney Carr/Centers for Disease Control and Prevention
Using a gene-editing system that can disable any target gene, MIT engineers have shown that they can selectively kill bacteria carrying harmful genes that confer antibiotic resistance or cause disease.
In recent years, new strains of bacteria have emerged that resist even the most powerful antibiotics. Each year, these superbugs, including drug-resistant forms of tuberculosis and staphylococcus, infect more than 2 million people nationwide, and kill at least 23,000. Despite the urgent need for new treatments, scientists have discovered very few new classes of antibiotics in the past decade.
MIT engineers have now turned a powerful new weapon on these superbugs. Using a gene-editing system that can disable any target gene, they have shown that they can selectively kill bacteria carrying harmful genes that confer antibiotic resistance or cause disease.
Led by Timothy Lu, an associate professor of biological engineering and electrical engineering and computer science, the researchers described their findings in the September 21 issue of Nature Biotechnology. Last month, Lu’s lab reported a different approach to combating resistant bacteria by identifying combinations of genes that work together to make bacteria more susceptible to antibiotics.
Lu hopes that both technologies will lead to new drugs to help fight the growing crisis posed by drug-resistant bacteria.
“This is a pretty crucial moment when there are fewer and fewer new antibiotics available, but more and more antibiotic resistance evolving,” he says. “We’ve been interested in finding new ways to combat antibiotic resistance, and these papers offer two different strategies for doing that.”
Cutting out resistance
Most antibiotics work by interfering with crucial functions such as cell division or protein synthesis. However, some bacteria, including the formidable MRSA (methicillin-resistant Staphylococcus aureus) and CRE (carbapenem-resistant Enterobacteriaceae) organisms, have evolved to become virtually untreatable with existing drugs.
In the new Nature Biotechnology study, graduate students Robert Citorik and Mark Mimee worked with Lu to target specific genes that allow bacteria to survive antibiotic treatment. The CRISPR genome-editing system presented the perfect strategy to go after those genes.
CRISPR, originally discovered by biologists studying the bacterial immune system, involves a set of proteins that bacteria use to defend themselves against bacteriophages (viruses that infect bacteria). One of these proteins, a DNA-cutting enzyme called Cas9, binds to short RNA guide strands that target specific sequences, telling Cas9 where to make its cuts.
Lu and colleagues decided to turn bacteria’s own weapons against them. They designed their RNA guide strands to target genes for antibiotic resistance, including the enzyme NDM-1, which allows bacteria to resist a broad range of beta-lactam antibiotics, including carbapenems. The genes encoding NDM-1 and other antibiotic resistance factors are usually carried on plasmids — circular strands of DNA separate from the bacterial genome — making it easier for them to spread through populations.
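The targeting step can be sketched in a few lines: Cas9 from the commonly used SpCas9 system cuts where a roughly 20-nucleotide guide sequence matches the DNA and is immediately followed by an “NGG” PAM motif. The sketch below scans a sequence for such sites on the forward strand only, using made-up sequences; the real enzyme also acts on the reverse strand and tolerates some mismatches.

```python
def find_cas9_sites(dna, guide):
    """Return 0-based positions where `guide` (a 20-nt protospacer)
    matches `dna` and is immediately followed by an NGG PAM."""
    dna, guide = dna.upper(), guide.upper()
    g = len(guide)
    sites = []
    for i in range(len(dna) - g - 2):
        # PAM = dna[i+g : i+g+3]; the first base (N) can be anything,
        # so only the trailing "GG" is checked.
        if dna[i:i + g] == guide and dna[i + g + 1:i + g + 3] == "GG":
            sites.append(i)
    return sites
```

Calling `find_cas9_sites(plasmid_seq, guide)` returns the 0-based positions of candidate cut sites, which is the sense in which a guide strand “tells Cas9 where to make its cuts.”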
When the researchers turned the CRISPR system against NDM-1, they were able to specifically kill more than 99 percent of NDM-1-carrying bacteria, while antibiotics to which the bacteria were resistant did not induce any significant killing. They also successfully targeted a gene encoding SHV-18, another antibiotic resistance factor; a mutation in the bacterial chromosome that provides resistance to quinolone antibiotics; and a virulence factor in enterohemorrhagic E. coli.
In addition, the researchers showed that the CRISPR system could be used to selectively remove specific bacteria from diverse bacterial communities based on their genetic signatures, thus opening up the potential for “microbiome editing” beyond antimicrobial applications.
To get the CRISPR components into bacteria, the researchers created two delivery vehicles — engineered bacteria that carry CRISPR genes on plasmids, and bacteriophage particles that bind to the bacteria and inject the genes. Both of these carriers successfully spread the CRISPR genes through the population of drug-resistant bacteria. Delivery of the CRISPR system into waxworm larvae infected with a harmful form of E. coli resulted in increased survival of the larvae.
The researchers are now testing this approach in mice, and they envision that eventually the technology could be adapted to deliver the CRISPR components to treat infections or remove other unwanted bacteria in human patients.
“This work represents a very interesting genetic method for killing antibiotic-resistant bacteria in a directed fashion, which in principle could help to combat the spread of antibiotic resistance fueled by excessive broad-spectrum treatment,” says Ahmad Khalil, an assistant professor of biomedical engineering at Boston University who was not part of the research team.
High-speed genetic screens
Another tool Lu has developed to fight antibiotic resistance is a technology called CombiGEM. This system, described in the Proceedings of the National Academy of Sciences the week of Aug. 11, allows scientists to rapidly and systematically search for genetic combinations that sensitize bacteria to different antibiotics.
To test the system, Lu and his graduate student, Allen Cheng, created a library of 34,000 pairs of bacterial genes. All of these genes code for transcription factors, which are proteins that control the expression of other genes. Each gene pair is contained on a single piece of DNA that also includes a six-base-pair barcode for each gene. These barcodes allow the researchers to rapidly identify the genes in each pair without having to sequence the entire strand of DNA.
“You can take advantage of really high-throughput sequencing technologies that allow you, in a single shot, to assess millions of genetic combinations simultaneously and pick out the ones that are successful,” Lu says.
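The barcode readout Lu describes can be sketched as a lookup: if each gene’s six-base-pair barcode sits at a known position in a sequencing read, identifying the gene pair is a dictionary lookup rather than full-length sequencing. The barcode-to-gene table and read layout below are hypothetical; a real pipeline would also handle sequencing errors and variable offsets.

```python
from collections import Counter

# Hypothetical 6-bp barcode -> transcription-factor gene assignments.
BARCODES = {"AAATTT": "tf_A", "CCCGGG": "tf_B", "ACGTAC": "tf_C"}

def decode_pair(read, bc_len=6):
    """Extract the two gene barcodes, assumed to sit back-to-back
    at the start of the read, and map them to gene names."""
    b1, b2 = read[:bc_len], read[bc_len:2 * bc_len]
    return (BARCODES.get(b1), BARCODES.get(b2))

def tally(reads):
    """Count how often each decodable gene pair appears in the reads."""
    counts = Counter()
    for r in reads:
        pair = decode_pair(r)
        if None not in pair:  # skip reads with unrecognized barcodes
            counts[pair] += 1
    return counts
```

Comparing these pair counts before and after antibiotic treatment is how one would pick out, in a single shot, which of the thousands of combinations sensitize the bacteria.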
The researchers then delivered the gene pairs into drug-resistant bacteria and treated them with different antibiotics. For each antibiotic, they identified gene combinations that enhanced the killing of target bacteria by 10,000- to 1,000,000-fold. The researchers are now investigating how these genes exert their effects.
“This platform allows you to discover the combinations that are really interesting, but it doesn’t necessarily tell you why they work well,” Lu says. “This is a high-throughput technology for uncovering genetic combinations that look really interesting, and then you have to go downstream and figure out the mechanisms.”
Once scientists understand how these genes influence antibiotic resistance, they could try to design new drugs that mimic the effects, Lu says. It is also possible that the genes themselves could be used as a treatment, if researchers can find a safe and effective way to deliver them.
CombiGEM also enables the generation of combinations of three or four genes in a more powerful way than previously existing methods. “We’re excited about the application of CombiGEM to probe complex multifactorial phenotypes, such as stem cell differentiation, cancer biology, and synthetic circuits,” Lu says.
The research was funded by the National Institutes of Health, the Defense Threat Reduction Agency, the U.S. Army Research Laboratory, the U.S. Army Research Office, the Office of Naval Research, and the Ellison Foundation.
Publication: Allen A. Cheng, et al., “Enhanced killing of antibiotic-resistant bacteria enabled by massively parallel combinatorial genetics,” PNAS, 2014, vol. 111 no. 34, 12462–12467; doi: 10.1073/pnas.1400093111

New “Gold Nanocluster” Technology Improves Solar Cell Performance

Scientists Advance Solar Power with New Gold Nanocluster Technology
We report for the first time the fabrication of nanocomposite hole-blocking layers consisting of poly-3,4-ethylene-dioxythiophene:poly-styrene-sulfonate (PEDOT:PSS) thin films incorporating networks of gold nanoparticles assembled from Au144(SCH2CH2Ph)60, a molecular gold precursor. These thin films can be prepared reproducibly on indium tin oxide by spin-coating Au144(SCH2CH2Ph)60 solutions in chlorobenzene onto it, annealing the resulting thin film at 400 °C, and subsequently spin-coating PEDOT:PSS on top. The use of our nanocomposite hole-blocking layers for enhancing the photoconversion efficiency of bulk heterojunction organic solar cells is demonstrated. By varying the concentration of Au144(SCH2CH2Ph)60 in the starting solution and the annealing time, different gold nanostructures were obtained ranging from individual gold nanoparticles (AuNPs) to tessellated networks of gold nanostructures (Tess-AuNPs). Improvement in organic solar cell efficiencies up to 10% relative to a reference cell is demonstrated with Tess-AuNPs embedded in PEDOT:PSS.
Using approximately 10,000 times less gold than in previous studies, researchers from Western University improve solar power performance with new “gold nanocluster” technology.
Scientists at Western University have discovered that a small molecule created with just 144 atoms of gold can increase solar cell performance by more than 10 per cent. These findings, published recently in the high-impact journal Nanoscale, represent a game-changing innovation that holds the potential to take solar power mainstream and dramatically decrease the world’s dependence on traditional, resource-based sources of energy, says Giovanni Fanchini from Western’s Faculty of Science.
Fanchini, the Canada Research Chair in Carbon-based Nanomaterials and Nano-optoelectronics, says the new technology could easily be fast-tracked and integrated into prototypes of solar panels in one to two years and solar-powered phones in as little as five years.
“Every time you recharge your cell phone, you have to plug it in,” says Fanchini, an assistant professor in Western’s Department of Physics and Astronomy. “What if you could charge mobile devices like phones, tablets or laptops on the go? Not only would it be convenient, but the potential energy savings would be significant.”
The Western researchers have already started working with manufacturers of solar components to integrate their findings into existing solar cell technology and are excited about the potential.
“The Canadian business industry already has tremendous know-how in solar manufacturing,” says Fanchini. “Our invention is modular, an add-on to the existing production process, so we anticipate a working prototype very quickly.”
To make these nanoplasmonic enhancements, Fanchini and his team use “gold nanoclusters” as building blocks to create a flexible network of antennae on more traditional solar panels that captures more light. While nanotechnology is the science of creating functional systems at the molecular level, nanoplasmonics investigates the interaction of light with and within these systems.
“Picture an extremely delicate fishnet of gold,” explains Fanchini, noting that the antennae are so minuscule they are invisible even under a conventional optical microscope. “The fishnet catches the light emitted by the sun and draws it into the active region of the solar cell.”
According to Fanchini, the spectrum of light reflected by gold is centered on yellow, closely matching the spectrum of sunlight. This makes gold superior for such antennae, as it greatly amplifies the amount of sunlight going directly into the device.
“Gold is very robust, resilient to oxidization and not easily damaged, making it the perfect material for long-term use,” says Fanchini. “And gold can also be recycled.”
It has been known for some time that larger gold nanoparticles enhance solar cell performance, but the Western team is getting results with “a ridiculously small amount” – approximately 10,000 times less than previous studies, which is 10,000 times less expensive too.
Publication: Reg Bauld, et al., “Tessellated gold nanostructures from Au144(SCH2CH2Ph)60 molecular precursors and their use in organic solar cell enhancement,” Nanoscale, 2014,6, 7570-7575; DOI: 10.1039/C4NR01821D

First Mars Observations from NASA’s MAVEN Spacecraft

First MAVEN Images of Mars
The above images are the first observations of the extended upper atmosphere from NASA’s MAVEN spacecraft.
NASA’s Mars Atmosphere and Volatile Evolution (MAVEN) spacecraft has obtained its first observations of the extended upper atmosphere surrounding Mars.
The Imaging Ultraviolet Spectrograph (IUVS) instrument obtained these false-color images eight hours after the successful completion of Mars orbit insertion by the spacecraft at 10:24 p.m. EDT Sunday, September 21, after a 10-month journey.
The image shows the planet from an altitude of 36,500 km in three ultraviolet wavelength bands. Blue shows the ultraviolet light from the sun scattered from atomic hydrogen gas in an extended cloud that goes to thousands of kilometers above the planet’s surface. Green shows a different wavelength of ultraviolet light that is primarily sunlight reflected off of atomic oxygen, showing the smaller oxygen cloud. Red shows ultraviolet sunlight reflected from the planet’s surface; the bright spot in the lower right is light reflected either from polar ice or clouds.
The oxygen gas is held close to the planet by Mars’ gravity, while lighter hydrogen gas is present to higher altitudes and extends past the edges of the image. These gases derive from the breakdown of water and carbon dioxide in Mars’ atmosphere. Over the course of its one-Earth-year primary science mission, MAVEN observations like these will be used to determine the loss rate of hydrogen and oxygen from the Martian atmosphere. These observations will allow us to determine the amount of water that has escaped from the planet over time.
MAVEN is the first spacecraft dedicated to exploring the tenuous upper atmosphere of Mars.

Theorists Find a New Way to Improve Solar Cell Efficiency

Theorists Discover a New Way to Improve Solar Cell Efficiency
A representation of one-way exciton currents (shown as light-colored trails) in the designed two-dimensional porphyrin lattice. Illustration: Lauren Aleza Kaye
Researchers at MIT and Harvard have discovered a way of rendering excitons immune to getting stuck in minuscule defects as they hop through a material, which could possibly lead to improving efficiency in photovoltaic devices.
A major limitation in the performance of solar cells happens within the photovoltaic material itself: When photons strike the molecules of a solar cell, they transfer their energy, producing quasi-particles called excitons — an energized state of molecules. That energized state can hop from one molecule to the next until it’s transferred to electrons in a wire, which can light up a bulb or turn a motor.
But as the excitons hop through the material, they are prone to getting stuck in minuscule defects, or traps — causing them to release their energy as wasted light.
Now a team of researchers at MIT and Harvard University has found a way of rendering excitons immune to these traps, possibly improving photovoltaic devices’ efficiency. The work is described in a paper in the journal Nature Materials.
Their approach is based on recent research on exotic electronic states known as topological insulators, in which the bulk of a material is an electrical insulator — that is, it does not allow electrons to move freely — while its surface is a good conductor.
The MIT-Harvard team used this underlying principle, called topological protection, but applied it to excitons instead of electrons, explains lead author Joel Yuen, a postdoc in MIT’s Center for Excitonics, part of the Research Laboratory of Electronics. Topological protection, he says, “has been a very popular idea in the physics and materials communities in the last few years,” and has been successfully applied to both electronic and photonic materials.
Moving on the surface
Topological excitons would move only at the surface of a material, Yuen explains, with the direction of their motion determined by the direction of an applied magnetic field. In that respect, their behavior is similar to that of topological electrons or photons.
In its theoretical analysis, the team studied the behavior of excitons in an organic material, a porphyrin thin film, and determined that their motion through the material would be immune to the kind of defects that tend to trap excitons in conventional solar cells.
The choice of porphyrin for this analysis was based on the fact that it is a well-known and widely studied family of materials, says co-author Semion Saikin, a postdoc at Harvard and an affiliate of the Center for Excitonics. The next step, he says, will be to extend the analysis to other kinds of materials.
While the work so far has been theoretical, experimentalists are eager to pursue the concept. Ultimately, this approach could lead to novel circuits that are similar to electronic devices but based on controlling the flow of excitons rather than electrons, Yuen says. “If there are ever excitonic circuits,” he says, “this could be the mechanism” that governs their functioning. But the likely first application of the work would be in creating solar cells that are less vulnerable to the trapping of excitons.
Eric Bittner, a professor of chemistry at the University of Houston who was not associated with this work, says, “The work is interesting on both the fundamental and practical levels. On the fundamental side, it is intriguing that one may be able to create excitonic materials with topological properties. This opens a new avenue for both theoretical and experimental work. … On the practical side, the interesting properties of these materials and the fact that we’re talking about pretty simple starting components — porphyrin thin films — makes them novel materials for new devices.”
The work received support from the U.S. Department of Energy and the Defense Threat Reduction Agency. Norman Yao, a graduate student at Harvard, was also a co-author.
Publication: Joel Yuen-Zhou, et al., “Topologically protected excitons in porphyrin thin films,” Nature Materials, 2014; doi:10.1038/nmat4073

New Research Mathematically Proves Quantum Effects Stop the Formation of Black Holes

Researchers Prove Quantum Effects Stop the Formation of Black Holes
New research by Laura Mersini-Houghton at UNC-Chapel Hill mathematically proves that quantum effects are strong enough to stop the formation of black holes, opening up a new round of discussions about the origins of the universe.
Black holes have long captured the public imagination and been the subject of popular culture, from Star Trek to Hollywood. They are the ultimate unknown – the blackest and most dense objects in the universe that do not even let light escape.
And as if they weren’t bizarre enough to begin with, now add this to the mix: they don’t exist.
By merging two seemingly conflicting theories, Laura Mersini-Houghton, a physics professor at UNC-Chapel Hill in the College of Arts and Sciences, has proven, mathematically, that black holes can never come into being in the first place. The work not only forces scientists to reimagine the fabric of space-time, but also rethink the origins of the universe.
“I’m still not over the shock,” said Mersini-Houghton. “We’ve been studying this problem for more than 50 years and this solution gives us a lot to think about.”
For decades, black holes were thought to form when a massive star collapses under its own gravity to a single point in space – imagine the Earth being squished into a ball the size of a peanut – called a singularity. So the story went, an invisible membrane known as the event horizon surrounds the singularity and crossing this horizon means that you could never cross back. It’s the point where a black hole’s gravitational pull is so strong that nothing can escape it.
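The "size of a peanut" comparison can be checked with the Schwarzschild radius, r_s = 2GM/c^2 — the radius below which a mass must be compressed before an event horizon forms. A quick sketch (the constants are standard physical values, not figures from the article):

```python
# Schwarzschild radius of the Earth: the size it would have to be
# squeezed down to in order to form a black hole.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of the Earth, kg

r_s = 2 * G * M_earth / c**2
print(f"Schwarzschild radius of Earth: {r_s * 1000:.1f} mm")  # about 8.9 mm
```

Roughly nine millimeters across — peanut-sized indeed.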
The reason black holes are so bizarre is that they pit two fundamental theories of the universe against each other. Einstein’s theory of gravity predicts the formation of black holes, but a fundamental law of quantum theory states that no information from the universe can ever disappear. Efforts to combine these two theories led to mathematical nonsense and became known as the information loss paradox.
In 1974, Stephen Hawking used quantum mechanics to show that black holes emit radiation. Since then, scientists have detected fingerprints in the cosmos that are consistent with this radiation, identifying an ever-increasing list of the universe’s black holes.
But now Mersini-Houghton describes an entirely new scenario. She and Hawking both agree that as a star collapses under its own gravity, it produces Hawking radiation. However, in her new work, Mersini-Houghton shows that by giving off this radiation, the star also sheds mass. So much so that as it shrinks it no longer has the density to become a black hole.
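As a rough sketch of why radiating implies shedding mass — these are the standard semiclassical formulas for a body of mass M, not Mersini-Houghton's full backreaction calculation — the Hawking temperature, the radiated power, and the resulting mass loss are:

```latex
T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad
P = \frac{\hbar c^6}{15360\,\pi G^2 M^2}, \qquad
\frac{dM}{dt} = -\frac{P}{c^2}
```

Note that the power grows as the mass shrinks; Mersini-Houghton's argument is that this feedback during collapse is strong enough that the star never reaches the density needed to form a horizon.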
Before a black hole can form, the dying star swells one last time and then explodes. A singularity never forms and neither does an event horizon. The take home message of her work is clear: there is no such thing as a black hole.
The paper, which was recently submitted to arXiv, an online repository of physics papers that is not peer-reviewed, offers exact numerical solutions to this problem and was done in collaboration with Harald Pfeiffer, an expert on numerical relativity at the University of Toronto. An earlier paper by Mersini-Houghton, originally submitted to arXiv in June, was published in the journal Physics Letters B, and offers approximate solutions to the problem.
Experimental evidence may one day provide physical proof as to whether or not black holes exist in the universe. But for now, Mersini-Houghton says the mathematics are conclusive.
Many physicists and astronomers believe that our universe originated from a singularity that began expanding with the Big Bang. However, if singularities do not exist, then physicists have to rethink their ideas of the Big Bang and whether it ever happened.
“Physicists have been trying to merge these two theories – Einstein’s theory of gravity and quantum mechanics – for decades, but this scenario brings these two theories together, into harmony,” said Mersini-Houghton. “And that’s a big deal.”
Publication: Laura Mersini-Houghton, “Backreaction of Hawking radiation on a gravitationally collapsing star I: Black holes?,” Physics Letters B, 2014, DOI: 10.1016/j.physletb.2014.09.018

Why Binary Stars Are So Abundant

New Models Show Most Stars Were Formed When Protostars Broke Up
These images show the distribution of density in the central plane of a three-dimensional model of a molecular cloud core from which stars are born. The model computes the cloud’s evolution over the free-fall timescale, which is how long it would take an object to collapse under its own gravity without any opposing forces interfering. The free-fall time is a common metric for measuring the timescale of astrophysical processes. In a) the free-fall time is 0.0, meaning this is the initial configuration of the cloud; the subsequent panels show the cloud core in various stages of collapse: b) a free-fall time of 1.40, or 66,080 years; c) a free-fall time of 1.51, or 71,272 years; and d) a free-fall time of 1.68, or 79,296 years. Collapse takes somewhat longer than a free-fall time in this model because of the presence of magnetic fields, which slow the collapse process but are not strong enough to prevent the cloud from fragmenting into a multiple protostar system (d). For context, the region shown in a) and b) is about 0.21 light years (or 2.0 × 10^17 centimeters) across, while the region shown in c) and d) is about 0.02 light years (or 2.0 × 10^16 cm) across. Image is provided courtesy of Alan Boss
New research from the Carnegie Institution for Science helps to explain why binary stars are so abundant.
Washington, D.C. — New modeling studies from Carnegie’s Alan Boss demonstrate that most of the stars we see were formed when unstable clusters of newly formed protostars broke up. These protostars are born out of rotating clouds of dust and gas, which act as nurseries for star formation. Rare clusters of multiple protostars remain stable and mature into multi-star systems. The unstable ones will eject stars until they achieve stability and end up as single or binary stars. The work is published in The Astrophysical Journal.
About two-thirds of all stars within 81 light years (25 parsecs) of Earth are binary or part of multi-star systems. Younger star and protostar populations have a higher frequency of multi-star systems than older ones, an observation that ties in with Boss’ findings that many single-star systems start out as binary or multi-star systems from which stars are ejected to achieve stability.
Protostar clusters are formed when the core of a molecular cloud collapses due to its own gravity and breaks up into pieces, a process called fragmentation. The physical forces involved in the collapse are subjects of great interest to scientists, because they can teach us about the life cycles of stars and how our own Sun may have been born. One force that affects collapse is the magnetic field that threads the clouds, potentially stifling the fragmentation process.
Boss’ work shows that when a cloud collapses, the fragmentation process depends on the initial strength of the magnetic field, which acts against the gravity that causes the collapse. Above a certain magnetic field strength, single protostars are formed, while below it, the cloud fragments into multiple protostars. This second scenario is evidently commonplace, given the large number of binary and multi-star systems, although single stars can form by this mechanism as well through ejection from a cluster.
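The free-fall timescales quoted in the figure caption can be cross-checked against the standard formula for a uniform cloud of density rho, t_ff = sqrt(3*pi / (32*G*rho)). A minimal sketch — the density inferred at the end is a back-of-envelope estimate from the caption's numbers, not a value from the paper:

```python
import math

G = 6.674e-8              # gravitational constant, CGS (cm^3 g^-1 s^-2)
SECONDS_PER_YEAR = 3.156e7

# The caption's conversions (1.40 free-fall times = 66,080 yr, etc.)
# all imply a single free-fall time of 47,200 years:
t_ff_years = 66080 / 1.40
for multiple, years in [(1.40, 66080), (1.51, 71272), (1.68, 79296)]:
    assert abs(multiple * t_ff_years - years) < 1.0

# Inverting t_ff = sqrt(3*pi / (32*G*rho)) recovers the cloud's mean density:
t_ff_sec = t_ff_years * SECONDS_PER_YEAR
rho = 3 * math.pi / (32 * G * t_ff_sec**2)
print(f"t_ff = {t_ff_years:.0f} yr  ->  mean density ~ {rho:.1e} g/cm^3")
```

The inferred density, roughly 2 × 10^-18 g/cm^3, is a plausible mean density for a dense molecular cloud core, consistent with the star-forming regions these models describe.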
“When we look up at the night sky,” Boss said, “the human eye is unable to see that binary stars are the rule, rather than the exception. These new calculations help to explain why binaries are so abundant.”
Publication: Alan P. Boss and Sandra A. Keiser, “Collapse and Fragmentation of Magnetic Molecular Cloud Cores with the Enzo AMR MHD Code. II. Prolate and Oblate Cores,” 2014, ApJ, 794, 44; doi:10.1088/0004-637X/794/1/44
