How physicists built the world’s smallest particle accelerator https://www.popsci.com/science/tiniest-particle-accelerator/ Thu, 02 Nov 2023 21:00:00 +0000
The particle accelerator on a one-cent coin.
A microchip containing the electron-accelerating structures, with a one-cent coin for comparison. FAU/Laser Physics, Stefanie Kraus, Julian Litze

The coin-sized device is a proof-of-concept, but could inspire future medical devices.

If you think of a particle accelerator, what may come to mind is something like CERN’s Large Hadron Collider (LHC): a multibillion-dollar colossus whose ring runs about 17 miles around and crosses an international border in the name of unlocking how the universe works.

But particle accelerators take many forms. There are more than 30,000 accelerators in the world today. While some of them—including LHC—are designed to unveil the universe’s secrets, the vast majority have far more Earthly purposes. They’re used for everything from generating beams of brilliant light to manufacturing electronics to imaging the body and treating cancer. In fact, a hospital can buy a room-sized medical accelerator for just a few hundred thousand dollars. And, as of last month, scientists have made another curious addition to the list: the smallest particle accelerator yet.

Physicists have fabricated an accelerator the size of a coin, publishing their work in Nature on October 18. This device is just a tech demo, but its creators hope it opens the gateway to even smaller accelerators that could fit on a silicon chip.

“I consider this paper to be really interesting and cool physics, for sure, and it’s been an effort that’s been going on for a long time,” says Howard Milchberg, a physicist at the University of Maryland, who was not involved with the research.

[Related: The green revolution is coming for power-hungry particle accelerators]

This mini-accelerator is not merely a Lilliputian LHC. Depending on its operational calendar, LHC fires protons or the nuclei of lead atoms around a large circle. This miniaturized accelerator instead fires electrons down a straight line. 

Plenty of other linear electron accelerators have existed, including most famously the now-dismantled two-mile-long Stanford Linear Collider. Traditionally, electron accelerators boost their projectiles by shooting them through metallic cavities, typically made from copper, that contain twitching electromagnetic fields. The chambers thus push particles along like surfers on electric waves. 

But some physicists believe that these old-fashioned accelerators are not ideal. The metallic cavities can sustain only so much electric field before breaking down, and they’re unwieldy, requiring large supporting equipment. The researchers’ new accelerator instead uses precise laser shots to push the electrons.

Physicists have been trying to make laser accelerators since the 1960s. Called photonic accelerators, after photonics, the science of light, they can be smaller and more cost-efficient than their cavity-based counterparts. But only in the past decade have lasers become precise and affordable enough for even experimental photonic accelerators to be practical.

Making them smaller, then, brought its own series of daunting obstacles. A major stumbling block had been that engineers lacked the sophisticated technology needed to craft a mini accelerator’s tiny parts.

Take the coin-sized accelerator the researchers built. First, it generates electrons using a part repurposed from an electron microscope. Then, the device pushes the electrons down a colonnade: two rows of several hundred silicon pillars, each just 2 micrometers tall, with an even smaller gap between the rows. A laser strikes the top of the pillars, creating electric fields that boost the electrons squeezed inside—at least on paper.
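
For a sense of scale, here is a back-of-envelope sketch of the energy such a structure can impart. Both input numbers are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope energy gain for a laser-driven microstructure.
# Assumed values: demonstrated dielectric laser accelerator gradients
# range from tens of MeV/m up to roughly 1 GeV/m; the length is a
# plausible chip-scale figure, not one taken from the Nature paper.
gradient_V_per_m = 30e6      # assumed accelerating gradient: 30 MV/m
length_m = 0.5e-3            # assumed structure length: 0.5 mm

# An electron crossing a potential of V volts gains V electronvolts.
energy_gain_eV = gradient_V_per_m * length_m
print(f"energy gain ≈ {energy_gain_eV / 1e3:.0f} keV")   # ≈ 15 keV
```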

“Making such small features with enough precision is extremely demanding,” says Tomáš Chlouba, a physicist at Friedrich-Alexander-Universität Erlangen-Nürnberg in Germany, and one of the paper’s authors. “You need really top-of-the-line devices…these are not cheap devices, and these are not devices that were available in the 90s.”

[Related: Scientists found a fleeting particle from the universe’s first moments]

But chip fabrication is always advancing. Now, Chlouba and his colleagues could rely on techniques that are already common in the world of semiconductor manufacturing. They fashioned a successful prototype. The device can deliver only about 1 electron per second, a tiny trickle by particle accelerator standards. (The average wire inside the average device in your home carries quadrillions of times more electrons.) Moreover, the electrons have about the same energy as those inside an old-style cathode ray tube television: again, a pittance by particle accelerator standards. 
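
The wire comparison holds up to a quick calculation:

```python
# Sanity check: electrons per second carried by an ordinary current.
# One ampere is one coulomb per second.
e_charge = 1.602e-19                     # coulombs per electron
current_A = 1.0                          # assume a modest 1 A household current
electrons_per_second = current_A / e_charge
print(f"{electrons_per_second:.2e} electrons/s")        # ≈ 6.24e18

# Against the chip's ~1 electron per second, that is indeed
# "quadrillions of times more" (and then some).
```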

As a result, “I don’t know how practical it could be,” says Milchberg. Fitting more electrons down the colonnade would be like hitting a bullseye with a shotgun blast, he says.

Indeed, Chlouba makes it patently clear that he and his colleagues are very far away from using this accelerator for anything resembling a real-world application. If they want to do that, they’ll need to make many more electrons, with much higher energies. Milchberg says it is also not clear if batches of electrons can fit together down the colonnade without their negative electric charges pushing them apart.

But if researchers succeed at overcoming these hurdles, Chlouba could imagine a host of applications for particle accelerators that could be arranged on a standard silicon chip. Medical professionals already use electron accelerators to treat skin cancer. With that in mind, some doctors might imagine an accelerator that is small enough to insert inside the body via an endoscope. “This is smaller, cheaper, and fits everywhere,” Chlouba says.

How this computer scientist is rethinking color theory https://www.popsci.com/science/color-theory-schrodinger-algorithm/ Tue, 31 Oct 2023 13:00:00 +0000
scientist pulls back stage curtain with colorful shapes behind; illustration
Aaron Fernandez for Popular Science

There’s a flaw in the famous model that programmers use to translate color to pixels.

BENEATH A CLEAR SKY and a high sun, a regular human eye can see nearly the entire visible color spectrum. Remove direct sunlight, and a reflection offers only a sliver of the rainbow. But despite darkness distorting our points of reference, we can still determine color in shadow. Many factors influence the hues we detect: our eyes, our brains, the air, objects the light bounces from, Earth’s geometry, and even our visual memories.

Trying to replicate that breadth and sensitivity to color on a computer monitor or printer is both a nightmare and a dream for technologists. And that’s exactly the problem that Roxana Bujack, a staff scientist on the Data Science at Scale Team at Los Alamos National Laboratory in New Mexico, is trying to solve with computations. Math is behind “everything that happens in Photoshop,” Bujack says. “It’s all just matrices and operations, but you see immediately with your eyes what this math does.”

Any answer to this problem would be a far cry from art-class color wheels, or even how most computer screens and printers operate today. Digital work relies on the RGB (red, green, blue) model, which adjusts the brightness of those three colors of light from a monitor to produce the color of each pixel. The CMY (cyan, magenta, yellow) model behind printers, meanwhile, is subtractive, removing colors from a white base; if you want to print yellow on card stock, the printer combines the CMY inks to change the lighter background by varying degrees to reach the desired color.

These color models were last updated a century ago. Erwin Schrödinger, of quantum cat fame, improved RGB by building on the work of mathematician Bernhard Riemann and physicist Hermann von Helmholtz. Realizing that the distance between, say, a rosy red and a drab green could not be measured on a straight line, these scientists looked for a more flexible model. They shifted from representations of color in a familiar physical space, what’s known as Euclidean geometry, to the warped world of Riemannian geometry.

Bujack likens their interpretation to an airline service map. Routes aren’t indicated with straight lines, but rather half-moons that reflect Earth’s curvature. “Suppose you take two colors and then pick one that lies on the shortest path between them—say, magenta in the middle, purple to the right, and pink to the left. Then you measure the paths from magenta to purple and from magenta to pink,” she says. “The sum of those two path segments should equal the length of the whole path drawn from magenta to pink, representing the perceived difference between those two colors. It should add up, just like the flight distances from Seattle through Reykjavik to Amsterdam.”
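
In mathematical terms, the property Bujack describes is additivity along a geodesic. As a sketch:

```latex
% Additivity along a shortest path (geodesic) in a Riemannian color space.
% If magenta sits on the shortest path between pink and purple, then:
d(\mathrm{pink}, \mathrm{purple}) =
    d(\mathrm{pink}, \mathrm{magenta}) + d(\mathrm{magenta}, \mathrm{purple})
% just as leg lengths add along the great-circle route
% Seattle -> Reykjavik -> Amsterdam.
```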

Schrödinger’s 3D model has been the foundation of color theory for more than 100 years. Scientists and developers apply it when seeking to perfect the digital representation of colors on the screens of machines. It helps translate into pixels the ways by which a human eye distinguishes different shades, like the way you’re able to recognize this text as black and the background as white without a blur.

For Bujack, the contours of this space are familiar. She studied mathematics at Leipzig University in Germany, where a course on image processing propelled her into a subset of that field. That’s where she became fascinated by the math that powers programs as diverse as Photoshop and processor-consuming video games. She graduated with a doctorate in computer science in 2014 before landing at the Los Alamos Laboratory, former home to the Manhattan Project.

There, in 2021, her team launched a project with a modest aim: to build algorithms that would design color maps, streamlining the conversion of pigments into digital data, Bujack says. Illustrators who use Photoshop, Final Cut, and similar programs would benefit; so would the climate scientists, physicists, and weather researchers who represent numerical data with colors.

But they discovered an inconsistency that upended the century-old understanding of the field. “Schrödinger’s work was super-advanced, realizing we need a curved space to describe color space and that this stupid Euclidean space is not working out,” says Bujack. But Schrödinger and his collaborators “did not notice that we need a more robust model.”

Schrödinger’s math doesn’t work, Bujack and her team found, because it fails to predict the correct hues between two colors. On a flight path—halfway between Seattle and Reykjavik, for example—you can calculate how long you have left in your journey. But a midpoint between purple and red does not produce the expected color. The old 3D approach overestimated how different we perceive one shade to be from the next. The Los Alamos team published its findings in April 2022 in the journal Proceedings of the National Academy of Sciences. “As a scientist, I have always dreamed of proving someone famous wrong,” says Bujack. “However, this level of fame exceeds even my wildest dreams.”
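
Stated as an inequality (a paraphrase of the team's finding, not their exact notation):

```latex
% "Diminishing returns": even for a midpoint m on the path between
% colors a and b, perceived differences come up short of the sum,
\Delta(a, b) < \Delta(a, m) + \Delta(m, b)
% whereas a Riemannian metric forces equality along the shortest path.
% Hence the old model overestimates large color differences.
```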

But that revelation did not come with an obvious solution. “The current model is not accurate,” says Bujack. “[But] that doesn’t mean we have an off-the-shelf model to replace it.” Because mapping out the new space is “way more laborious” than Schrödinger’s calculations, a mathematical update is “years and years and years in the future.”

The consequences of this discovery, however, could make their way to our computers sooner. Nick Spiker, a color engineer working on IDT Maker, a proprietary digital relighting system, consulted with Bujack after her study was published. He’s since filed a patent application for a product that could help video producers and photographers change the apparent time of day in their videos and pictures.

While it hasn’t led to a replacement model yet, Bujack’s insight will help build something better—for instance, “If you’re watching Netflix or any visual content and you want accurate color,” Spiker says. He adds, “Now this is going to make images appear more realistic than ever before.”

Gravitational wave detector now squeezes light to find more black holes https://www.popsci.com/science/ligo-quantum-squeezing-detections/ Fri, 27 Oct 2023 10:00:00 +0000
Dark black holes merge together in a brown, star-studded illustration.
Two merging black holes, each roughly 30 times the mass of the sun, in a computer simulation.

The cutting-edge move has boosted the number of cosmic collisions LIGO can hear by up to 70 percent.

Gravitational wave observatories, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO), are exercises in extreme sensitivity. LIGO’s two experimental ears—one in Louisiana, another in Washington state—listen to ripples in space-time left behind by objects that include black holes and neutron stars. To do this, LIGO carefully watches for minute fluctuations in miles-long laser beams. The challenge is that everything from rumbling tractors to the weather to quantum noise can cause disturbances of their own. A huge part of gravitational wave observation is the science of weeding out unwanted noise.

Now, following a round of upgrades, both of LIGO’s ears can hear 60 percent more events than ever before. Much of the credit goes to a system that corrects for barely perceptible quantum noise by very literally squeezing the light.

Physicists and engineers have been tinkering with light-squeezing in the lab for decades, and their work is showing real results. “It’s not a demonstration anymore,” says Lee McCuller, a physicist at Caltech. “We’re actually using it.” McCuller and his colleagues will publish their work in the journal Physical Review X on October 30.

Gravitational waves are a consequence of how gravity works, as predicted by general relativity. As a falling rock casts ripples in water, sufficiently spectacular events—say, two black holes or two neutron stars merging—cast waves in the fabric of space-time. Listening in on those gravitational waves allows astronomers to peek at massive objects like black holes and neutron stars that are otherwise difficult to see clearly. Scientists can only pull this off thanks to devices like LIGO.

LIGO’s ears are shaped like very large Ls, their arms precisely 4 kilometers (2.49 miles) long. A laser beam is split in two, and each half travels down one of the arms. The beams bounce off mirrors at the far ends and return to the vertex, where they are recombined into a single beam. Tiny shifts in space-time—gravitational waves—can subtly stretch and squeeze either arm, etching patterns in the recombined beam’s light.
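
A toy model makes the required sensitivity concrete. The sketch below is a deliberately simplified Michelson interferometer that ignores LIGO's real-world additions (Fabry-Perot arm cavities, power recycling, squeezing); the arm length comes from the article, while the laser wavelength is an assumed value:

```python
import numpy as np

# Toy Michelson interferometer: output power vs. differential arm length.
L = 4000.0            # arm length, meters (from the article)
lam = 1064e-9         # laser wavelength, meters (assumed Nd:YAG value)
h = 1e-21             # a typical gravitational-wave strain
dL = h * L            # differential arm-length change: 4e-18 m

# Phase difference between the two recombined beams; the factor of 4 pi
# reflects the out-and-back round trip along each arm.
dphi = 4 * np.pi * dL / lam
print(f"arm stretch: {dL:.1e} m, phase shift: {dphi:.1e} rad")

# Fraction of input power leaking out of the dark-fringe port:
P_out = np.sin(dphi / 2) ** 2
print(f"output power fraction: {P_out:.1e}")
```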

The length shifts are extremely subtle, far too slight to even dream of seeing with the naked eye. The task of detecting such a slight shift becomes even trickier when LIGO detectors are prone to earthquakes, weather, and human activity, all of which create noise that rattles the mirrors or shakes up the laser beams.

Physicists have developed ways of cutting out all that noise. They can keep the arms in a vacuum, devoid of all other matter, to prevent sound waves. They can suspend mirrors to isolate them from vibrations. They can measure the noise of the outside world and adjust the instruments accordingly, like a very large noise-cancelling headset. 

Green light shines on a complex device used to reduce quantum noise.
One of LIGO’s quantum squeezers in operation.

But something that these methods cannot filter out is quantum physics. Even in a perfect vacuum, the inherent randomness of the universe at its tiniest scales—particles popping in and out of existence—makes its mark. “You’ve got a natural fluctuation on the level of your measurement that can mask a weak gravitational wave signal,” says Patrick Sutton, an astrophysicist at Cardiff University, a member of the LIGO-Virgo collaboration who wasn’t an author of the new study.

[Related: We’ve recorded a whopping 35 gravitational wave events in just 5 months]

LIGO made the first-ever confirmed detection of gravitational waves in 2015, announced to the world in early 2016. Around the same time, its operators were thinking about ways to weed out the quantum disturbances. Physicists can manipulate light by trapping it within a crystal and “squeezing” it. They installed such a crystal on both LIGO detectors in time for the observatory’s third round of detections, which began in 2019.

The upgrade improved LIGO’s sensitivity to higher-frequency signals. But squeezing light like this came at a cost: it added noise at lower frequencies. This is problematic, because the gravitational waves from events we can detect—such as black hole mergers—tend to register at those lower frequencies in LIGO.

So, after COVID-19 forced LIGO to shut down in mid-2020, its operators added a new chamber to their squeezing setup. This chamber allows a more adaptive approach, manipulating different properties of light at different frequencies. To do this, the chamber must trap light for 3 milliseconds—enough time for light to travel hundreds of miles. The chamber began operation when LIGO’s fourth, current observing run switched on earlier this year.
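
That travel-distance figure checks out with quick arithmetic:

```latex
% Distance light covers in 3 milliseconds:
d = ct = \left(3\times10^{8}\ \mathrm{m/s}\right)\left(3\times10^{-3}\ \mathrm{s}\right)
       = 9\times10^{5}\ \mathrm{m} \approx 560\ \mathrm{miles}
```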

“It took a lot of engineering and design work and careful thinking to make this an upgrade that does its job and improves squeezing, but doesn’t introduce new noise,” McCuller says.

Both of LIGO’s detectors can now pick up gravitational waves from further into the cosmos and from a wider swath of space. LIGO now hears about 60 to 70 percent more events, according to Sutton. Better sensitivity also allows astronomers to measure gravitational waves with greatly increased precision, which lets them test the theory of general relativity. “It’s a significant jump,” Sutton says.

[Related: Astronomers now know how supermassive black holes blast us with energy]

LIGO’s fellow detector in Europe, Virgo, is implementing the same frequency-dependent squeezing based on its scientists’ own research. “We don’t currently know of any other technique that can improve upon this one,” McCuller says. “In terms of new techniques, this is the best one we actually know how to use at the moment.”

All the gravitational wave events we’ve seen so far came from pairs of black holes or neutron stars merging: loud, violent events that leave equally violent splashes. But astronomers would like to use gravitational waves to listen to other events, too, such as supernovas, gamma ray bursts, and pulsars. We aren’t quite there yet, but squeezing may get us closer by letting us take full advantage of the hardware we have.

“The key there is just to make the detectors ever more sensitive—bring that noise down and down and down—until, eventually we start seeing some,” Sutton says. “I think those will be very exciting days.”

This nuclear byproduct is fueling debate over Fukushima’s seafood https://www.popsci.com/environment/fukushima-water-releases-tritium/ Sat, 07 Oct 2023 19:00:00 +0000
Blue bins of fish and other seafood caught near the Fukushima nuclear plant in Japan
Fishery workers sort out seafood caught in Japan's Fukushima prefecture about a week after the country began discharging treated wastewater from the Fukushima Daiichi nuclear power plant. STR/JIJI Press/AFP via Getty Images

Is releasing water from the Fukushima nuclear plant into the ocean safe for marine life? Scientists say it's complicated.

On October 5, operators of Japan’s derelict Fukushima Daiichi nuclear power plant resumed pumping out wastewater held in the facility for the past 12 years. Over the following two-and-a-half weeks, Tokyo Electric Power Company (TEPCO) plans to release around 7,800 tons of treated water into the Pacific Ocean.

This is TEPCO’s second round of discharging nuclear plant wastewater, following an initial release in September. Plans call for the process, which was approved by and is being overseen by the Japanese government, to go on intermittently for some 30 years. But the approach has been controversial: Polls suggest that around 40 percent of the Japanese public opposes it, and it has sparked backlash from ecological activists, local fishermen, South Korean citizens, and the Chinese government, who fear that radiation will harm Pacific ecosystems and contaminate seafood.

Globally, some scientists argue there is no cause for concern. “The doses [of radiation] really are incredibly low,” says Jim Smith, an environmental scientist at the University of Portsmouth in the UK. “It’s less than a dental X-ray, even if you’re consuming seafood from that area.”

Smith vouches for the water release’s safety in an opinion article published on October 5 in the journal Science. The International Atomic Energy Agency has likewise endorsed TEPCO’s process. But experts in other fields have strong reservations about continuing with the pumping.

“There are hundreds of clear examples showing that, where radioactivity levels are high, there are deleterious consequences,” says Timothy Mousseau, a biologist at the University of South Carolina.

[Related: Nuclear war inspired peacetime ‘gamma gardens’ for growing mutant plants]

After a tsunami struck the Fukushima nuclear power plant in 2011, TEPCO started frantically shunting water into the plant’s stricken reactors to stop them from overheating and causing an even greater catastrophe. They stored the resulting 1.25 million tons of radioactive wastewater in tanks on-site. TEPCO and the Japanese government say that if Fukushima Daiichi is ever to be decommissioned, that water will have to go elsewhere.

In the past decade, TEPCO says it’s been able to treat the wastewater with a series of chemical reactions and cleanse most of the contaminant radioisotopes, including iodine-131, cesium-134, and cesium-137. But much of the current controversy swirls around one isotope the treatment couldn’t remove: tritium.

Tritium is a hydrogen isotope that has two extra neutrons. A byproduct of nuclear fission, it is radioactive with a half-life of around 12 years. Because tritium shares many properties with hydrogen, its atoms can infiltrate water molecules and create a radioactive liquid that looks and behaves almost identically to what we drink.

This makes separating it from nuclear wastewater challenging—in fact, no existing technology can treat tritium in the sheer volume of water contained at Fukushima. Some of the plan’s opponents argue that authorities should postpone any releases until scientists develop a system that could cleanse tritium from large amounts of water.

But TEPCO argues they’re running out of room to keep the wastewater. As a result, they have chosen to heavily dilute it—100 parts “clean” water for every 1 part of tritium water—and pipe it into the Pacific.
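
Two bits of arithmetic behind the plan, as a rough sketch. The 100:1 dilution comes from the article; the 12.3-year half-life is tritium's textbook value:

```python
# Concentration after dilution: 100 parts clean water per 1 part tritiated.
dilution = 1 / (100 + 1)
print(f"concentration after dilution: {dilution:.3f} of the original")  # ~0.010

# Decay in storage: fraction of tritium activity remaining after t years.
half_life_yr = 12.3
for t in (12, 30, 60):
    remaining = 0.5 ** (t / half_life_yr)
    print(f"after {t:2d} years, {remaining:.2f} of the activity remains")
```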

“There is no option for Fukushima or TEPCO but to release the water,” says Awadhesh Jha, an environmental toxicologist at the University of Plymouth in the UK. “This is an area which is prone to earthquakes and tsunamis. They can’t store it—they have to deal with it.”

Smith believes the same properties that allow tritium to hide in water molecules mean it doesn’t build up in marine life, citing environmental research by him and his colleagues. For decades, they’ve been studying fish and insects in lakes, pools, and ponds downstream from the nuclear disaster at Chernobyl. “We haven’t really found significant impacts of radiation on the ecosystem,” Smith says.

[Related: Ultra-powerful X-rays are helping physicists understand Chernobyl]

What’s more, Japanese officials testing seawater during the initial release did not find detectable levels of tritium, which Smith attributes to the wastewater’s dilution.

But the first release barely scratches the surface of Fukushima’s wastewater, and Jha warns that the scientific evidence regarding tritium’s effects in the sea is mixed. There are still a lot of questions about how potent tritium’s effects are on different biological systems and different parts of the food chain. Some results do suggest that the isotope can damage fish chromosomes as effectively as higher-energy X-rays or gamma rays, leading to negative health outcomes later in life.

Additionally, experts have found tritium can bind to organic matter in various ecosystems and persist there for decades. “These things have not been addressed adequately,” Jha says.

Smith argues that there’s less tritium in this release than in natural sources, like cosmic rays that strike the upper atmosphere and create tritium rain from above. Furthermore, he says that damage to fish DNA does not necessarily correlate to adverse effects for wildlife or people. “We know that radiation, even at low doses, can damage DNA, but that’s not sufficient to damage how the organism reproduces, how it lives, and how it develops,” he says.

“We don’t know that the effects of the water release will be negligible, because we don’t really know for sure how much radioactive material actually will be released in the future,” Mousseau counters. He adds that independent oversight of the process could quell some of the environmental and health concerns.

Smith and other proponents of TEPCO’s plan point out that it’s actually common practice in the nuclear industry. Power plants use water to naturally cool their reactors, leaving them with tons of tritium-laced waste to dispose of. Because tritium is, again, close to impossible to remove from large quantities of H2O with current technology, power plants (including ones in China) dump it back into bodies of water at concentrations that exceed those in the Fukushima releases.

“That doesn’t justify that we should keep discharging,” Jha says. “We need to do more work on what it does.”

If tritium levels stay as low as TEPCO and Smith assure they will, then the seafood from the region may very well be safe to eat. But plenty of experts like Mousseau and Jha don’t think there is enough scientific evidence to say that with certainty.

A new satellite’s ‘plasma brake’ uses Earth’s atmosphere to avoid becoming space junk https://www.popsci.com/science/estonia-plasma-brake-satellite/ Thu, 05 Oct 2023 16:30:00 +0000
Orbital cubesat plasma brake concept art
The tiny system will test a fuel-free, lightweight method for slowing down satellites. University of Tartu/ESA

The ESTCube-2 is set to launch this weekend.

It took eight years and the collaborative efforts of over 600 interdisciplinary undergraduate students, but Estonia’s second satellite is finally on track to launch later this week. Once in orbit, thanks to a lift aboard one of the European Space Agency’s Vega VV23 rockets, the tiny 8.5-pound ESTCube-2 will test an elegant method that could help clear the skies’ increasingly worrisome space junk: a novel “plasma brake.”

Designed by Finnish Meteorological Institute physicist Pekka Janhunen, the electric sail (E-sail) technology harnesses the physics underlying Earth’s ionosphere—the atmosphere’s electrically charged outer layer. Once in orbit, Estonia’s ESTCube-2 will deploy a nearly 165-foot-long tether composed of hair-thin aluminum wires that, once charged via solar power, will repel the almost motionless plasma within the ionosphere.

[Related: The FCC just dished out their first space junk fine.]

“​​Historically, tethers have been prone to snap in space due to micrometeorites or other hazards,” Janhunen explained in an October 3 statement ahead of the mission launch. “So ESTCube-2’s net-like microtether design brings added redundancy with two parallel and two zig-zagging bonded wires.”

If successful, the drag should slow the tiny cubesat down enough to shorten its orbital decay time to roughly two years. Not only that, but it would do so without any physical propellant, thus offering a lightweight, low-cost alternative to existing satellite decommissioning options.

“It is exciting to see if the plasma brake is going to work as planned… and if the tether itself is as robust as it needs to be,” Carolin Frueh, an associate professor of aeronautics and astronautics at Purdue University, tells PopSci via email. “The longer a dead or decommissioned satellite is out there, the higher the risk that it runs into other objects, which leads to fragmentation and the creation of even more debris objects.”

According to Frueh, although drag sails have been explored to help with Low Earth Orbit (LEO) satellites’ end-of-life maneuvers in the past, “the plasma brake technology has the potential to be more robust and more easily deployable at the end of life compared to a classical large solar sail.”

After just seven decades’ worth of space travel, junk is already a huge issue for ongoing private- and government-funded missions. Literally millions of tiny trash pieces now orbit the Earth as fast as 17,500 mph, each one a potential mission-ender. Such debris could also prove fatal to unfortunate astronauts in their path. 

Although multiple international efforts are underway to help mitigate the amount of space junk, even the process of planning such operations can be difficult. Earlier this year, for example, an ESA space debris cleanup pilot project grew more complicated after its orbital trash target was reportedly struck by other debris. On October 2, the Federal Communications Commission issued its first-ever orbital littering fine after satellite television provider Dish Network failed to properly deorbit a decommissioned, direct broadcast EchoStar-7 satellite last year.

“As satellite operations become more prevalent and the space economy accelerates, we must be certain that operators comply with their commitments,” Enforcement Bureau Chief Loyaan A. Egal said at the time.

Estonia’s second-ever satellite is scheduled to launch on October 7 from the ESA’s spaceport in French Guiana.

JWST takes a jab at the mystery of the universe’s expansion rate https://www.popsci.com/science/universe-expansion-jwst-hubble-constant/ Tue, 03 Oct 2023 16:00:00 +0000
A purplish spiral galaxy with red and yellow space objects.
Spiral galaxy NGC 5584, which resides 72 million light-years away, contains pulsating stars called Cepheid variables. NASA, ESA, CSA, Adam G. Riess (JHU, STScI)

The powerful space telescope's precise measurements confirm we have a problem.

The universe is expanding—but astronomers can’t agree how fast. And NASA’s superstar observatory, the James Webb Space Telescope, just confirmed there’s a problem in our understanding of the stretching cosmos. JWST’s new measurements are the most precise of their kind, but they don’t clear up a baffling mismatch between the two methods scientists use to track this growth.

In 1929, astronomer Edwin Hubble discovered that all the galaxies we can see are moving away from us. The relationship between the distance to a galaxy and how fast it’s moving is now known as Hubble’s law. This law uses the also-eponymous Hubble constant to describe the rate at which the universe is expanding. It also tells us the age of the universe: Astronomers can use the Hubble constant to “rewind” time to when the universe would be a single point in space—the big bang.
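
As a rough sketch of that "rewind": the inverse of the Hubble constant gives a characteristic age. The value of H0 below is an assumed round number for illustration:

```python
# "Rewinding" the expansion: 1/H0 sets a rough age for the universe.
H0 = 70.0                       # assumed value, km/s per megaparsec
km_per_Mpc = 3.086e19           # kilometers in one megaparsec
seconds_per_year = 3.156e7

hubble_time_s = km_per_Mpc / H0            # (km/Mpc) / (km/s/Mpc) -> seconds
hubble_time_yr = hubble_time_s / seconds_per_year
print(f"1/H0 ≈ {hubble_time_yr:.1e} years")   # ≈ 1.4e10, about 14 billion
```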

There are two main ways to measure this fundamental number. One is by tracing tiny fluctuations in the cosmic microwave background from the beginning of the universe. The other is to watch flickering stars known as Cepheids. But those two methods disagree. This baffling mismatch is known as the Hubble tension, and it’s unclear if it’s a problem with our models of the universe or our measurements.

If it’s our measurements, the error might result from the way we survey Cepheid stars. Astronomers consider these objects to be a type of “standard candle,” a thing in space whose intrinsic brightness is known. We can observe how bright one of these stars looks in the sky. If it’s faint, it’s farther away. Brighter is closer. 

Researchers use the luminosity of these stars like a yardstick to measure distance. Then, with methods such as spectroscopy, they can gauge the motion of far-off galaxies. Putting those observations together tells us how fast the universe is expanding.
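
Here is that chain of reasoning in miniature. Every input number below is hypothetical, chosen only so the pieces fit together:

```python
import math

# Standard-candle chain: known luminosity + observed flux -> distance;
# redshift -> recession velocity; their ratio -> the Hubble constant.
L_sun = 3.828e26                    # watts
luminosity = 1e4 * L_sun            # assumed intrinsic luminosity of a Cepheid
flux = 6.6e-19                      # hypothetical observed flux, W/m^2

# Inverse-square law: F = L / (4 pi d^2)  =>  d = sqrt(L / (4 pi F))
d_m = math.sqrt(luminosity / (4 * math.pi * flux))
d_Mpc = d_m / 3.086e22              # meters per megaparsec

v_km_s = 1540.0                     # hypothetical recession velocity, km/s
H0 = v_km_s / d_Mpc                 # km/s per Mpc
print(f"d ≈ {d_Mpc:.0f} Mpc, H0 ≈ {H0:.0f} km/s/Mpc")   # ≈ 22 Mpc, ≈ 70
```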

[Related: NASA releases Hubble images of cotton candy-colored clouds in Orion Nebula]

“When we use Cepheids like this, we need to be very, very sure we’re measuring their brightnesses correctly, otherwise our distance measurements will be off. However, Cepheids can be in crowded parts of their galaxies and if our telescopes aren’t sensitive enough, we can’t clearly distinguish a Cepheid from the stars around it,” explains astronomer Tarini Konchady, a program officer at the National Academies of Sciences, Engineering, and Medicine. 

Before JWST, the Hubble Space Telescope (HST) took the best measurements of Cepheid stars. HST couldn’t distinguish individual Cepheids where they were bunched in crowded regions, but JWST can—and it just did. JWST peered into two distant galaxies and made measurements 2.5 times more precise than HST could.

“Webb’s measurements have dramatically cut the noise in the Cepheid measurements,” said project lead Adam Riess, an astronomer at Johns Hopkins University in a NASA press release. “This kind of improvement is the stuff astronomers dream of!”

One of JWST’s major advantages is its ability to look at the cosmos in infrared light, which helps cut through dust between our telescopes and the Cepheids. “Sharp infrared vision is one of the James Webb Space Telescope’s superpowers,” Riess said.

[Related: How old is the universe? Our answer keeps getting more precise.]

However, the new measurements matched up with those from HST, just with smaller error bars—so we can’t confidently pin the mystery on those old numbers.

The new results from Riess and team are just the beginning, though, and they still have many more galaxies to observe with JWST. “I think the jury is still out on whether the JWST has completely eliminated crowding as a solution to the Hubble tension,” says University of Chicago astronomer Abigail Lee. “Analyzing the data for the rest of the 42 galaxies [that JWST plans to observe] will illuminate whether the Hubble tension is alive and real or if there are indeed just errors in the Cepheid measurements.”

The fate of the universe, or at least the Hubble tension, doesn’t just hinge on JWST. Many other facilities will come online in the next few years, providing more evidence for this investigation. The Vera Rubin Observatory, for example, is going to scan the whole Southern sky every few nights when it opens next year, and will likely discover many more Cepheid stars.

“We’re at a point where astronomers are going to be deluged by the most sensitive and wide-reaching data yet,” says Konchady. There might not be a clear answer yet, but astronomers are surely on the case to figure out this mystery.

This post has been updated to include additional details about astronomical methods for measuring the expansion rate.

Winners of the 2023 Nobel Prize in physics measured electrons by the attosecond https://www.popsci.com/science/nobel-prize-physics-attosecond/ Tue, 03 Oct 2023 13:30:00 +0000
An illustration of Pierre Agostini, Ferenc Krausz, and Anne L’Huillier. The three will share the 2023 Nobel prize in physics.
Pierre Agostini, Ferenc Krausz, and Anne L’Huillier will share the 2023 Nobel prize in physics. Niklas Elmehed/Nobel Prize Outreach

Their groundbreaking research helps generate and measure some of the 'most rapid physical effects known.'

The 2023 Nobel prize in physics was just awarded to three physicists for their work probing the world of electrons. Pierre Agostini, Ferenc Krausz, and Anne L’Huillier will jointly share the prestigious prize.

[Related: When light flashes for a quintillionth of a second, things get weird.]

These physicists “are being recognised for their experiments, which have given humanity new tools for exploring the world of electrons inside atoms and molecules,” the Nobel committee wrote on Tuesday. “Pierre Agostini, Ferenc Krausz and Anne L’Huillier have demonstrated a way to create extremely short pulses of light that can be used to measure the rapid processes in which electrons move or change energy.”

Agostini is a professor emeritus at Ohio State University. Krausz is affiliated with the Max Planck Institute of Quantum Optics and the Ludwig Maximilian University of Munich. L’Huillier is a professor at Lund University in Sweden and the fifth woman ever awarded the physics prize. 

Discovering the attosecond

When perceived by humans, fast-moving events flow into one another, much the way a flip book of still images reads as continuous movement. To better investigate these extremely brief events, special technology is needed.

In the world of electrons, these changes occur in an attosecond, or only a millionth of a trillionth of a second. An attosecond is so short that there are as many attoseconds in one second as there have been seconds since the birth of the universe roughly 13.8 billion years ago.
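
The comparison holds up to a quick check:

```python
# Attoseconds in one second vs. seconds since the big bang
# (taking the universe's age as ~13.8 billion years).
attoseconds_per_second = 1 / 1e-18          # 1e18
age_of_universe_s = 13.8e9 * 3.156e7        # 13.8 billion years in seconds
print(f"attoseconds in one second:  {attoseconds_per_second:.1e}")  # 1.0e18
print(f"seconds since the big bang: {age_of_universe_s:.1e}")       # 4.4e17
# Same ballpark (within a factor of a few), as the comparison claims.
```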

Electrons’ movements in atoms and molecules are measured in these attoseconds. Agostini, Krausz, and L’Huillier have conducted experiments that demonstrate how attosecond pulses could actually be observed and measured, according to the awarding committee.

Overtones of light

In 1987, L’Huillier discovered that many different overtones of light arose when she transmitted infrared laser light through a noble gas. Each individual overtone is a light wave that has a given number of cycles for each cycle in the laser light. The overtones are caused by the laser light interacting with atoms in the gas. They give some electrons an extra energy boost that is then emitted as light. In the almost four decades since, L’Huillier has continued to explore this phenomenon which laid the foundation for subsequent breakthroughs.
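
In the standard picture of this phenomenon, known as high-harmonic generation, the overtones sit at integer multiples of the laser frequency, and in noble gases only the odd ones appear. As a sketch:

```latex
% Overtones of a driving laser of frequency \omega_0 sit at integer
% multiples; in noble gases only odd harmonics appear, because the
% gas atoms' inversion symmetry suppresses the even ones:
\omega_q = q\,\omega_0, \qquad q = 3, 5, 7, \ldots
```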

[Related: This record-breaking X-ray laser is ready to unlock quantum secrets.]

In 2001, Agostini produced and investigated a series of consecutive light pulses. During these experiments, each pulse lasted only 250 attoseconds. At the same time, Krausz was working with another type of experiment. His experiment made it possible to isolate a single light pulse that lasted 650 attoseconds.

This work enabled the investigation into physical processes that are so rapid that they were previously impossible to follow. 

“We can now open the door to the world of electrons. Attosecond physics gives us the opportunity to understand mechanisms that are governed by electrons. The next step will be utilizing them,” Chair of the Nobel Committee for Physics Eva Olsson said in a statement.

This groundbreaking work has potential applications in electronics and medicine in the future. In electronics, understanding and controlling how electrons behave in a material is crucial. Attosecond pulses could also identify different molecules in future medical diagnostics.

“In much the same fashion that a photographer may use a flash of light to capture a hummingbird’s wing or a baseball being hit, this year’s Nobel prize winners developed revolutionary methods to generate and measure extremely fast laser pulses that can capture some of the most rapid physical effects known,” Johns Hopkins University physicist N. Peter Armitage told PopSci in an email. “Among other aspects, their work gives insight into the motion of electrons between atoms and allows movies of chemical reactions to be made. It’s remarkable fundamental science, and was done for that reason, but these discoveries may ultimately allow insight into the effects that give superconductivity at high temperatures and efficient energy harvesting from light.”

The 2022 Nobel prize in physics was awarded to John F. Clauser, Alain Aspect, and Anton Zeilinger for their independent contributions to understanding quantum entanglement. Other past winners include Pierre and Marie (Sklodowska) Curie in 1903 and Max Planck in 1918.

Two fault lines near Seattle could rupture in one giant earthquake https://www.popsci.com/science/earthquake-two-faults-seattle/ Thu, 28 Sep 2023 11:00:00 +0000
The Seattle skyline.
Residents of Seattle should be aware of the earthquake risks in their area, experts say. Depositphotos

Tree ring samples reveal a pair of quakes, or one large one, in Seattle’s geologic history.

Earthquakes occur along fractures, or faults, in the earth’s crust. When one of these cracks in the ground suddenly moves, it can cause a quake. And sometimes a quake at one fault can trigger activity at another, creating one large earthquake—or multiple in quick succession.

Researchers have linked two faults in the Seattle area, and they’ve discovered this geologic duo was responsible for an earth-shifting event more than a millennium ago. In a new study published in the journal Science Advances, researchers analyzed old tree samples from Washington’s Puget Sound region. This paper is one of the few to link the Seattle fault with the Saddle Mountain fault, and the authors say that current hazard models need to be updated to include this data. While experts say residents of the Seattle area don’t need to be particularly alarmed by these findings, this paper is a reminder to be aware of the area’s earthquake risks.

Quakes can set off landslides that uproot trees or tsunamis that drown them. “If those trees are preserved, you can go back and work with them and find out exactly when they died and thus when the earthquake occurred,” says lead study author Bryan Black, a dendrochronologist at the University of Arizona. 

Radiocarbon dating showed these dead trees were more than 1,100 years old. Using dendrochronology, the science of analyzing tree rings, the team confirmed that between 923 and 924 CE two faults in the Seattle area produced either one large earthquake of magnitude 7.8, or two sequential earthquakes of slightly lower magnitude. By deciphering the tree rings, Black managed to determine that trees killed near the Seattle fault died around the same time as trees killed near the Saddle Mountain fault. “I could narrow things down and know that this was sometime during the Douglas fir dormant season of 923 to 924, in about a six-month window,” Black says.

[Related: Why most countries don’t have enough earthquake-resilient buildings]

There were two possible scenarios for how this all went down: Either this was one big earthquake that ruptured two separate faults. Or these were two separate earthquakes, with one triggering the other on different faults. “We estimated that the multi-fault earthquake, the one large earthquake scenario, is about three times as likely as the two-earthquake scenario,” says Morgan Page, a geophysicist at the United States Geological Survey, and a co-author of the paper. 
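
A back-of-envelope calculation using the standard moment-magnitude relation (not a figure from the study) shows why the two scenarios involve similar magnitudes:

```python
import math

# Seismic moment grows as 10^(1.5 * M). Splitting one M7.8 event's
# moment into two equal earthquakes gives the magnitude of each.
M_single = 7.8
moment = 10 ** (1.5 * M_single)      # relative seismic moment of one M7.8
M_each = math.log10(moment / 2) / 1.5
print(f"two quakes of M ≈ {M_each:.1f} carry the same total moment")  # ≈ 7.6
# Hence "two sequential earthquakes of slightly lower magnitude."
```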

A cross section from a tree that drowned when a forest was carried into Lake Washington as part of a landslide.
A cross section from a tree that drowned when a forest was carried into Lake Washington as part of a landslide. Bryan Black

It’s not unusual that faults can influence each other to create larger earthquakes. In February this year, Turkey experienced two devastating earthquakes from separate faults in short succession, followed by dozens of damaging aftershocks. In 2016 New Zealand experienced a series of quakes that ruptured at least 21 different faults

This new finding may lead agencies to recalibrate their hazard models. (Faults are considered potential earthquake sources if they have been active within the past 1.6 million years.) When thinking about risk, it’s important to consider the upper limits for what is possible, says Corina Allen, chief hazards geologist at the Washington State Department of Natural Resources. If these faults together produced a magnitude 7.8 earthquake 1,100 years ago—which is not that long ago on a geological time scale—they may want to calculate how a similar earthquake might play out today, she says. 

[Related: Earthquakes can cause serious psychological aftershocks]

It will be especially important for governments at state and local levels to update their models, but individuals should also have plans in mind in case of a large earthquake. Allen says best calculations suggest that there’s a 10 to 15 percent chance of a really big earthquake in Washington—of magnitude 9.0 or higher—within the next 50 years. If you plan for a large quake, she adds, you’ll also be prepared for the ones that are “smaller and more likely.”

While this paper’s discoveries may feel like bad news, they’re really a reminder that earthquakes pose a perennial hazard along the West Coast, Page says. Lots of urban centers are clustered around faults that are only likely to produce earthquakes of small or moderate magnitude. But because of the presence of buildings and people, those can be more devastating than larger earthquakes in remote areas, she says. Rather than focusing only on “the big one,” it’s important to think about the small or moderate earthquakes that could happen underneath you. In a quake-prone area, have food and water on hand, secure your home’s heavy objects, and know what you need to do to protect yourself.

The best thing we can all do is be aware that the risks exist and are not going away, says Allen. Even if we can’t pin down a timeline for the next quake—“geology doesn’t work like clockwork,” Allen notes—we’re always learning more. 

Does antimatter fall down or up? We now have a definitive answer. https://www.popsci.com/science/antimatter-gravity/ Wed, 27 Sep 2023 21:14:47 +0000
CERN scientists in hard hats putting antihydrogen in a vacuum chamber tube to test the effects of gravity on antimatter
The hardest part of the ALPHA experiment was not making antimatter fall, but creating and containing it in a tall vacuum chamber. CERN

Gravity wins—this time around.

Albert Einstein didn’t know about the existence of antimatter when he came up with the theory of general relativity, which has governed our understanding of gravity ever since. More than a century later, scientists are still debating how gravity affects antimatter, the elusive mirror versions of the particles that abide within us and around us. In other words, does an antimatter droplet fall down or up? 

Common physics wisdom holds that it should fall down. A tenet of general relativity itself known as the weak equivalence principle implies that gravity shouldn’t care whether something is matter or antimatter. At the same time, a small contingent of experts argue that antimatter falling up might explain, for instance, the mystical dark energy that potentially dominates our universe.

As it happens, particle physicists now have the first direct evidence that antimatter falls down. The Antihydrogen Laser Physics Apparatus (ALPHA) collaboration, an international team based at CERN, measured gravity’s impact on antimatter for the first time. The ALPHA group published their work in the journal Nature today. 

Every particle in the universe has an antimatter reflection with an identical mass and opposite electrical charge; the inverses are hidden in nature, but have been detected in cosmic rays and used in medical imaging for decades. But actually creating antimatter in any meaningful amount is tricky because as soon as a particle of matter and its antagonist meet, the two self-destruct into pure energy. Therefore, antimatter must be carefully cordoned off from all matter, which makes it extra difficult to drop it or play with it in any way.

“Everything about antimatter is challenging,” says Jeffrey Hangst, a physicist at Aarhus University in Denmark and a member of the ALPHA group. “It just really sucks to have to work with it.”

Adding to the challenge, gravity is extremely weak on the microscopic scale of atoms and subatomic particles. As early as the 1960s, physicists thought about measuring gravity’s effects on positrons, or anti-electrons, which have positive rather than negative electric charge. Alas, that same electric charge makes positrons susceptible to tiny electric fields—and electromagnetism eclipses gravity’s force.

So, to properly test gravity’s influence on antimatter, researchers needed a neutral particle. The only one “on the horizon” was the antihydrogen atom, says Joel Fajans, a physicist at UC Berkeley and another member of the ALPHA group.

Antihydrogen is the first, most fundamental element of the anti-periodic table. Just as the garden-variety hydrogen atom consists of one proton and one electron, the basic antihydrogen atom consists of one negatively charged antiproton and an orbiting positron. Physicists first created antihydrogen atoms in the 1990s; they couldn’t trap and store them until 2010.

“We had to learn how to make it, and then we had to learn how to hold onto it, and then we had to learn how to interact with it, and so on,” says Hangst.

Once they overcame those hurdles, they were finally able to study antihydrogen’s properties—such as its behavior under gravity. For the new paper, the ALPHA group designed a vertical vacuum chamber, a tube devoid of any matter, to prevent the antihydrogen from annihilating prematurely. Scientists wrapped part of the tube inside a superconducting magnetic “bottle,” creating a magnetic field that locked the antihydrogen in place until they needed to use it.

Building this apparatus took years on end. “We spent hundreds of hours just studying the magnetic field without using antimatter at all to convince ourselves that we knew what we were doing,” says Hangst. To produce a magnetic field strong enough to hold the antihydrogen, they had to keep the device chilled at -452 degrees Fahrenheit. 

The ALPHA group then dialed down the magnetic field to open the top and bottom of the bottle, and let the antihydrogen atoms loose until they crashed into the tube’s wall. They measured where those atomic deaths happened: above or below the position where the antimatter had been held. Some 80 percent of atoms fell a few centimeters below the trap, in line with what a cloud of regular hydrogen atoms would do in the same setup. (The other 20 percent simply popped out.)

“It’s been a lot of fun doing the experiment,” Fajans says. “People have been thinking about this problem for a hundred years … we now have a definitive answer.”

Other researchers around the world are now trying to replicate the result. Their ranks include two other CERN collaborations, GBAR and AEgIS, that are also focused on antihydrogen atoms. The ALPHA team themselves hope to tinker with their experiment to gain more confidence in the outcome.

For instance, when the authors of the Nature study computed how rapidly the antihydrogen atoms accelerated downward with gravity, they found it was 75 percent of the rate physicists would expect for regular hydrogen atoms. But they expect the discrepancy to fade when they repeat these observations to find a more precise result. “This number and these uncertainties are essentially consistent with our best expectation for what gravity would have looked like in our experiment,” says William Bertsche, a physicist at the University of Manchester and another member of the ALPHA group.

But it’s also possible that gravity influences matter and antimatter in different ways. Such an anomaly would throw the weak equivalence principle—and, by extension, general relativity as a whole—into doubt.

Solving this essential question could lead to more answers around the birth of the universe, too. Antimatter lies at the heart of one of physics’ great unsolved mysteries: Why don’t we see more of it? Our laws of physics clearly decree that the big bang ought to have created equal parts matter and antimatter. If so, the two halves of our cosmos should have self-destructed shortly after birth.

Instead, we observe a universe filled with matter and devoid of discernible antimatter to balance it. Either the big bang created an unexplained glut of matter, or something unknown happened. Scientists call this cosmic riddle the baryogenesis problem.

“Any difference that you find between hydrogen and antihydrogen would be an extremely important clue to the baryogenesis problem,” says Fajans.

The mathematical theory that connects swimming sperm, zebra stripes, and sunflower seeds https://www.popsci.com/science/alan-turing-pattern-zebra-sperm/ Wed, 27 Sep 2023 13:00:00 +0000 https://www.popsci.com/?p=574986
Recognizable patterns in nature may appear spontaneously when chemicals within the objects or organisms diffuse and then react together. Deposit Photos

Scientists inch closer to understanding the very basis of nature’s patterns.


In nature, chemical interactions between two different substances are believed to govern the designs our eyes see—for example, a zebra’s stripes. The same mathematical basis that produces those stripes may also govern something completely unrelated: the wavy patterns formed by a sperm’s motion. According to a study published September 27 in the journal Nature Communications, one mathematical theory could describe both.

[Related: Monarch butterflies’ signature color patterns could inspire better drone design.]

To understand the connection, we need to go back more than 70 years. The wavy undulations of a sperm’s tail—or flagellum—make striped patterns in space-time. These patterns potentially follow the same template proposed by mathematician Alan Turing, one of the most famous scientists of the 20th century. Turing is best known for helping crack the Enigma code during World War II and ushering in a new age of computer science, but he also developed a theory informally called the reaction-diffusion theory of pattern formation. This 1952 theory predicted that recognizable patterns in nature may appear spontaneously when chemicals within objects or organisms diffuse and then react together.

While this theory hasn’t been conclusively proven by experimental evidence, it sparked more research into using reaction-diffusion mathematics as a way to understand natural patterns. These so-called Turing patterns are believed to govern leopard spots, the whorls of seeds in sunflower heads, and even patterns of sand on the beach.
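
For readers who want to watch a Turing pattern emerge, here is a minimal Python sketch of one classic reaction-diffusion system, the Gray-Scott model (an illustrative choice, not the equations used in the new study). Two chemicals diffuse at different rates and react; from a nearly uniform starting state, stable bands appear on their own:

```python
# A minimal 1D Gray-Scott reaction-diffusion simulation (illustrative only;
# the flagellum model in the study is different). Chemical u feeds chemical v,
# and because the two diffuse at different rates, bands form spontaneously.
import numpy as np

n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060  # diffusion rates, feed and kill terms

u = np.ones(n) + 0.01 * np.random.rand(n)  # near-uniform start
v = np.zeros(n)
v[n // 2 - 5 : n // 2 + 5] = 0.5           # a small seed perturbation

def laplacian(a):
    """Discrete 1D Laplacian with periodic boundaries."""
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v                         # the nonlinear reaction term
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print(np.round(u[::10], 2))  # alternating high/low values: 1D "stripes"
```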

In this new study, a team from the University of Bristol in England used Turing patterns as a way to look at the movement of sperm’s flagella and vibrating hair-like cells called cilia. 

“Live spontaneous motion of flagella and cilia is observed everywhere in nature, but little is known about how they are orchestrated,” study co-author and mathematician Hermes Gadêlha said in a statement. “They are critical in health and disease, reproduction, evolution, and survivorship of almost every aquatic microorganism [on] earth.”

Flagellar undulations are believed to make stripe patterns in space-time, in the form of the waves that travel along the tail to drive the sperm forward when it is in fluid. To look deeper, Gadêlha and his team used mathematical modeling, simulations, and data fitting to show that wavy flagellar movement can actually arise spontaneously without the influence of the fluid in their environment. According to the team, this is mathematically equivalent to Turing’s reaction-diffusion system that was first proposed for chemical patterns over 70 years ago.

For the swimming sperm, chemical reactions of molecular motors power its tail, and the bending movement diffuses along the tail in waves. The fluid itself plays only a minor role in how the tail moves.

[Related: The genes behind your fingerprints just got weirder.]

“We show that this mathematical ‘recipe’ is followed by two very distant species—bull sperm and Chlamydomonas (a green algae that is used as a model organism across science), suggesting that nature replicates similar solutions,” said Gadêlha. “Traveling waves emerge spontaneously even when the flagellum is uninfluenced by the surrounding fluid. This means that the flagellum has a fool-proof mechanism to enable swimming in low viscosity environments, which would otherwise be impossible for aquatic species. It is the first time that model simulations compare well with experimental data.”

The findings of this study could help researchers understand fertility issues associated with abnormal flagellar motion and diseases caused by ineffective cilia, and could even be applied to robotics. Other models in nature may provide further experimental proof of Turing’s template, but more research is needed.

A massive detector in China will try to find a supernova before it happens https://www.popsci.com/science/juno-neutrino-detector-supernova/ Tue, 26 Sep 2023 15:00:00 +0000 https://www.popsci.com/?p=574515
Workers at the construction site of China's next-generation neutrino detector, Jiangmen Underground Neutrino Observatory. Qiu Xinsheng/VCG via Getty Images

Ghostly particles can give advance warning that a star is about to explode.


Trillions of particles from distant stars and galaxies are streaming through your body every second—you just can’t feel them. These ghost-like particles are called neutrinos. Although the universe spits them out constantly, they barely interact with matter—they can even slip through humanity’s toughest barriers, such as steel or lead walls.

Some neutrinos come from supernovae, the extravagant deaths of the biggest stars; they’re also produced by radioactive decay in Earth’s rocks, reactions in the sun, and even our planet’s aurorae. These hard-to-see particles are all over the place and crucial to multiple areas of science, but we’re still in need of better ways of finding them. Now, a new observatory under construction in China’s Guangdong province—the Jiangmen Underground Neutrino Observatory, or JUNO—plans to hunt these elusive particles with better sensitivity than ever before. 

Like most neutrino detectors, it’s a huge vat filled with liquid for the neutrinos to interact with—the bigger the net, the more fish you’re likely to catch. “When it is completed, JUNO will be 20 times larger than the largest existing detector of the same type,” says Yufeng Li, a researcher and member of the JUNO collaboration at the Institute of High Energy Physics (IHEP) in Beijing. Currently under construction and expected to start operation in 2024, this detector will not only be bigger, but also more sensitive to slight variations in neutrinos’ energies than any of its predecessors. Li adds that it’s going to be “a unique and important observatory in the community.”

[Related: The Milky Way’s ghostly neutrinos have finally been found]

The observatory’s most ambitious goal is to preemptively spot neutrinos from stars that are dying but haven’t exploded yet. That way, telescopes can catch these stars in their final destructive act. “Neutrinos are expected to reach Earth hours earlier than photons because of their weakly-interacting nature,” explains Irene Tamborra, a physicist at the Niels Bohr Institute in Denmark not affiliated with the project. 

Astronomers still don’t know the finer details of how a star explodes, but observing the supernova as it starts might help give some clues. “The early detection of neutrinos will be crucial to point the telescopes in the direction of the supernova and catch its electromagnetic emission early on,” adds Tamborra. JUNO should be able to alert astronomers hours to days before a star is slated to explode, giving them time to prep and point their telescopes. It might even be able to measure the faint background of neutrinos coming from distant supernovae, all across the galaxy, which is of great interest to cosmologists trying to put together a picture of the whole universe. 

A staff member works at the construction site of the underground neutrino observatory. Deng Hua/Xinhua via Getty Images

In addition to supernovae, the observatory will be searching for neutrinos from much closer to home: nuclear reactors. The nearby Yangjiang and Taishan nuclear power plants produce neutrinos, and physicists are hoping to get a taste of those neutrinos’ flavors with JUNO. Neutrinos come in three flavors (yes, they’re really called that!), known as the electron, tau, and muon neutrinos. They can flip between their different states in so-called oscillations. Scientists can calculate the number of neutrinos of each kind they expect from the power plant, and compare to what they actually observe with JUNO to better understand these flips.
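
As a rough illustration of how such a comparison works, here is a minimal Python sketch of the standard two-flavor oscillation formula, a simplification of the three-flavor physics JUNO will actually measure. The baseline and energies below are illustrative values, roughly matching JUNO’s distance to the reactors and typical few-MeV reactor neutrinos:

```python
# Two-flavor neutrino oscillation, a simplification of JUNO's physics.
# Survival probability: P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
# with dm2 in eV^2, L in km, and E in GeV. All values are illustrative.
import math

sin2_2theta = 0.85   # solar mixing term, sin^2(2*theta_12)
dm2 = 7.5e-5         # "solar" mass splitting, in eV^2
L = 53.0             # reactor-to-detector baseline, in km (rough JUNO value)

for E_MeV in (2.0, 4.0, 6.0):
    E_GeV = E_MeV / 1000.0
    phase = 1.27 * dm2 * L / E_GeV
    p = 1.0 - sin2_2theta * math.sin(phase) ** 2
    print(f"E = {E_MeV:.0f} MeV -> survival probability {p:.2f}")
```

Comparing the measured deficit of electron antineutrinos against curves like this one, energy by energy, is how physicists pin down the mixing parameters.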

[Related: This ghostly particle may be why dark matter keeps eluding us]

“It is also very likely that there will be surprise discoveries, as that often happens when powerful new experiments are deployed,” says Ohio State University astrophysicist John Beacom.

JUNO isn’t the only big observatory after neutrinos. The current largest liquid neutrino detector is Super-Kamiokande in Japan, and researchers there are planning a huge upgrade to make it the Hyper-Kamiokande. The United States is getting in the game too, currently using a detector at the Fermi National Accelerator Lab and planning its own multi-billion-dollar next-gen observatory, called the Deep Underground Neutrino Experiment. These projects are a few years away, though, so IHEP president Yifang Wang told Science that he gives JUNO “3-to-1 odds to get there first” to figure out some fundamental properties of neutrinos.

No matter who wins the race, this observatory is opening up one of our windows to the universe a bit wider. “JUNO is a huge step forward for neutrino physics and astrophysics,” Beacom says, “and I’m very excited to see what it will do.”

What is matter? It’s not as basic as you’d think. https://www.popsci.com/science/what-is-matter/ Mon, 25 Sep 2023 10:00:00 +0000 https://www.popsci.com/?p=573508
An atom consists of protons, neutrons, electrons, and a nucleus. But matter consists of a whole lot more. Deposit Photos

Matter makes up nearly a third of the universe, but is still shrouded in secrets.


A little less than one-third of the universe—around 31 percent—consists of matter. A new calculation confirms that number; astrophysicists have long believed that something other than tangible stuff makes up the majority of our reality. So then, what is matter exactly?

One of the hallmarks of Albert Einstein’s theory of special relativity is that mass and energy are inseparable. All mass has intrinsic energy; this is the significance of Einstein’s famous E=mc² equation. When cosmologists weigh the universe, they’re measuring both mass and energy at once. And 31 percent of that amount is matter, whether it’s visible or invisible.
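
To get a feel for the scale of that intrinsic energy, here is a quick worked example using standard constants (not figures from the study):

```python
# Mass-energy equivalence, E = m * c^2: one gram of matter, fully converted.
m = 1e-3       # mass in kilograms (one gram)
c = 2.998e8    # speed of light, in meters per second

E = m * c**2   # energy in joules
print(f"E = {E:.2e} J")                                   # ~9.0e13 J
print(f"~{E / 4.184e12:.0f} kilotons of TNT equivalent")  # ~21 kilotons
```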

That difference is key: Not all matter is alike. Very little of it, in fact, forms the objects we can see or touch. The universe is replete with examples of matter that are far stranger.

What is matter?

When we think of “matter,” we might picture the objects we see or their basic building block: the atom. 

Our conception of the atom has evolved over the years. Thinkers throughout history had vague ideas that existence could be divided into basic components. But something that resembles the modern idea of the atom is generally credited to British chemist John Dalton. In 1808, he proposed that indivisible particles made up matter. Different base substances—the elements—arose from atoms with different sizes, masses, and properties.

John Dalton, a Quaker teacher, suggested that each element is made of characteristic atoms and that the weight ratio of the atoms in the products will be the same as the ratio for the reactants. SSLP/Getty Images

Dalton’s schema had 20 elements. Combining those elements created more complex chemical compounds. When the chemist Dmitri Mendeleev constructed a primitive periodic table in 1869, he listed 63 elements. Today, we have cataloged 118.

But if only it were that simple. Since the early 20th century, physicists have known that tinier building blocks lurk within atoms: swirling negatively charged electrons and shrouded nuclei, made from positively charged protons and neutral neutrons. We know now, too, that each element corresponds to atoms with a certain number of protons.

[Related: How does electricity work?]

And it’s still not that simple. By the middle of the century, physicists realized that protons and neutrons are actually combinations of even tinier particles, called quarks. To be precise, protons and neutrons each contain three quarks: a configuration that physicists call a baryon. For that reason, protons, neutrons, and the matter they form—the stuff of our daily lives—are often called “baryonic matter.”

Strange matter in the sky

In our everyday world, baryonic matter typically exists in one of four states: solid, liquid, gas, and plasma. 

Again, matter is not that simple. Under extreme conditions, it can take on a menagerie of more exotic forms. At high enough pressures, materials can become supercritical fluids, simultaneously liquid and gas. At low enough temperatures, multiple atoms coalesce into a Bose-Einstein condensate. These atoms behave as one, acting in all sorts of odd quantum ways.

Such exotic states are not limited to the laboratory. Just look at neutron stars: Their undead cores aren’t quite massive enough to collapse into black holes when they go supernova. Instead, as their cores crumple, intense forces rip apart their atomic nuclei and crush the rubble together. The result is essentially a giant ball of neutrons—and protons that absorb electrons, becoming neutrons in the process—and it’s very, very dense. A single spoonful of a neutron star would weigh a billion tons.
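
That eye-popping figure is easy to sanity-check with back-of-the-envelope arithmetic, using textbook ballpark values for a neutron star’s mass and radius:

```python
# Back-of-the-envelope neutron-star density, with textbook ballpark values:
# ~1.4 solar masses squeezed into a sphere about 10 kilometers in radius.
import math

M = 1.4 * 1.989e30   # mass in kg (1.4 solar masses)
R = 10e3             # radius in meters

density = M / ((4 / 3) * math.pi * R**3)   # ~6.6e17 kg per cubic meter
spoon = 5e-6                               # a teaspoon is ~5 mL, in cubic meters

print(f"density ~ {density:.1e} kg/m^3")
print(f"one spoonful ~ {density * spoon / 1000:.1e} metric tons")  # billions of tons
```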

This animation depicts a neutron star (RX J0806.4-4123) with a disk of warm dust that produces an infrared signature as detected by NASA’s Hubble Space Telescope. The disk wasn’t directly photographed, but one way to explain the data is by hypothesizing a disk structure that could be 18 billion miles across. NASA, ESA, and N. Tr’Ehnl (Pennsylvania State University)

There are, potentially, hundreds of millions of neutron stars in the Milky Way alone. Deep in their centers, some scientists think, pressures and temperatures are high enough to rip neutrons apart too, breaking them into the quarks that form them.

Physicists study neutron stars to learn about these objects—and what happened at the beginning of the universe. The matter we see around us did not always exist; it formed in the aftermath of the big bang. Before atoms formed, protons and neutrons swam alone through the universe. Even earlier, before there were protons and neutrons, everything was a superheated quark slurry.

Scientists can recreate that state, in some fashion, in particle accelerators. But that disappears in a flash that lasts a fraction of a second. It’s no comparison to the extremely long-lasting neutron stars. “You have a lab that basically exists forever,” says Fridolin Weber, a physicist at San Diego State University.

Matter in the grand scheme of the universe

Over the past several decades, astronomers have developed several ways to understand the universe’s basic parameters. They can examine its large-scale structure and identify subtle fluctuations in the density of the matter they can see. They can watch how objects’ gravity bends passing light.

A specific way to measure matter density—the proportion of the universe made up of visible and invisible matter—is to pick apart the cosmic microwave background, the afterglow of the big bang. From 2009 to 2013, the European Space Agency’s Planck observatory prodded that afterglow to give scientists the best calculation of the matter density yet: 31 percent.

[Related: Does antimatter fall down or up? We now have a definitive answer.]

The most recent research used a different technique called the mass-richness relation, essentially examining clusters of galaxies, counting how many galaxies exist in each cluster, using that to calculate each group’s mass, and reverse-engineering the matter density. The technique isn’t new, but until now it was raw and unrefined.

“When we did our work, as far as I know, this is the first time that the mass-richness relation has been used to get a result that’s in very good agreement with Planck,” says Gillian Wilson, an astrophysicist at the University of California Riverside, and one of the authors of a paper published in The Astrophysical Journal on September 13. 

Yet remember, it’s not that simple. Only a small fraction—thought to be around 15 percent of matter, or 3 percent of the universe—is visible. The rest, most scientists think, is dark matter. We can detect the ripples that dark matter leaves in gravity. But we can’t observe it directly.

The 494 xenon-filled photomultipliers on the LUX-ZEPLIN dark matter detector can sense solitary photons from deep space. LUX-ZEPLIN Experiment

Consequently, we aren’t certain what dark matter is. Some scientists believe it is baryonic matter, just in a form that we can’t easily see: Perhaps it is black holes that formed in the early universe, for instance. Others believe it consists of particles that must barely interact at all with our familiar matter. Some scientists believe it is a mix of these. And at least some scientists believe that dark matter does not exist at all.

If it does exist, we might see it with a new generation of telescopes, such as eROSITA, the Rubin Observatory, the Nancy Grace Roman Space Telescope, and Euclid, that can scan ever greater swathes of the universe and see a wider variety of galaxies at different times in cosmic history. “These new surveys might change our understanding of the whole universe [and its matter],” says Mohamed El Hashash, an astrophysicist at the University of California Riverside, and another of the authors. “This is what I personally expect.”

Nature generates more data than the internet … for now https://www.popsci.com/science/human-nature-data-comparison/ Fri, 22 Sep 2023 19:00:00 +0000 https://www.popsci.com/?p=573562
A data server farm in Frankfurt, Germany. By some estimates, the internet is growing at a rate of 26 percent annually. Sebastian Gollnow/picture alliance via Getty Images

In the next century, the information transmitted over the internet might eclipse the information shared between Earth's most abundant lifeforms.


Is Earth primarily a planet of life, a world stewarded by the animals, plants, bacteria, and everything else that lives here? Or, is it a planet dominated by human creations? Certainly, we’ve reshaped our home in many ways—from pumping greenhouse gases into the atmosphere to literally redrawing coastlines. But by one measure, biology wins without a contest.

In an opinion piece published in the journal Life on August 31, astronomers and astrobiologists estimated the amount of information transmitted both by a massive class of organisms and by our communication technology. Their results are clear: Earth’s biosphere churns out far more information than the internet has in its 30-year history. “This indicates that, for all the rapid progress achieved by humans, nature is still far more remarkable in terms of its complexity,” says Manasvi Lingam, an astrobiologist at the Florida Institute of Technology and one of the paper’s authors.

[Related: Inside the lab that’s growing mushroom computers]

But that could change in the very near future. Lingam and his colleagues say that, if the internet keeps growing at its current voracious rate, it will eclipse the data that comes out of the biosphere in less than a century. This could help us hone our search for intelligent life on other planets by telling us what type of information we should seek.

To represent information from technology, the authors focused on the amount of data transferred through the internet, which far outweighs any other form of human communication. Each second, the internet carries about 40 terabytes of information. They then compared it to the volume of information flowing through Earth’s biosphere. We might not think of the natural world as a realm of big data, but living things have their own ways of communicating. “To my way of thought, one of the reasons—although not the only one—underpinning the complexity of the biosphere is the massive amount of information flow associated with it,” Lingam says.

Bird calls, whale song, and pheromones are all forms of communication, to be sure. But Lingam and his colleagues focused on the information that individual cells transmit—often in the form of molecules that other cells pick up and respond to, say by producing particular proteins. The authors specifically zeroed in on the 100 octillion single-celled prokaryotes that make up the majority of our planet’s biomass.

“That is fairly representative of most life on Earth,” says Andrew Rushby, an astrobiologist at Birkbeck, University of London, who was not an author of the paper. “Just a green slime clinging to the surface of the planet. With a couple of primates running around on it, occasionally.”

This colorized image shows an intricate colony of millions of the single-celled bacterium Pseudomonas aeruginosa that have self-organized into a sticky, mat-like colony called a biofilm, which allows them to cooperate with each other, adapt to changes in their environment, and ensure their survival. Scott Chimileski and Roberto Kolter, Harvard Medical School, Boston

As all of Earth’s prokaryotes signal to each other, according to the authors’ estimate, they generate around a billion times as much data as our technology. But human progress is rapid: According to one estimate, the internet is growing by around 26 percent every year. Under the bold assumption that both of these rates hold steady for decades to come, the authors calculate that the internet will continue to balloon until it dwarfs the biosphere in around 90 years, sometime in the early 22nd century.
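
The 90-year figure follows from simple compound-growth arithmetic, sketched below with the paper’s approximate numbers:

```python
# Crossover arithmetic: if the biosphere currently produces ~1e9 times as
# much data as the internet, and the internet grows ~26 percent per year
# while the biosphere holds steady, the crossover time t solves 1.26**t = 1e9.
import math

ratio = 1e9     # biosphere-to-internet data ratio (authors' estimate)
growth = 1.26   # 26 percent annual growth in internet traffic

t = math.log(ratio) / math.log(growth)
print(f"crossover in ~{t:.0f} years")  # ~90 years
```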

What, then, does a world where we create more information than nature actually look like? It’s hard to predict for certain. The 2110s version of Earth may be as strange to us as the present Earth would seem to a person from the 1930s. That said, picture alien astronomers in another star system carefully monitoring our planet. Rather than glimpsing a planet teeming with natural life, their first impressions of Earth might be a torrent of digital data.

Now, picture the reverse. For decades, scientists and military experts have sought out signatures of extraterrestrials in whatever form it may take. Astronomers have traditionally focused on the energy that a civilization of intelligent life might use—but earlier this year, one group crunched the numbers to determine if aliens in a nearby star system could pick up the leakage from mobile phone towers. (The answer is probably not, at least with LTE networks and technology like today’s radio telescopes.)

The MeerKAT radio telescope array in South Africa scans for, among other things, extraterrestrial communication signals from distant stars. MeerKAT

On the flip side, we don’t totally have the observational capabilities to home in on extraterrestrial life yet. “I don’t think there’s any way that we could detect the kind of predictions and findings that [Lingam and his coauthors] have quantified here,” Rushby says. “How can we remotely determine this kind of information capacity, or this information transfer rate? We’re probably not at the stage where we could do that.”

But Rushby thinks the study is an interesting next step in a trend. Astrobiologists—certainly those searching for extraterrestrial life—are increasingly thinking about the types and volume of information that different forms of life carry. “There does seem to be this information ‘revolution,’” he says, “where we’re thinking about life in a slightly different way.” In the end, we might find more harmony than expected between the communication networks nature has built and our computers.

This record-breaking X-ray laser is ready to unlock quantum secrets https://www.popsci.com/technology/slac-x-ray-laser-upgrade/ Tue, 19 Sep 2023 17:00:00 +0000 https://www.popsci.com/?p=572415
The upgrades can produce up to 1 million X-ray pulses per second. Jacqueline Ramseyer Orrell/SLAC National Accelerator Laboratory

The latest additions to the Linac Coherent Light Source-II will usher it into a new era of discovery.


One of the world’s most powerful lasers can soon begin peering deeper into the atomic world thanks to recent, cutting-edge X-ray upgrades. Stanford’s SLAC National Accelerator Laboratory has announced improvements to the X-ray free-electron laser (XFEL) component of the Linac Coherent Light Source-II (LCLS-II), allowing “unparalleled capabilities” for examining quantum materials—a milestone more than 13 years in the making.

“This achievement marks the culmination of over a decade of work,” said LCLS-II Project Director Greg Hays in a September 18 statement. “It shows that all the different elements of LCLS-II are working in harmony to produce X-ray laser light in an entirely new mode of operation.”

[Related: How to make an X-ray laser that’s colder than space.]

Despite its “laser” classification, LCLS-II can be thought of more as a massive microscope than a device generating bright pinpoints of light. When powered up, an XFEL creates extremely bright X-ray light pulses so quickly they can capture behavioral details of electrons, atoms, and molecules on their natural timescales. SLAC built the world’s first physical XFEL, which began operating in 2009 by firing electrons via a particle accelerator through a room temperature copper pipe at 120 pulses per second.

LCLS-II’s XFEL, however, offers as many as a million X-ray pulses per second—roughly 8,000 times more often than its progenitor, with beams 10,000 times brighter. LCLS-II’s record-shattering abilities hinge upon a state-of-the-art superconducting accelerator that uses 37 cryogenic modules to cool its environment down to an astonishing -456 degrees Fahrenheit; that’s even colder than the vacuum of outer space, and only a few degrees shy of absolute zero.
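
The “roughly 8,000 times” figure is straightforward arithmetic from the article’s own numbers:

```python
# Pulse-rate comparison between the original LCLS and LCLS-II.
old_rate = 120         # pulses per second, the original 2009-era machine
new_rate = 1_000_000   # pulses per second, LCLS-II

print(f"{new_rate / old_rate:,.0f}x more pulses per second")  # ~8,333x
```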

Interior of the LCLS-II cryoplant. Olivier Bonin/SLAC National Accelerator Laboratory

Scientists also intend to use the LCLS-II upgrades to study interactions within quantum materials, something pivotal to accurately examining their “unusual and often counter-intuitive properties,” according to SLAC’s announcement. A better understanding of these attributes could lead to ultrafast data processing, more energy-efficient devices, quantum computers, and a host of other technological breakthroughs. “From the intricate dance of proteins to the machinery of photosynthesis, LCLS-II will shed light on biological systems in never-before-seen detail,” reads SLAC’s rundown.

[Related: Physicists take first-ever X-rays of single atoms.]

PopSci has followed the progress of LCLS-II’s underlying superconductor tech for decades now. “Far down on the temperature scale near absolute zero (−459°F) lies a strange world of ‘electrical perpetual motion’—or ­superconductivity—where electric currents, once set in motion, flow forever,” PopSci first described in 1967. “With new developments in materials and the methods for cooling them, truly fantastic devices are taking shape in laboratories across the country.”

The Tonga volcanic eruption reshaped the seafloor in mind-boggling ways https://www.popsci.com/environment/tonga-eruption-seafloor-fiber-cables/ Thu, 07 Sep 2023 18:00:00 +0000 https://www.popsci.com/?p=568621
The Hunga Tonga volcano eruption triggered lightning and a tsunami. Tonga Geological Services via NOAA

Immense flows traveled up to 60 miles away, damaging the region's underwater infrastructure.


On January 15, 2022, the drowned caldera under the South Pacific isles of Hunga Tonga and Hunga Haʻapai in Tonga blew up. The volcanic eruption shot gas and ash 36 miles up into Earth’s mesosphere, higher than the plume from any other volcano on record. The most powerful explosion observed on Earth in modern history unleashed a tsunami that reached Peru and a sonic boom heard as far as Alaska.

New research shows that when the huge volume of volcanic ash, dust, and glass fell back into the water, it reshaped the seafloor in dramatic fashion. For the first time, scientists have reconstructed what might have happened beneath the Pacific’s violently strewn waves. According to a paper published in Science today, all that material flowed underwater for dozens of miles.

“These processes have never been observed before,” says study author Isobel Yeo, a marine volcanologist at the UK’s National Oceanography Centre.

About 45 miles from the volcano, the eruption cut off a seafloor fiber-optic cable. For Tongans and rescuers, the broken cable was a major inconvenience that severely disrupted the islands’ internet. For scientists, the abrupt severance of internet traffic provided a timestamp of when something touched the cable: around an hour and a half after the eruption.

The cut also alerted scientists to the fact that the eruption had disrupted the seafloor, which isn’t easy to spot. “We can’t see it from satellites,” says Yeo. “We actually have to go there and do a survey.” So in the months after the eruption, Yeo and her fellow researchers set out to fish clues from the surrounding waters and piece them back together.

A Tongan charter boat owner named Branko Sugar had caught the initial eruption with a mobile phone camera, giving an exact time when volcanic ejecta began to fall into the water. Several months later, the boat RV Tangaroa sailed from New Zealand to survey the seafloor and collect volcanic flow samples. Unlike in much of the ocean, the seafloor around Tonga had already been mapped, allowing scientists to corroborate changes to the topography. 

[Related: The centuries-long quest to map the seafloor’s hidden secrets]

The scene researchers reconstructed, had it unfolded above ground, might fit neatly into a Roland Emmerich disaster film. The volcano moved as much matter in a few hours as the world’s rivers deliver to the oceans in a whole year. These truly immense flows traveled more than 60 miles from their origin, carving out gullies as tall as skyscrapers.

When the volcano blew, it spewed out immense quantities of rock, ash, glass, and gas that fell back to earth. This is bog-standard for such eruptions, and it typically produces the fast-moving pyroclastic flows that menace anything in their path. But over Hunga Tonga–Hunga Haʻapai, that falling mass had nowhere to go but out to sea.

Satellite imagery of the January 2022 eruption. NASA Worldview, NOAA/NESDIS/STAR

“It’s that Goldilocks spot of dropping huge amounts of really dense material straight down into the ocean, onto a really steep slope, eroding extra material,” says Michael Clare, a marine geologist at the National Oceanographic Centre and another author. “It bogs up, it becomes more dense, and it just really goes.”

Scientists estimated the material fanned out from Hunga Tonga–Hunga Haʻapai at 75 miles per hour—as fast as, or faster than, the speed limit of most U.S. interstate highways. If correct, that’s 50 percent faster than any other underwater flow recorded on the planet. That rushing earth gushed back up underwater slopes as tall as mountains.

“It’s like seeing a snow avalanche, thinking you’re safe on the mountain next to it, and this thing just comes straight up against you,” says Clare.

These underwater flows, according to the researchers, had never been observed before. But understanding volcanic impacts on the seafloor is about more than scientific curiosity. In the last two centuries, we’ve laid vital infrastructure below the water: first for telegraph cables, then telephone lines, and now optical fibers that carry the internet.

Trying to prepare a single cable for an eruption of this scale is like trying to prepare for being struck by a train—it can’t really be done. Instead, a surer way to protect communications is to lay more cables, ensuring that one disaster won’t break all connectivity.

[Related: Mixing volcanic ash with meteorites may have jump-started life on Earth]

In many parts of the globe, that’s already the case. Fishing accidents break cables all the time, without much lasting effect. If, for instance, the world experienced a repeat of the 1929 earthquake-induced landslide that cut off cables off Newfoundland, we probably wouldn’t notice too much: There are plenty of other routes for internet traffic to run between Europe and North America.

As a global map of seafloor cables shows, though, that isn’t true everywhere. In Tonga in 2022, a single severed cable all but entirely cut the archipelago off from the internet. Many other islands, especially in the developing world, are similarly vulnerable.

And those cables are of great value to geologists, too. “Without having the cables, we’d probably still be in the dark and wouldn’t know these sorts of events happen on the scale that they do,” says Clare.

Seismic sensors reveal the true intensity of explosions in Ukraine https://www.popsci.com/science/seismic-conflict-monitoring/ Thu, 07 Sep 2023 10:00:00 +0000 https://www.popsci.com/?p=568386
Ukrainian military members stand near a missile that stuck from the road after Russian shelling on September 2. Roman Chop/Global Images Ukraine/Getty Images

Space satellites and other scientific tools can give us a window into war.


Since Russia invaded Ukraine in February 2022, the earth has been shaking—not from natural earthquakes, but from bombings and other wartime explosions. By harnessing seismic data from sensors within Ukraine, international scientists have used the ground-rocking tremors after explosions to track the events of the war. 

This is the first time such data have been used to monitor explosions in an active combat zone in almost real-time. The results, published in the journal Nature, show far more explosions than previously reported: more than 1,200 explosions in the war’s first nine months, throughout Kyiv, Zhytomyr, and Chernihiv.

“Seismic data provide an objective data source, which is important for understanding what is happening in the war, for providing potential evidence where there are claims of breaches of international law, or for verifying individual attacks,” explains lead author Ben Dando, a seismologist at the Norwegian Seismic Array (NORSAR).

[Related: Ukraine claims it built a battle drone called SkyKnight that can carry a bomb]

Dando and his colleagues’ data comes from an array of 23 seismic sensors outside of Kyiv. From the signals recorded by these seismometers, the researchers were able to pinpoint the time, location, and intensity of each explosion. Smaller disruptions, like the blast that accompanies a gunshot, are too weak for these sensors to detect; what they can observe are almost certainly large impacts, such as those from missiles and bombs.
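
The underlying idea resembles earthquake location: each sensor’s arrival time constrains how far away the source can be. Here is a minimal, illustrative Python sketch of that logic, a toy grid search on synthetic data, not NORSAR’s actual processing pipeline:

```python
# A toy event locator (illustrative only): given arrival times at several
# sensors and an assumed wave speed, a grid search finds the source location
# whose predicted travel times best fit the observed arrivals.
import numpy as np

rng = np.random.default_rng(0)
v = 3.0                                                       # wave speed, km/s
sensors = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [-20.0, 25.0]])  # km

true_src, t0 = np.array([12.0, 18.0]), 4.0                    # hidden source, origin time
dists = np.linalg.norm(sensors - true_src, axis=1)
arrivals = t0 + dists / v + rng.normal(0, 0.02, len(sensors))  # noisy arrival times

best, best_err = None, np.inf
for x in np.linspace(-50, 50, 201):
    for y in np.linspace(-50, 50, 201):
        travel = np.linalg.norm(sensors - np.array([x, y]), axis=1) / v
        err = np.var(arrivals - travel)   # variance is ~0 at the true location,
        if err < best_err:                # since all residuals equal the origin time
            best, best_err = (x, y), err

print("estimated source location:", best)  # close to (12.0, 18.0)
```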

Such detections can bring clarity to the confusion of armed conflict. It’s especially vital in Ukraine, which has been flooded with disinformation and propaganda. Accurate and timely information on the events of a battle is key for other countries and watchdog organizations to intervene—especially if it seems like international laws are being broken. Marco Bohnhoff, a seismologist at the GFZ Potsdam German Research Center who was not involved in the study, told German magazine SPIEGEL that this kind of seismic monitoring could be used to confirm events and expose deliberate misinformation in war reporting.

A map of seismic detections, colored by date (those before February 2022 are gray) and scaled by magnitude. The white triangles show the locations of individual sensors in the seismic array. Dando et al./Nature

Seismic data “can provide insight into how certain locations are being targeted and at what intensity,” Dando says. For example, the Nova Kakhovka dam in Ukraine was destroyed in June 2023, causing widespread flooding and a humanitarian crisis. Ukrainian officials claimed the damage was due to Russian bombing. If true, the destruction of civilian infrastructure would be considered a war crime under several international protocols. The hope is that seismic monitoring, like that done by Dando and colleagues, will provide further insight into situations like these and enable international responses.

[Related: The terrible history behind cluster munitions]

This is not the first time that scientific Earth-monitoring technology has overlapped with a conflict. Other techniques, namely satellite imaging, have also been used for this kind of surveillance in recent history, including during the Russia-Ukraine war. Satellites have captured images of destroyed infrastructure and large-scale movement of war materiel. A space-based NASA project intended to track human-made light sources at night, known as Black Marble, has even identified war-related power outages in Ukraine. Such satellite data “proves invaluable in identifying vulnerable populations deserving of immediate assistance,” says Ranjay Shrestha, a remote sensing expert involved with the Black Marble project at NASA Goddard Space Flight Center.

Remote sensing techniques have their limitations. They work best when coupled with on-the-ground information and context to produce accurate interpretations. “Consider, for example, instances in Ukraine where residents intentionally turned off their lights to reduce the risk of aerial attacks,” says Shrestha. “Without corroborating ground truth information, we might misinterpret the situation as a power outage resulting from infrastructure damage.”

Dando’s organization, NORSAR, was founded on the principle of using seismic data to study nuclear explosions as part of the Comprehensive Nuclear Test Ban Treaty. The 23 sensors outside Kyiv that powered this study are part of that system, which has been used to detect nuclear tests around the world that violate international law. Usually, though, there aren’t suitable high-quality seismic sensors so close to an active military conflict. “We’re now seeing that with the right sensors in the right place,” Dando says, “there is significant value that seismic and acoustic data can provide for active conflict monitoring.”

India just landed on the moon. Now it’s headed for the sun. https://www.popsci.com/science/aditya-l1-solar-probe-isro/ Fri, 01 Sep 2023 18:00:00 +0000 https://www.popsci.com/?p=567591
The rocket that will carry ISRO's spacecraft Aditya-L1 beyond Earth. ISRO

India's Aditya-L1 spacecraft should wind up some 932,000 miles away to monitor our star.


Update (September 5, 2023): India successfully launched its Aditya-L1 solar observatory on September 2 at 2:20 am EST. It is expected to arrive at its first destination between the Earth and the sun in January 2024.

On August 23, the Indian Space Research Organization (ISRO) pulled off the Chandrayaan-3 mission, depositing the Vikram lander and Pragyan rover near the lunar South Pole. India is now the fourth nation to land on the moon—following Russia, the US, and China—and the first to land near the lunar South Pole, where the rover has already detected sulfur and oxygen in the moon’s soil. Fresh off of this success, ISRO already has another mission underway, and its next target is something much bigger—the sun.

ISRO’s Aditya-L1 spacecraft, armed with an array of sensors for studying solar physics, is scheduled to launch around 2 a.m. Eastern on September 2, atop a PSLV-C57 rocket from the Satish Dhawan Space Center in Sriharikota, in southeast India.

Aditya-L1 will begin a four-month journey to a special point in space. About 932,000 miles away lies the sun-Earth L1 Lagrange point, where the combined gravity of the sun and Earth lets a spacecraft circle the sun in lockstep with our planet. By entering into an orbit around L1, the spacecraft can maintain a constant position relative to Earth as it orbits the sun. It shares this maneuver with the NASA-ESA Solar and Heliospheric Observatory, or SOHO, which has been in the sun-observation business since 1996. If it reaches the L1 orbit, Aditya-L1 will join SOHO, NASA’s Parker Solar Probe, ESA’s Solar Orbiter, and a handful of other spacecraft dedicated to studying the closest star to Earth.
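
The location of L1 follows from a standard first-order approximation, sketched below with textbook values for the masses and the Earth-sun distance:

```python
# First-order estimate of the sun-Earth L1 distance from Earth:
# r is roughly R * (M_earth / (3 * M_sun))**(1/3), with textbook values.
M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg
R = 1.496e8          # mean Earth-sun distance, in km

r = R * (M_earth / (3 * M_sun)) ** (1 / 3)
print(f"L1 is ~{r:,.0f} km (~{r * 0.6214:,.0f} miles) sunward of Earth")  # ~930,000 miles
```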

“This mission has instrumentation that captures a little bit of everything that all of these missions have already done, but that doesn’t mean we’re going to replicate science,” says Maria Weber, a solar astrophysicist at Delta State University in Mississippi, who also runs the state’s only planetarium at that campus. ”We’re getting more information and more data now at another time, a new time in the solar cycle, that previous missions haven’t been able to capture for us.” The sun undergoes 11-year patterns of waxing and waning magnetic activity, and the current solar cycle is expected to peak in 2025, corresponding with more sunspots and solar eruptions.

Aditya-L1 being prepped for its mission in a cleanroom. ISRO

Aditya-L1 will carry seven scientific payloads, including four remote sensing instruments: a coronagraph, which creates an artificial eclipse for better study of the sun’s corona; an ultraviolet telescope; and high- and low-energy X-ray spectrographs, which can help study temperature variations in parts of the sun.

[Related: Would a massive shade between Earth and the sun help slow climate change?]

“One thing I’m excited about is the high-energy component,” says Rutgers University radio solar physicist Dale Gary. Aditya-L1 will be able to study high-energy X-rays associated with solar flares and other activity in ways that SOHO cannot. And L1 is a good position for that sort of study, he says, since there is a more stable background of radiation against which to measure solar X-rays. Past measurements made in Earth orbit had to contend with the Van Allen radiation belts.

Aditya-L1’s ultraviolet telescope will also be unique, Gary says. It measures ultraviolet light, which has shorter wavelengths than visible light; the shortest or extreme UV light, near the X-ray spectrum, has already been measured by SOHO, but Aditya will capture the longer UV wavelengths.

That could allow Aditya-L1 to study parts of the sun’s atmosphere that have been somewhat neglected, Gary says, such as the transition region between the chromosphere, an area about 250 miles above the sun’s surface, and the corona, the outermost layer of the sun that begins around 1,300 miles above the solar surface and extends, tenuously, out through the solar system.

Although ground-based telescopes can take some measurements similar to Aditya’s, the spacecraft is also kitted out with “in situ” instruments, which measure features of the sun that can only be observed while in space. “It’s taking measurements of magnetic fields right where it’s sitting, and it’s taking measurements of the solar wind particles,” Weber says. 

Like all solar physics missions, Aditya-L1 will inevitably serve two overall purposes. The first is to better understand how the sun—and other stars—work. The second is to help predict that behavior, particularly solar flares and coronal mass ejections. Those eruptions of charged particles and magnetic fields can impact Earth’s atmosphere and pose risks to satellites and astronauts. In March 2022, a geomagnetic storm driven by solar radiation caused Earth’s atmosphere to swell, dragging 40 newly launched SpaceX Starlink satellites out of orbit.

“We live with this star and so, ultimately, we want to be able to predict its behavior,” Weber says. “We’re getting better and better at that all the time, but the only way we can predict its behavior is to learn as much as we can about it.”

[Related: Why is space cold if the sun is hot?]

Aside from Aditya-L1’s scientific mission, its success will mark another feather in the cap of ISRO, another step in that space agency’s hard work to make India a space power, according to Wendy Whitman Cobb, a space policy expert and instructor at the US Air Force School of Advanced Air and Space Studies (who was commenting on her own behalf, not for the US government).

“India has had some pretty expansive plans for the past two decades,” she says. “A lot of countries say they’re going to do something, but I think India is that rare example of a country who’s actually doing it.”

Of course, space is hard. ISRO’s first lunar landing attempt with Chandrayaan-2, in 2019, was a failure, and there’s no guarantee Aditya-L1 will make it to L1. “It’s a technical achievement to go into the correct orbit when you get there,” Gary says. “There’s a learning curve. It would be very exciting if they accomplish their goals and get everything turned on correctly.”

How the world’s biggest particle accelerator is racing to cook up plasma from after the big bang https://www.popsci.com/science/large-hadron-collider-quark-gluon-plasma/ Thu, 31 Aug 2023 10:00:00 +0000 https://www.popsci.com/?p=566750
Collage by Russ Smith; photos from left: Maximillien Brice / CERN; CERN; X-ray: NASA / CXC / University of Amsterdam / N.Rea et al; Optical: DSS

For 30 years, physicists around the world have been trying to reconstruct how life-giving particles formed in the very early universe. ALICE is their mightiest effort yet.


NORMALLY, creating a universe isn’t the job of the Large Hadron Collider (LHC). Most of the back-breaking science—singling out and tracking Higgs bosons, for example—from the world’s largest particle accelerator happens when it launches humble protons at nearly the speed of light.

But for around a month near the end of each year, LHC switches its ammunition from protons to bullets that are about 208 times heavier: lead ions.

When the LHC crashes those ions into each other, scientists can—if they have worked everything out properly—glimpse a fleeting droplet of a universe like the one that ceased to exist a few millionths of a second after the big bang.

This is the story of quark-gluon plasma. Take an atom, any atom. Peel away its whirling electron clouds to reveal its core, the atomic nucleus. Then, finely dice the nucleus into its base components, protons and neutrons.

When physicists first split an atomic nucleus in the early 20th century, this was as far as they got. Protons, neutrons, and electrons formed the entire universe’s mass—well, those, plus dashes of short-lived electrically charged particles like muons. But calculations, primitive particle accelerators, and cosmic rays striking Earth’s atmosphere began to reveal an additional menagerie of esoteric particles: kaons, pions, hyperons, and others that sound as if they’d give aliens psychic powers.

It seemed rather inelegant of the universe to present so many basic ingredients. Physicists soon figured out that some of those particles weren’t elementary at all, but combinations of even tinier particles, which they named with a word partly inspired by James Joyce’s Finnegans Wake: quarks.

Quarks come in six different “flavors,” but the vast majority of the observable universe consists of just two: up quarks and down quarks. A proton consists of two up quarks and one down quark; a neutron, two down and one up. (The other four, in ascending order of heaviness and elusiveness: strange quarks, charm quarks, beauty quarks, and the top quark.)
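
Those fractional charges are why the proton and neutron come out the way they do; a quick check with standard textbook values (not figures from the article) shows the bookkeeping:

```python
# Quark charge bookkeeping, with standard textbook values: up quarks carry
# +2/3 of the elementary charge and down quarks -1/3.
CHARGE = {"u": 2 / 3, "d": -1 / 3}

def baryon_charge(quarks: str) -> float:
    """Total electric charge, in units of e, of a three-quark combination."""
    return sum(CHARGE[q] for q in quarks)

print("proton  (uud):", round(baryon_charge("uud"), 2))   # 1.0
print("neutron (udd):", round(baryon_charge("udd"), 2))   # 0.0
```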

The ALICE experiment measures heavy-ion collisions (and their aftermath) with the world’s longest particle accelerator, hosted at CERN. Wladyslaw Henryk Trzaska / CERN

At this point, the list of ingredients ends. You can’t ordinarily chop a proton or neutron into quarks in our world; in most cases, quarks can’t exist on their own. But by the 1970s, physicists had come up with a workaround: heating things up. At a point that scientists call the Hagedorn temperature, those subatomic particles are reduced to a high-energy soup of quarks and the even tinier particles that glue them together: gluons. Scientists dubbed that soup quark-gluon plasma (QGP).

It’s a tantalizing recipe because, again, quarks and gluons can’t normally exist on their own, and reconstructing them from the larger particles they build is challenging. “If I give you water, it’s very difficult to tell the properties of [hydrogen and oxygen atoms],” says Bedangadas Mohanty, a physicist at India’s National Institute of Science Education and Research and at CERN. “Similarly, I can give you protons, neutrons, pions…but if you really want to study properties of quarks and gluons, you need them in a box, free.”

This isn’t a recipe you can test in a home oven. In units of the everyday world, the Hagedorn temperature is about 3 trillion degrees Fahrenheit—100 thousand times hotter than the center of the sun. The best appliance for the job is a particle accelerator.
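
The comparison to the sun rests on simple arithmetic, sketched here with the standard ballpark figure for the solar core temperature:

```python
# The "100 thousand times hotter" comparison, using the standard ballpark
# value of ~15 million kelvin for the sun's core.
hagedorn_F = 3e12                     # ~3 trillion degrees Fahrenheit
sun_core_F = 1.5e7 * 9 / 5 - 459.67   # ~15 million K converted to Fahrenheit

print(f"sun's core ~ {sun_core_F:.1e} degrees F")         # ~2.7e7
print(f"ratio ~ {hagedorn_F / sun_core_F:,.0f}x hotter")  # ~100,000x
```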

But not just any particle accelerator will do. You need to boost your particles with sufficient energy. And when scientists set out to create QGP, LHC was no more than a dream of a distant future. Instead, CERN had an older collider only about a quarter of LHC’s circumference: the Super Proton Synchrotron (SPS).

As its name suggests, SPS was designed to crash protons into fixed targets. But by the end of the 1980s, scientists had decided to try swapping out the protons for heavy ions—lead nuclei—and see what they could manage. In experiment after experiment across the 1990s, CERN researchers thought they saw something happening to the nuclei. 

“Somewhat to our surprise, already at these relatively low energies, it looked like we were creating quark-gluon plasma,” says Marco van Leeuwen, a physicist at the Dutch National Institute for Subatomic Physics and at CERN. In 2000, his team claimed they had “compelling evidence” of the achievement.

Across the Atlantic, CERN’s counterparts at Long Island’s Brookhaven National Laboratory had been trying their hand with equal parts optimism and uncertainty. The uncertainty faded around the turn of the millennium, when Brookhaven switched on the Relativistic Heavy Ion Collider (RHIC), a device designed specifically to create QGP.

“RHIC turned on, and we were deeply within quark-gluon plasma,” says James Dunlop, a physicist at Brookhaven National Laboratory.

So there are two major QGP factories in the world today: CERN and Brookhaven. With this pair of colliders, physicists can watch the plasma materialize, for the brief flickers during which the quantum matter exists, in what they call “little bangs.”

At ALICE’s heart lies a 39-foot-long solenoid magnet, coiled around a thermal shield and a number of fast-trigger detectors. Julien Marius Ordan / Maximillien Brice / CERN

Going back and forth in time

The closer in time to the big bang that you travel, the less the universe resembles your familiar one. As of this writing, the James Webb Space Telescope has possibly observed galaxies from around 320 million years after the big bang. Go farther back, and you’ll reach a very literal Dark Ages—a time before the first stars, when there was little to illuminate the universe except the cosmic background.

In this shadowy age, astronomy steadily gives way to subatomic physics. Go even farther back, to just 380,000 years after the big bang, and electrons are just joining their nuclei to form atoms. Keep going back; the universe is ever smaller, denser, hotter. Seconds after the big bang, protons and neutrons haven’t joined together to form nuclei more complex than hydrogen. 

Go back even farther—around a millionth of a second after the big bang—and the universe is hot enough that quarks and gluons stay split apart. It’s a miniature version of this universe that physicists seek to create.

Physicists puzzle over that universe in office blocks like the exquisitely modernist one overlooking CERN’s visitors center. Look out this building’s window, and you might see the terminus of a Geneva tram line. Cornavin, the city’s main railway station, is only 20 minutes away.

CERN physicists Urs Wiedemann and Federico Antinori meet me in their office. Wiedemann is a theoretical physicist by background; Antinori is an experimentalist, presiding over heavy-ion collision runs. Studying QGP requires the talents of both.

“The existence of quark-gluon plasma we have established,” says Antinori. “What is most interesting is understanding what kind of animal it is.”

For instance, their colleagues who first created QGP expected to find a sort of gas. Instead, QGP behaves like a liquid. QGP, in fact, behaves like what’s called a perfect liquid, one with almost no viscosity. (Yes, the early universe may have been, very briefly, a sort of superheated ocean. Many creation myths might find a distant mirror inside a particle accelerator.)

Both Antinori and Wiedemann are especially interested in watching the liquid come into being, watching atomic nuclei rend themselves apart. Some scientists call the process a “phase transition,” as if creating QGP is like melting snow to create liquid water. But turning protons and neutrons into QGP is far more than melting ice; it’s creating a transition into a very different world with fundamentally different laws of physics. “The symmetries of the world we live in change,” Wiedemann says.

This transition happened in reverse in the very early universe as it cooled down past the Hagedorn temperature. The quarks and gluons clumped together, forming the protons and neutrons that, in turn, form the atoms we know and love today.

But physicists struggle to capture this process with mathematics alone. They come closer by examining heavy-ion collisions in the lab.

scintillator array at CERN
Central detector components, like the VZERO scintillator array, were built to handle the “ultra-relativistic energies” of the LHC. Julien Marius Ordan / CERN

QGP is also a laboratory for the strong nuclear force. One of the four fundamental forces of the universe—alongside gravity, electromagnetism, and the weak nuclear force that governs certain radioactive processes—the strong nuclear force is what holds particles together at the hearts of atoms. The gluons in QGP’s name are the strong nuclear force’s tools. Without them, the positively charged protons in a nucleus would electromagnetically repel one another, and atomic nuclei would rip themselves apart.

Yet while we know quite a lot about gravity and electromagnetism, the inner workings of the strong nuclear force remain comparatively mysterious. Moreover, scientists want to understand not just the force’s one-on-one interactions but the collective behavior it produces in crowds of particles.

“You can say, ‘I understand how an electron interacts with a photon,’” says Wiedemann, “but that doesn’t mean that you understand how a laser functions. That doesn’t mean that you know why this table doesn’t break down.”

Again, to understand such things, they’ve got to crash heavy ions together.

With the likes of SPS, scientists could look at droplets of QGP and confirm they existed. But if they wanted to actually peer inside and see their properties at work—to examine them—they’d need something more powerful.

“It was clear,” says Antinori, “that one had to go to higher energies than were available at the SPS.”

The universe-faking machine

Crossing from CERN’s campus into France, it’s impossible to tell that this green and pleasant vale—under the grace of the Jura Mountains—sits atop a 17-mile-long ring of superconducting magnets and steel. Scattered around that ring are different experiments and detectors. The search for QGP is headquartered in one such detector.

The road there passes through the glistening hamlet of Saint-Genis-Pouilly, where many of CERN’s staff live. On the pastoral outskirts sits a cluster of industrial cuboids and cooling towers.

Apart from a mural on the corrugated metal facade overlooking a parking lot, the complex doesn’t really advertise that this is where scientists look for QGP—that one of these warehouselike buildings is the outer cocoon of a large ion collider experiment called, well, A Large Ion Collider Experiment (ALICE).

inner workings at CERN
To date, more than 2,000 physicists from 40 different countries have been involved with the decades-long experiment. Jan Hosan / CERN / Fotogloria Agency

CERN physicist Nima Zardoshti greets me beneath that mural, which depicts ALICE’s detector—the QGP-watcher—in pastel colors. Zardoshti leads me inside, past a control room that wouldn’t look out of place in a moon-landing documentary, around a corner covered in sheet metal, and out to a precipice. A concrete shield caps the shaft below, several stories down. “This concrete is what stops radiation,” he explains.

Beneath it, occluded from sight, sits the genuine article, a machine the size of a small building that weighs nearly the same as the Eiffel Tower. The detector sits more than 180 feet beneath the ground, accessible by a mine lift. No one is allowed to go down there while the LHC is running, save for CERN’s fire department, which needs to move in quickly if any radioactive or hazardous materials combust.

The heavy ions that collide inside that machine don’t originate in this building. Several miles away sits the old SPS, transformed into LHC’s first steppingstone. SPS accelerates bunches of lead nuclei up to very near the speed of light. Once they’re ready, the shorter collider unloads them into the longer one.

But unlike SPS, LHC doesn’t do fixed-target experiments. Instead, ALICE creates a magnetic squeeze that goads lead beams, racing in opposite directions, into violently crashing head-on.

Lead ions make fine ingredients. A lead-208 ion has 82 protons and 126 neutrons, and both of those are “magic numbers” that help make the nuclei as spherical as nuclei can become. Spherical nuclei create better collisions. (Across the Atlantic, Brookhaven’s RHIC uses gold ions.)

ALICE’s detector isn’t a camera; QGP isn’t like a ball of light that you can “see.” When these lead ions collide at high energies, they erupt into a flash of QGP, which dissipates into a perfect storm of smaller particles. Instead of watching for light, the detector watches the particles as they cascade away. 

A proton-proton collision might produce a few dozen particles—maybe a hundred, if physicists are lucky. A heavy-ion collision produces several thousand.

When heavy ions collide, they create a flash of QGP and spiky jets of more “normal” particles: often combinations of heavy quarks, like charm and beauty quarks. The jets pierce through the QGP before they reach the detector. Physicists can reconstruct what the QGP looked like by examining those jets and how they changed as they passed through.

First those particles crash through silicon chips not unlike the pixels in your smartphone. Then the particles pass through a time projection chamber: a cylinder filled with gas. Still streaking at high energy, they shoot through the gas atoms like meteors through the upper atmosphere. They knock electrons free of their atoms, leaving brilliant trails that the chamber can pick up.

inner workings at CERN
After completing major upgrades in 2021, the ALICE team is ready for Run 3, where they aim to increase the number of particle collisions they sample by 50 times. Jan Hosan / CERN / Fotogloria Agency

For fans of particle physics equipment, the time projection chamber makes ALICE special. “It’s super useful, but the downside of it, and why other experiments don’t use it, is it’s very slow,” says Zardoshti. “The process takes, I think, roughly something on the order of a millionth of a second.”

ALICE creates about 3.5 terabytes of data—around a hundred Blu-ray discs’ worth—each second. Physicists process that data to reconstruct the QGP that produced the particles. Some of it is crunched on-site, but much of it is farmed out to a vast global network of computers.

From particle accelerators to neutron stars

Particle physics is a field that always has one foot extended decades into the future. While ALICE kicked into operation in 2010, physicists had already begun sketching it out in the early 1990s, years before scientists had even detected QGP at all. 

One of their current big questions is whether they can make QGP by smashing ions smaller than lead or gold. They’ve already succeeded with xenon; later this year, they want to try an even lighter nucleus: oxygen. “We want to see: Where is the transition where we can make this material?” says Zardoshti. “Is oxygen already too light?” They expect the life-giving element to work. But in particle physics, there’s no knowing for certain until after the fact.

In the longer term, ALICE’s stewards have big plans. After 2025, the LHC will shut off for several years for maintenance and upgrades, which will boost the collider’s collision rate. Alongside those upgrades will come a wholesale renovation of ALICE’s detector, scheduled for installation as early as 2033. All of this is planned out precisely many years in advance.

CERN’s stewards are daring to draft a device for an even more distant future, a Future Circular Collider that would be more than three times the LHC’s size and wouldn’t be online till the 2050s. No one is sure yet if it will pan out; if it does, it will require securing an investment of more than 20 billion euros.

ALICE project's inner workings at CERN
ALICE’s inner tracking system holds the record for the biggest pixel system ever built. Felix Reidt / Jochen Klein / CERN

Higher energies, larger colliders, and more sensitive detectors all make for stronger tools in QGP-watchers’ arsenals. The particles they’re seeking are tiny and incredibly short-lived, and they need those tools to see more of them.

But while particle physicists have spent billions of euros and decades of effort bringing fragments of the very early universe back into reality, some astrophysicists think the universe may have been running the same experiment on its own all along.

Instead of a particle accelerator, the universe can avail itself of a far more powerful appliance: a neutron star. 

When an immense star, far more massive than our sun, ends its life in a spectacular supernova, the shard of a core that remains begins to cave in. The core can’t be too large, or else it will collapse into a black hole. But if the mass is just right, the core will reach pressures and temperatures that might just tear atomic nuclei apart into quarks. It’s like the ALICE experiment at scale in a more natural setting—the unruly universe, where it all began.

The post How the world’s biggest particle accelerator is racing to cook up plasma from after the big bang appeared first on Popular Science.

]]>
The surprising strategy behind running the fastest marathon https://www.popsci.com/science/marathon-running-formations/ Thu, 24 Aug 2023 12:00:00 +0000 https://www.popsci.com/?p=565059
Runners forming a V-shape with marathoner Eliud Kipchoge in the rear, on a road beside trees.
Eliud Kipchoge, at left in a white tank, behind pacers forming an aerodynamic V in 2019. Robert Szaniszlo/NurPhoto/Getty Images

Aerodynamics experts disagree over the best shape for reducing drag: Is it a V or a swordfish?

The post The surprising strategy behind running the fastest marathon appeared first on Popular Science.

]]>

In 2019, Kenyan long-distance runner Eliud Kipchoge became the first person to run a marathon in under two hours. This achievement had held an almost mythical status—existing, as some physiologists projected, at the edge of a human body’s capabilities. But in the years leading up to Kipchoge’s feat, he and other elite athletes had squeezed their marathon times ever closer. Finally, four years ago, in an unsanctioned event along a flat six-mile loop in Vienna, Kipchoge cruised at record speed, completing 26.2 miles in one hour, 59 minutes, and 40 seconds.

Officially, the two-hour marathon barrier has not been broken, according to World Athletics, the organization that keeps race records. For one, there were no other competitors in Kipchoge’s event. Plus, he wasn’t alone. He ran in a pack with several expert runners, known as pacers or “rabbits,” creating an aerodynamic shape around him. Wind tunnel tests and computer simulations helped the team finesse the formation: Five pacers positioned in front made a V, like the inverse of a flock of geese, while two additional racers ran slightly behind the marathoner, at his left and right flank. After each loop, the pacers rotated out for a fresh set of legs. With this protection from headwinds, Kipchoge completed the distance more than a minute faster than he ever had.

But as it turns out, there may be an even better drafting formation for running a marathon—or so says a team of researchers at École Centrale in Lyon, France, who recently tested multiple shapes by placing miniature manikins in a wind tunnel. They’ve proposed an arrangement, somewhat resembling a swordfish’s body, that would reduce drag forces by 60 percent on a runner. It would result in marathon times about a minute quicker than the Vienna V, they claim. Using the swordfish shape, it “may be possible to run the fastest marathon ever,” the study authors write in a paper recently published in the journal Proceedings of the Royal Society: A

But the scientists who helped develop and verify the buzzed-about 2019 configuration dispute that assertion, citing flaws in the new study’s wind tunnel setup and the proportions of the mini model runners used to explore it. “The paper itself shows huge differences between present results and those from previous studies,” says Bert Blocken, a professor of civil engineering at KU Leuven in Belgium, who used computer and wind tunnel simulations to analyze drag reduction in the Vienna race. While the Proceedings paper indicates that a V-shape reduces drag by 50 percent, Blocken says that past work found it was closer to 85 percent.

[Related: How epic wind tunnels on Earth make us better at flying through space]

The benefits of marathon formations are not in dispute here. Aerodynamics researchers agree that packs offer an advantage over going solo, especially at the pace elite runners travel. The drag force acting on an object is proportional to the object’s speed squared, points out Pietro Salizzoni, a professor of fluid mechanics and an author of the new study. In other words, the faster you go, the more extreme the headwinds you face. On a leisurely stroll, these disturbances are essentially undetectable. But at the speed Kipchoge ran for his record—an average of 4.5-minute miles, or 13 miles an hour, which would feel like a sprint to most people—pushing air out of the way becomes a literal drag.
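
That square law is simple enough to sketch in a few lines of Python. The drag coefficient and frontal area below are rough stand-ins for a human runner, assumed for illustration rather than taken from either study:

```python
# Drag force F = 0.5 * rho * Cd * A * v^2, plus the power needed to fight it.
# Cd and AREA are order-of-magnitude stand-ins for a runner, assumed here.
RHO = 1.2    # air density, kg/m^3
CD = 1.0     # drag coefficient (assumed)
AREA = 0.5   # frontal area, m^2 (assumed)

def drag_force(v_ms):
    """Aerodynamic drag, in newtons, at speed v_ms (m/s)."""
    return 0.5 * RHO * CD * AREA * v_ms ** 2

for label, v in [("easy jog, 2.5 m/s", 2.5), ("record pace, ~5.9 m/s", 5.9)]:
    f = drag_force(v)
    print(f"{label}: drag ~{f:.1f} N, power to overcome ~{f * v:.0f} W")
```

With these assumed numbers, cutting drag by 60 percent at record pace saves a few tens of watts: a small figure in absolute terms, but meaningful at the margin where a two-hour marathon lives.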

Orange figurines showing a V-shape formation.
Blocken and his team tested these figures, scanned from marathon runners, in a wind tunnel. KU Leuven

For Vienna, three studies pointed to a V-shaped pack, Blocken says: a UK consultancy company’s tests of 110 formations using fluid dynamics simulations; his team’s own computer simulations of 15 formations; and wind tunnel experiments involving 10 formations. (Confidentiality clauses from INEOS, the British chemical company that sponsored Kipchoge’s race, mean those reports have not been published.) “All three detailed previous studies gave the same outcome,” Blocken explains: The configuration with a “V-shape in front of the target athlete and two runners behind him” produced the lowest aerodynamic drag.

Salizzoni and his coauthors independently tested eight formation styles, including the INEOS V-shape, by mounting stationary 6.5-inch manikins in an indoor wind tunnel. Their goal was to measure air resistance proportional to what a marathoner would experience. Ultimately, they also found a benefit to placing two pacers in the back. The force acting on a running target “is the sum of the pressure on the front and on behind,” Salizzoni notes—those in the rear help minimize any pressure drops. “You want to control the wake you are producing,” he says, similar to the back wings on a Formula One race car.

A diagram of the swordfish running shape.
A top view showing where pacers would be, in blue, and the target athlete, in red, for three permutations of the swordfish formation. (The measurements are 1/10 scale, in centimeters.) Marro et al. Proc. R. Soc. A

Where the new findings differ substantially is in the positions of the pacers out front. Salizzoni’s team concluded the most effective was a swordfish-profile shape: a lone pacer, followed by four other pacers forming a skinny diamond four feet behind, and finally the target athlete less than five feet behind the diamond’s rear. The “narrower wedge” in this formation could allow runners to “sort of slice through the air,” University of Colorado Boulder physiologist Rodger Kram, who wasn’t part of the research team, told Science News.

[Related: Why do marathon runners get the runs?]

Blocken remains unconvinced—he argues that the team’s manikins were inappropriately proportioned. “The model used in the study by the present authors seems to be some sort of small cartoon model that is very different from the geometry of a real human body,” he says, referring to the unrealistic chest-belly ratio and sharp edges of the models’ poseable joints. Blocken’s studies used smooth and solid manikins based on scans of real human marathoners. 

Plastic figures used to test a marathon formation.
A poseable manikin used to test the eight drafting formations. Marro et al. Proc. R. Soc. A

Salizzoni counters that their figurines had an “equivalent area and an equivalent form” to an athlete, and that using moveable models helped provide more accurate data. After all, humans in motion don’t have their arms and legs fixed in place. This “could still give a realistic result for the single runner,” Blocken says, but he points out that the fluid dynamics of formations are much more sensitive to subtle changes. In fact, as the New Yorker noted at the time, the configuration for the 2019 race was so precisely tuned that if Kipchoge moved five inches out of place, he would be much more exposed to aerodynamic drag.

It may be some time before a top marathoner puts another pack run to the test. Kipchoge typically only competes in two marathons a year. His second race of 2023 will be in Berlin in September, where he officially set the world record in 2022—without the help of a formation—at 2:01:09.

The post The surprising strategy behind running the fastest marathon appeared first on Popular Science.

]]>
A fleeting subatomic particle may be exposing flaws in a major physics theory https://www.popsci.com/science/muon-measurement-fermilab/ Thu, 17 Aug 2023 18:00:00 +0000 https://www.popsci.com/?p=563623
The ring-shaped machinery of the Fermi National Accelerator Laboratory.
The Department of Energy’s Fermi National Accelerator Laboratory near Chicago. Ryan Postel/Fermilab

A refined measurement for subatomic muons has major implications—if fundamental theories are accurate.

The post A fleeting subatomic particle may be exposing flaws in a major physics theory appeared first on Popular Science.

]]>

One of the biggest questions in particle physics is whether the field’s core theory paints an incomplete picture of the universe. At Fermilab, a US Department of Energy facility in suburban Chicago, particle physicists are trying to resolve this identity crisis. There, members of the Muon g–2 (pronounced “g minus 2”) Collaboration have been carefully measuring a peculiar particle known as a muon. Last week, they released their updated results: the muon—a heavier, more ephemeral counterpart of the electron—may be under the influence of something unknown.

If accurate, it’s a sign that the theories forming the foundation of modern particle physics don’t tell the whole story. Or is it? While the Collaboration’s scientists have been studying muons, theoretical researchers have been re-evaluating their own predictions, leaving it in doubt whether any discrepancy exists at all.

“Either way, there’s something that’s not understood, and it needs to be resolved,” says Ian Bailey, a particle physicist at Lancaster University in the UK and a member of the Muon g–2 collaboration.

The tried and tested basic law of modern particle physics—what scientists call the Standard Model—enshrines the muon as one of our universe’s fundamental building blocks. Muons, like electrons, are subatomic particles that carry negative electrical charge; unlike electrons, muons decay after a few millionths of a second. Still, scientists readily encounter muons in the wild. Earth’s upper atmosphere is laced with muon rain, spawned by high-energy cosmic rays striking our planet. 

But if the muon doesn’t always look like physicists expect it to look, that is a sign that the Standard Model is incomplete, and some hitherto unknown physics is at play. “The muon, it turns out, is predicted to have more sensitivity to the existence of new physics than…the electron,” says Bailey.

[Related: The green revolution is coming for power-hungry particle accelerators]

Also like electrons, muons spin like whirling tops, which makes each one a tiny magnet. The titular g measures the strength of that magnet relative to the particle’s spin. For a muon in perfect isolation, g would be exactly 2. In reality, muons don’t exist in isolation. Even in a vacuum, muons are hounded by throngs of short-lived “virtual particles” that pop in and out of quantum existence, nudging g away from 2.

The Standard Model should account for these particles, too. But in the 2000s, scientists at Brookhaven National Laboratory measured g and found that it was subtly but significantly greater than the Standard Model’s prediction. Perhaps the Brookhaven scientists had gotten it wrong—or, perhaps, the muon was at the mercy of particles or forces the Standard Model doesn’t consider.

Breaking the Standard Model would be one of the biggest moments in particle physics history, and particle physicists don’t take such disruption lightly. The Brookhaven scientists moved their experiment to Fermilab in Illinois, where they could take advantage of a more powerful particle accelerator to mass-produce muons. In 2018, the Muon g–2 experiment began.

Three years later, the experimental collaboration released their first results, suggesting that Brookhaven hadn’t made a mistake or seen an illusion. The results released last week add data from two additional runs, taken in 2019 and 2020, corroborating what was published in 2021 and improving its precision. Their observed value for g—around 2.0023—diverges from what theory would predict after the eighth decimal place.
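
Physicists usually track that divergence through the “anomaly” a = (g - 2)/2 rather than through g itself. Here is a quick sketch using commonly quoted approximate central values (the official figures, with their uncertainties, are in the collaboration’s papers):

```python
# The muon anomaly a = (g - 2)/2. Approximate published central values,
# used here only to show the scale of the experiment-theory gap.
a_exp = 116_592_059e-11  # ~world-average measurement after the 2023 update
a_sm = 116_591_810e-11   # ~2020 Standard Model consensus prediction

g_exp = 2 * (1 + a_exp)  # the measured g itself
gap = a_exp - a_sm       # ~2.5e-9

print(f"measured g ~ {g_exp:.11f}")
print(f"experiment minus theory, in a: {gap:.1e}")
```

In g itself, that gap shows up around the ninth decimal place, which is why the experiment chases parts-per-billion precision.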

[Related: Scientists found a fleeting particle from the universe’s first moments]

“We’ve got a true value of the magnetic anomaly pinned down nicely,” says Lawrence Gibbons, a particle physicist at Cornell University and a member of the Muon g–2 collaboration.

Had this result come out several years ago, physicists might have heralded it as definitive proof of physics beyond the Standard Model. But today, it’s not so straightforward. Few affairs of the quantum world are simple, but the spanner in these quantum works is the fact that the Standard Model’s prediction itself is blurry.

“There has been a change coming from the theory side,” says Bailey.

Physicists think that the “virtual particles” that pull at a muon’s g do so with different forces. Some particles yank with electromagnetism, whose influence is easy to calculate. Others do so via the strong nuclear force (whose effects we mainly notice because it holds particles together inside atomic nuclei). Computing the strong nuclear force’s influence is nightmarishly complex, and theoretical particle physicists often substituted data from past experiments in their calculations. 

Recently, however, some groups of theorists have adopted a technique known as “lattice quantum chromodynamics,” or lattice QCD, which allows them to crunch strong nuclear force numbers on computers. When scientists feed lattice QCD numbers into their g predictions, they produce a result that’s more in line with Muon g–2’s results.

Adding to the confusion is that a different particle experiment located in Siberia—known as CMD-3—produced a result that also makes the Muon g–2 discrepancy disappear. “That one is a real head scratcher,” says Gibbons.

The Muon g–2 Collaboration isn’t done. Crunching through three times as much data, collected between 2021 and 2023, remains on the collaboration’s to-do list. Once they analyze all that data, which may be ready in 2025, physicists believe they can make their g minus 2 estimate twice as precise. But it’s not clear whether this refinement would settle things, as theoretical physicists race to update their predictions. The question of whether or not muons really are misbehaving remains an open one.

The post A fleeting subatomic particle may be exposing flaws in a major physics theory appeared first on Popular Science.

]]>
When two stars orbit each other, gravity gets weird https://www.popsci.com/science/theory-of-gravity-alternative/ Tue, 15 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=563124
A purple galaxy cluster against a black background of space, studded with stars.
Studying galaxy clusters such as this one helps astronomers look for the nature of dark matter. NASA/CXO/Fabian et al.; Gendron-Marsolais et al.; NRAO/AUI/NSF; SDSS

Newton and Einstein's explanations for gravity might not fully explain some cosmic phenomena.

The post When two stars orbit each other, gravity gets weird appeared first on Popular Science.

]]>

The idea of gravity as we know it has been around for a long time. More than 300 years ago, Isaac Newton first shared his theory of gravitation, describing how massive objects are attracted to each other. Then, around a hundred years ago, Albert Einstein refined and expanded upon Newton’s ideas to create the theory of relativity—explaining gravity as the way objects, especially at the extremes across the universe, warp the fabric of space around them.

But there are still a few mysteries in the cosmos that even the well-tested ideas of relativity can’t explain. The biggest one? Dark matter, the most notorious problem in astronomy today. Many scientists think dark matter is some kind of yet-unknown particle that obeys traditional laws of gravity. Others think the issue is actually gravity itself. In that view, perhaps we need a modified theory of gravity—also known as MOND, for MOdified Newtonian Dynamics—where, at the largest and smallest scales, gravity acts differently from the usual Newton or Einstein theories.

MOND is often met with significant skepticism, because Newton and Einstein’s ideas of gravity have had so much success. But new observations recently published in The Astrophysical Journal claim to provide evidence for modified gravity by taking a detailed look at the ways binary stars move around each other. 

“The new results provide direct evidence that Newton’s theory simply breaks down” at certain scales, explains Kyu-Hyun Chae, an astronomer at Sejong University in Seoul, South Korea, and author of the new paper claiming evidence for MOND. Chae used data from the European Gaia satellite, which has been measuring the positions and motions of stars with unprecedented precision over the past decade. In particular, he looked at binary stars with particularly wide, far-apart orbits to measure their accelerations, for which MOND and traditional theories predict different values.

[Related: Have we been measuring gravity wrong this whole time?]

These spaced-out stars move pretty slowly, enabling tests of gravity where there are tiny accelerations. These small accelerations are where the two theories of gravity diverge, and modified gravity predicts the stars will move 30 to 40 percent faster than they would under “normal” gravity—precisely what Chae claims to have seen in the data. At the small scales of binary stars, too, according to Chae, dark matter can’t really have an effect, so it can’t explain the observed differences from the predictions of traditional gravity.
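
Why wide binaries make such a sensitive test comes down to Newton’s own formula: the internal acceleration of a binary, g = GM/r², only sinks toward MOND’s proposed threshold of about 1.2 × 10⁻¹⁰ meters per second squared at separations of thousands of astronomical units. Here is a sketch with assumed, illustrative numbers (not values from Chae’s catalog):

```python
# Newtonian internal acceleration g_N = G*M/r^2 of a wide binary, compared
# with MOND's characteristic scale a0. Mass and separations are assumed.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m
A0 = 1.2e-10       # MOND acceleration scale, m/s^2

m_total = 1.0 * M_SUN  # assumed total mass of the pair

for sep_au in (100, 1_000, 5_000):
    g_n = G * m_total / (sep_au * AU) ** 2
    print(f"{sep_au:>5} AU: g_N = {g_n:.1e} m/s^2 ({g_n / A0:,.1f} x a0)")
```

Only the widest pairs dip toward a0, which is where the predicted 30-to-40-percent speed-up should appear.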

Xavier Hernandez, an astronomer at the National Autonomous University of Mexico who first proposed the idea of testing gravity with wide binary systems but wasn’t involved in the new work, has confidence in these new results, especially since they complement his past work. “Two largely independent and complementary approaches have been shown to yield the same result,” he says, emphasizing that this is a clear example of the scientific process.

The best explanation for Chae’s observations is a particular flavor of modified gravity theories, called AQUAL MOND. But just because gravity might not be a perfect match to one theory doesn’t mean we need to throw out everything we have. “There are many versions of modified gravity because it can be anything that goes beyond Einstein’s theory of general relativity,” said physicist Sergei Ketov in a news release from the University of Tokyo Kavli Institute. “Modified gravity does not rule out Einstein’s theory, but it shows its boundaries.”

[Related: Gravity could be bringing you down with IBS]

Not all in the scientific community are convinced this is actually a “smoking-gun” for MOND, though. “The quick answer is that this result is a confluence of three things: good science, bad science, and the ugly state of science news,” wrote science communicator Ethan Siegel on Friday in his column Starts with a Bang. Siegel and other scientists have expressed concerns about the reliability of the observations used in Chae’s study—with some even publishing contradictory research—and discontent with news articles creating the impression that this work is a decisive victory for modified gravity. Depending on what stars scientists include in their analysis, the results vary, and these scientists currently disagree on what assumptions are the correct ones to make.

“If anyone is truly skeptical, he/she should try to disprove my results,” counters Chae. However, he empathizes with the motivation for some of the disbelief. The analyses at odds with this research, he adds, failed to include an important self-calibration step. Current modified gravity theories are “like the Bohr model of atoms without quantum physics developed yet. But, we need to remember that quantum physics was eventually developed,” he adds. (The Bohr model is the classic elementary-school science view of an atom, with electrons orbiting a nucleus, which was later replaced by the much fuzzier and probabilistic view of quantum mechanics.)

Only time and many other tests will be able to determine which theory will come out on top, and if dark matter is a particle or just a tweak to gravity. “We have these binary stars orbiting each other in front of us, and not doing what Newton said they should be doing,” says Hernandez. “Not considering modified gravity is no longer an option.”

This post has been updated with additional comments from Chae.

The post When two stars orbit each other, gravity gets weird appeared first on Popular Science.

]]>
How a US lab created energy with fusion—again https://www.popsci.com/science/nuclear-fusion-second-success-nif/ Sun, 13 Aug 2023 17:00:00 +0000 https://www.popsci.com/?p=562508
Machinery at the center of the National Ignition Facility.
The target chamber of LLNL’s National Ignition Facility. Lawrence Livermore National Laboratory

A barrage of X-rays hit a tiny pellet at temperatures and pressures greater than our sun's.

The post How a US lab created energy with fusion—again appeared first on Popular Science.

]]>

About eight months ago, scientists at a US-government-funded lab replicated the process that powers stars—nuclear fusion—and created more energy than they put in. Now, physicists and engineers at the same facility, the National Ignition Facility (NIF) at Northern California’s Lawrence Livermore National Laboratory, appear to have successfully created an energy-gaining fusion experiment for the second time.

NIF’s latest achievement is a step closer—the second step down a very long road—to a dream of fusion providing the world with clean, abundant energy. There is a long way to go before a fusion power plant opens in your city. But scientists are optimistic.

“It indicates that the scientists at [NIF] and their collaborators understand what happened back in December well enough that they have been able to make it happen again,” says John Pasley, a fusion scientist at the University of York in the UK who wasn’t part of this experiment.

NIF declined to comment, noting that the facility’s scientists had not yet formally presented their results. Until that happens, there’s a lot we won’t know about the specifics of the experiment, which took place on July 30.

There are multiple ways of achieving fusion, and NIF works with one, called inertial confinement fusion (ICF). In NIF’s setup, a high-powered laser beam splits into 192 smaller beams, showering a small cylindrical canister that scientists call a hohlraum. The barrage makes the hohlraum’s inner walls spew X-rays, which crash into the fuel capsule suspended inside: a pellet of deuterium and tritium, super-squeezing it at temperatures and pressures more intense than the sun’s, initiating fusion.

The goal of all this work is to pass the break-even point and create more energy than the laser puts in: an achievement that fusion scientists call gain. In December’s experiment, 2.05 megajoules of laser beams elicited 3.15 megajoules of fusion energy. We won’t know for sure until NIF releases its data, but unnamed sources told the Financial Times that this second success created even greater gain.

[Related: Cold fusion is making a scientific comeback]

In addition, the December experiment achieved self-heating: a state where the fusion reaction powered itself, like a fire that no longer needs stoking. Many scientists think self-heating is a prerequisite to generating power in ICF. Outside scientists speculate that NIF’s new experiment also achieved self-heating.

“An obvious part of the scientific process is that you get the same result,” says Dennis Whyte, a fusion scientist at MIT who also wasn’t involved in the NIF research. “Of course, that’s extremely heartening.”

This is no small feat. ICF experiments are notoriously delicate. Very subtle changes to the lasers’ angles, to the shapes of the hohlraum and the pellet, and to any one of dozens of other factors could drastically alter the output. NIF in December barely scratched the surface of fusion gain, and it’s clear that tiny changes were the difference between passing break-even and not.

“We also repeat things, not just to see if they repeat, but also to see the sensitivities,” Whyte says. “Seeing the variability and the differences of those from experiment to experiment is really exciting.”

Since the 1950s, fusion scientists have chased what the NIF team has now accomplished twice in the span of a year. But the long-term goal is to turn these experimental forays into clean, cheap, abundant energy for the world’s people. Converting that milestone into a power plant is another quest entirely, and it has only just begun. If creating gain in the lab is like learning to light a fire, then using it to generate electricity is like building a steam engine.

“I would like to see them gradually shift some of their focus from demonstration of ignition and gain toward investigation of target designs that are closer to those which might be employed in a fusion power reactor,” Pasley says. 

[Related: Microsoft thinks this startup can deliver on nuclear fusion by 2028]

To build a viable power plant, NIF will need to show greater gain. The December experiment created about 1.5 times as much energy as the NIF scientists put in. Even if the July experiment created two or three times as much energy, NIF won’t have come close to the gain that fusion scientists think is necessary for a viable power plant: some 100 times.

Gain of that magnitude would also make fusion a viable addition to the larger electrical grid. It’s difficult to overstate the importance of NIF’s achievement, but the facility didn’t actually generate more energy than it took from the outside world. To power the laser that created those 3.15 megajoules, the device needed 300 megajoules from California’s grid.
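
The two ways of keeping score are just ratios. Here is a sketch using December’s published figures (July’s exact numbers weren’t public at the time of writing):

```python
# Two ways of scoring a fusion shot: target gain counts only laser energy
# delivered to the fuel; wall-plug gain counts what the facility drew from
# the grid to fire that laser.
laser_on_target_mj = 2.05  # December 2022 shot
fusion_yield_mj = 3.15
grid_draw_mj = 300         # rough draw from California's grid

target_gain = fusion_yield_mj / laser_on_target_mj  # ~1.5
wall_plug_gain = fusion_yield_mj / grid_draw_mj     # ~0.01

print(f"target gain:    {target_gain:.2f}")
print(f"wall-plug gain: {wall_plug_gain:.3f}")
```

By the first score, NIF is past break-even; by the second, and by the roughly 100-fold target gain a power plant would demand, there is a long way to go.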

NIF isn’t really the optimal place to complete this quest, partly because it was built to maintain the US nuclear weapons stockpile and can’t focus on fusion all the time. But for now, NIF will likely keep trying, running more and more laser shots. And scientists can compare the results with simulations to understand what is happening under the surface.

“What we assume is going to happen now is we’re going to get dozens of [runs], and we’re going to really learn a lot,” Whyte says.

The post How a US lab created energy with fusion—again appeared first on Popular Science.

]]>
Colorado is getting a state-of-the-art laser fusion facility https://www.popsci.com/technology/laser-fusion-facility-csu/ Thu, 10 Aug 2023 18:00:00 +0000 https://www.popsci.com/?p=562282
Green high density laser array
High density laser-created plasma physics could help build nuclear fusion technology. Colorado State University

The $150 million project aims to help advance nuclear fusion energy research alongside other physics goals.

The post Colorado is getting a state-of-the-art laser fusion facility appeared first on Popular Science.

]]>

The path to fusion power is getting a $150 million boost thanks to a partnership between Colorado State University and the private laser energy company Marvel Fusion. Announced on Monday, the new facility will be located on the CSU Foothills Campus, and is set to feature at least three multi-petawatt laser systems designed to advance research in “clean fusion energy, microelectronics, optics and photonics, materials science, medical imaging, and high energy density science.”

According to Marvel Fusion’s August 7 statement, the development of laser fusion “is critical because of its ability to dramatically reduce the carbon footprint of how energy is supplied globally.” Nuclear fusion has long been considered the Holy Grail of clean energy generation—the necessary fuel is virtually unlimited, and the process releases vastly more energy than other green alternatives. Unlike the nuclear fission reactions in traditional nuclear power plants, which split heavy atoms apart, fusion forces light atomic nuclei together at extremely high temperatures, producing a heavier nucleus whose mass is slightly less than the sum of its parts; the missing mass is released as energy.

[Related: In 5 seconds, this fusion reactor made enough energy to power a home for a day.]

“This is an exciting opportunity for laser-based science, a dream facility for discovery and advanced technology development with great potential for societal impact,” said Jorge Rocca, director of CSU’s Laboratory for Advanced Lasers and Extreme Photonics, in this week’s announcement.

Although the CSU-Marvel Fusion project aims to begin operations in 2026, it is likely still many more years before nuclear fusion energy can affordably be produced at scale; some experts estimate it could take multiple decades to reach the goal, if ever.

Particle Physics photo
Rendering of CSU and Marvel Fusion’s new laser facility. Credit: Hord Coplan Macht

Still, researchers have made significant gains toward sustainable nuclear fusion energy. In 2021, for example, a team in the UK generated a record-breaking 59 megajoules of energy in only five seconds via fusion technology—enough to power a home for an entire day. Earlier this year, the US government also doled out a number of grants earmarked to reignite research into cold fusion.
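
Those figures convert directly into more familiar units. A sketch of the arithmetic, assuming the UK shot’s published 59 megajoules over five seconds:

```python
# Convert the UK record shot into average power and household-scale units.
energy_mj = 59.0   # megajoules released
duration_s = 5.0   # seconds of sustained fusion

avg_power_mw = energy_mj / duration_s  # megawatts while the shot ran
energy_kwh = energy_mj / 3.6           # 1 kWh = 3.6 MJ

print(f"average power: {avg_power_mw:.1f} MW")
print(f"total energy:  {energy_kwh:.1f} kWh")
```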

[Related: Cold fusion is making a scientific comeback.]

The overall prospects are tantalizing enough that major companies are investing heavily in fusion research. Earlier this year, Microsoft announced a power purchasing agreement with Helion, a startup hoping to achieve sustainable fusion by 2028. As ambitious as that may sound, Helion’s aspirations have garnered the interest of other investors—including OpenAI CEO Sam Altman, who contributed $375 million to the company in 2021.

Alongside the new partnership project, Marvel Fusion is working towards the construction of a prototype composed of hundreds of laser systems “capable of achieving fusion ignition and proving the technology at scale.”

The post Colorado is getting a state-of-the-art laser fusion facility appeared first on Popular Science.

]]>
Two tiny stars fit into an orbit smaller than our sun https://www.popsci.com/science/tiny-star-binary-system/ Tue, 08 Aug 2023 10:00:00 +0000 https://www.popsci.com/?p=561717
An illustration of a brown dwarf and a hotter star, in white.
A NASA illustration of a binary system, including a brown dwarf, though its pictured companion (to the upper left) is a long-dead white dwarf. NOIRLab/NSF/AURA/P. Marenfeld/Acknowledgement: William Pendrill

This unusual system 'shouldn't exist,' says one astronomer, who notes the orbit is as long as his daily commute.

The post Two tiny stars fit into an orbit smaller than our sun appeared first on Popular Science.

]]>

Reality is stranger than fiction, especially in space, where astronomers just spotted two tiny stars orbiting so close together that the whole system could fit inside our sun. In a new article submitted to the Open Journal of Astrophysics, researchers present the discovery of ZTF J2020+5033, a not-quite-a-star object called a brown dwarf that’s circling a small, low-mass star.

This is what’s known as a binary system, where two stars are bound to each other in a sort of gravitational dance—think the iconic twin suns in the sky above Tatooine, the Star Wars planet. What’s wild about this particular new—and very real—binary is just how small it is. “This system shouldn’t exist,” says Mark Popinchalk, an astronomer at the American Museum of Natural History not involved in the new research. 

The brown dwarf completes one lap of its parent star in just under two hours, about the time it takes Popinchalk to commute from Brooklyn to his Manhattan office and back. “I would have been skeptical of the system,” he adds, but the authors have collected “an impressive amount of data” using multiple telescopes and techniques to support this discovery.

[Related: Your guide to the types of stars, from their dusty births to violent deaths]

“The orbit is much tighter (i.e., smaller, with a shorter orbital period) than any previously discovered brown dwarf binaries,” says lead author Kareem El-Badry, an astronomer at Caltech. “Until now it seemed like these kinds of binaries were unable to reach such short periods, but this system shows that is not the case.”
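
Kepler’s third law shows why such a short period means the whole system really does fit inside our sun. Here is a sketch with assumed, round-number masses for the star and brown dwarf (the paper’s fitted values differ somewhat):

```python
# Kepler's third law: a^3 = G * (m1 + m2) * P^2 / (4 * pi^2).
# Component masses below are assumed, illustrative values.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg
R_SUN = 6.957e8    # m

period_s = 1.9 * 3600                # just under two hours
m_total = 0.13 * M_SUN + 80 * M_JUP  # assumed star + brown dwarf masses

a = (G * m_total * period_s**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"separation: {a / 1e3:,.0f} km, or {a / R_SUN:.2f} solar radii")
```

Under these assumptions, the two objects sit only about half a solar radius apart.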

Binary systems are an important tool for astronomers to understand stars more generally. Thanks to the gravitational interactions between the two components, researchers can measure mass, radius, temperature, and other key properties more reliably and accurately for binaries than they can when observing lone stars. These measurements are needed to test our models and understanding of how stars change over time.

The center of this binary system is a low-mass star—something smaller than our sun—with a brown dwarf orbiting around it. Brown dwarfs are sometimes called “failed stars” because they’re not quite big enough to be a star but too big to be a planet. “Failed stars” may be a misnomer, though, since astronomers are still trying to figure out if brown dwarfs and stars are born the same way.

This particular newly discovered brown dwarf, which is about 80 times the mass of Jupiter, is on the cusp of being massive enough to be a star. Studying it in particular can help astronomers unravel how these intermediate objects came to be. “The way brown dwarfs form still has several big question marks around it, and each brown dwarf/low-mass star binary system is an important laboratory to answer these questions,” says Popinchalk. ZTF J2020+5033 is such a large example of a brown dwarf that someday, if any of its partner star’s material transfers onto it, that addition might push the brown dwarf into star territory—“like a cosmic gift, some mass passed on to an old friend to help them over the line and into the category of full fledged star,” says Popinchalk.

[Related: Dust clumps around a young star could one day form planets]

Plus, this new binary’s tight orbit poses a puzzle for researchers. Stars are puffier when they’re young—so much so that when this pair was young, the two objects could not have orbited this close without touching. “A majority of known brown dwarfs are young and inflated,” says El-Badry. “So it lets us test models for how brown dwarfs should cool as they age.” That youthful puffiness also means the pair couldn’t possibly have spent their whole lives in this orbit; instead, the orbit somehow shrank along with the stars, by a factor of five, over their lifetimes.

The authors propose the shrinking orbit could be caused by magnetic braking, in which charged particles streaming off a star are funneled along its magnetic field, robbing the system of angular momentum. Existing models assume that magnetic braking doesn’t work for small stars, but it looks like it must be operating here. If small stars shed angular momentum more readily than previously thought, this could have big impacts for the evolution of other types of binary stars too—X-ray binaries that pair a neutron star with a low-mass star, or cataclysmic variables that pair a low-mass star with a white dwarf.

The post Two tiny stars fit into an orbit smaller than our sun appeared first on Popular Science.

]]>
Plasma beams could one day cool overheating electronics in a flash https://www.popsci.com/technology/plasma-cooling-ray/ Thu, 03 Aug 2023 16:00:00 +0000 https://www.popsci.com/?p=561001
Two researchers look at green plasma ray in lab setting
Plasma beams can get extremely hot, but not before potentially flash cooling a target. Tom Cogill / UVA

Researchers have developed a 'freeze ray' that relies on thermodynamic quirks to chill its targets.

The post Plasma beams could one day cool overheating electronics in a flash appeared first on Popular Science.

]]>

Earth’s air is often a decent, convenient coolant for military planes’ electronics systems, while ocean waters function similarly for naval ships. But neither source is available far from the planet’s surface—in the upper atmosphere or outer space, for example. There, it’s much more difficult to keep electronics at safe temperatures, given that coolant is heavy and takes up valuable onboard real estate. According to new findings recently published in ACS Nano, one potential aid could be found via harnessing plasma—ironically, the same matter that composes stars and lightning bolts.

Researchers at the University of Virginia’s Experiments and Simulations in Thermal Engineering (ExSITE) Lab have discovered an extremely promising, previously unrealized way to quickly cool down surfaces: plasma “freeze” rays.

[Related: Will future planes fly on wings of plasma?]

Using plasma to lower temperatures may seem counterintuitive—after all, plasma can easily heat to 45,000 degrees Fahrenheit, and higher—but according to mechanical and aerospace engineer Patrick Hopkins, shooting a focused jet of matter’s fourth state can offer some incredibly interesting thermodynamic results.

“What I specialize in is doing really, really fast and really, really small measurements of temperature,” Hopkins recently explained. “So when we turned on the plasma, we could measure temperature immediately where the plasma hit, then we could see how the surface changed.”

In their experiments, Hopkins’ team fired a purple jet of helium-generated plasma through a thin needle coated in ceramic onto a gold-plated target. They then measured the effects on the target’s surface using specialized, custom microscopic instruments, only to record some incredible results.

“We saw the surface cool first, then it would heat up,” said Hopkins.

Purple plasma beam firing at target
Credit: Tom Cogill / UVA

After repeated tests and observations of the phenomenon, the team determined that the plasma beam first strikes a micro-thin layer of carbon and water molecules clinging to the target, evaporating that coating almost instantly—much like the cooling you feel when you air-dry after getting out of a pool in the summer. Or, more simply, Hopkins is making the materials sweat.

“Evaporation of water molecules on the body requires energy; it takes energy from [the] body, and that’s why you feel cold,” said Hopkins. “In this case, the plasma rips off the absorbed [molecules], energy is released, and that’s what cools.”

Researchers measured a decrease in temperature of as much as a few degrees for a few microseconds—perhaps unimpressive on a human scale, but such a difference could be extremely helpful in delicate, highly advanced electronics and instruments. Going forward, Hopkins’ team is experimenting with various plasma gases, as well as their impact on different materials like copper and semiconductors. Eventually, the researchers envision a time when robotic arm attachments can pinpoint hotspots in devices to cool via tiny plasma shots from an electrode.

“This plasma jet is like a laser beam; it’s like a lightning bolt,” said Hopkins. “It can be extremely localized.”

The post Plasma beams could one day cool overheating electronics in a flash appeared first on Popular Science.

]]>
A new kind of thermal imaging sees the world in striking colors https://www.popsci.com/technology/hadar-thermal-camera/ Wed, 26 Jul 2023 16:00:00 +0000 https://www.popsci.com/?p=559135
Thermal vision of a home.
Thermal imaging (seen here) has been around for a while, but HADAR could up the game. Deposit Photos

Here's how 'heat-assisted detection and ranging,' aka HADAR, could revolutionize AI visualization systems.

The post A new kind of thermal imaging sees the world in striking colors appeared first on Popular Science.

]]>

A team of researchers has designed a completely new camera imaging system based on AI interpretations of heat signatures. Once refined, “heat-assisted detection and ranging,” aka HADAR, could one day revolutionize the way autonomous vehicles and robots perceive the world around them.

The image of a robot visualizing its surroundings solely through heat-signature cameras remains in the realm of sci-fi for a reason—basic physics. Although objects constantly emit thermal radiation, that radiation scatters and mixes with the emissions of everything nearby, resulting in heat vision’s trademark murky, textureless imagery, an issue understandably referred to as “ghosting.”

[Related: Stanford researchers want to give digital cameras better depth perception.]

Researchers at Purdue University and Michigan State University now appear to have solved this persistent problem using machine learning algorithms, according to their paper published in Nature on July 26. Employing AI trained specifically for the task, the team was able to derive the physical properties of objects and their surroundings from information captured by commercial infrared cameras. HADAR cuts through the optical clutter to detect temperature, material composition, and thermal radiation patterns—regardless of visual obstructions like fog, smoke, and darkness. HADAR’s depth and texture renderings thus create incredibly detailed, clear images no matter the time of day or environment.
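
The core inverse problem is separating how hot a surface is from what it’s made of, since both shape the infrared signal. Here is a toy illustration of that idea (a sketch, not the paper’s algorithm): given radiance in two spectral bands, search for the temperature-and-emissivity pair that best reproduces it, using Planck’s law.

```python
# Toy version of the temperature/material separation behind thermal ranging:
# recover (temperature, emissivity) from two-band radiance by brute force.
# An illustrative sketch, NOT the HADAR algorithm from the Nature paper.
import math

H, C, K_B = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance, W * sr^-1 * m^-3."""
    x = H * C / (wavelength_m * K_B * temp_k)
    return 2 * H * C**2 / wavelength_m**5 / (math.exp(x) - 1)

BANDS = (8e-6, 12e-6)  # two long-wave infrared bands, in meters

def observed(temp_k, emissivity):
    return [emissivity * planck(w, temp_k) for w in BANDS]

target = observed(305.0, 0.60)  # simulated two-band measurement

best, best_err = None, float("inf")
for t in range(250, 351):         # candidate temperatures, K
    for e_pct in range(10, 101):  # candidate emissivities, 0.10 to 1.00
        e = e_pct / 100
        err = sum((m - o) ** 2 for m, o in zip(observed(t, e), target))
        if err < best_err:
            best, best_err = (t, e), err

print(f"recovered: T = {best[0]} K, emissivity = {best[1]:.2f}")
```

Real scenes involve many bands, reflected light, and noise, which is where the trained machine learning models earn their keep.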

AI photo
HADAR versus ‘ghosted’ thermal imaging. Credit: Nature

“Active modalities like sonar, radar and LiDAR send out signals and detect the reflection to infer the presence/absence of any object and its distance. This gives extra information of the scene in addition to the camera vision, especially when the ambient illumination is poor,” Zubin Jacob, a professor of electrical and computer engineering at Purdue and article co-author, tells PopSci. “HADAR is fundamentally different, it uses invisible infrared radiation to reconstruct a night-time scene with clarity like daytime.”

One look at HADAR’s visual renderings makes it clear (so to speak) that the technology could soon become a vital part of AI systems within self-driving vehicles, autonomous robots, and even touchless security screenings at public events. That said, a few hurdles remain before cars can navigate 24/7 thanks to heat sensors—HADAR is currently expensive, requires real-time calibration, and is still susceptible to environmental interference that detracts from its accuracy. Researchers are confident these barriers can be overcome in the near future, allowing HADAR to find its way into everyday systems. In the meantime, the technology is already proving beneficial to at least one of its creators.

“To be honest, I am afraid of the dark. Who isn’t?” writes Jacob. “It is great to know that thermal photons carry vibrant information in the night similar to daytime. Someday we will have machine perception using HADAR which is so accurate that it does not distinguish between night and day.”

The post A new kind of thermal imaging sees the world in striking colors appeared first on Popular Science.

]]>
How epic wind tunnels on Earth make us better at flying through space https://www.popsci.com/science/nasa-wind-tunnel-langley/ Tue, 25 Jul 2023 10:00:00 +0000 https://www.popsci.com/?p=558839
The Tiltrotor Test Rig, a test bed developed by NASA to study advanced designs for rotor blades, is seen in the 40- by 80-foot test section of the National Full-Scale Aerodynamics Complex in November 2017. NASA/Ames Research Center/Dominic Hart

Experimental Mars spacecraft will face down the elements in NASA's newest wind tunnel.


Before a spacecraft lands on Mars, or a futuristic cargo plane soars above our cities, it has to be designed and rigorously tested in a wind tunnel. Even passenger airliners, such as Boeing’s 747 jets used by major airlines, are subject to such tests. These facilities allow engineers to “fly” aircraft and spacecraft just a few feet off the ground. NASA, which has a 100-year history of using the machines, is finally building a new one, updated for the 21st century—the agency’s first new wind tunnel in over 40 years.

The NASA Flight Dynamic Research Facility (FDRF), slated to open in 2025 at the Langley Research Center in Virginia, will be over 100 feet tall. NASA leaders think it’s going to be key for creating the spacecraft of the future. The agency plans to use the new wind tunnel to prepare for human spaceflight to the Moon and Mars, plus robotic missions to two solar system worlds with thick atmospheres: Venus and Titan, Saturn’s methane-rich moon. It will also be key for the next generation of Earth-bound aircraft, which NASA hopes to make more sustainable, in line with its goal of net-zero emissions by 2050.

“What we’re going to do with this facility is literally change the world,” said Clayton Turner, director of NASA Langley Research Center, in a press release from the facility’s groundbreaking ceremony. “The humble spirit of our researchers and this effort will allow us to reach for new heights, to reveal the unknown, for the betterment of humankind.” 

Wind tunnels push air past a stationary object, usually using huge fans, to simulate the motion of air around, over, and under flying craft. This allows engineers to tweak their designs based on what they see in the experiment, making vehicles more stable and aerodynamic. The wind tunnel is a safe place to try out new technologies, and a key step in testing the safety of any craft before a human jumps aboard. It’s also key for rockets and spacecraft, where engineers must ensure the vehicle can safely traverse a planet’s atmosphere. (Biologists have even used wind tunnels—though not NASA’s—to observe flying geese.)

Langley’s most recently built wind tunnel is the National Transonic Facility, constructed in 1980. That will remain in operation, but the FDRF will replace two existing wind tunnels, both nearly 80 years old: the 12-foot Low-Speed Spin Tunnel from 1939, and the 20-foot Vertical Spin Tunnel from 1940. The flying machines tested in the new facility will be beyond what the original builders could have dreamed. “We haven’t tested anything with a propeller on it in decades,” joked NASA Langley chief engineer Charles “Mike” Fremaux at a recent community lecture about the project.

[Related: How to build a massive wind farm]

The first NASA wind tunnel (which was the US government’s first wind tunnel) was built all the way back in 1921 at Langley. It was basically a glorified box with some powerful fans. Since then, the agency has built more than 40 wind tunnels, many with specialized purposes. Some are tiny, meant only for miniature models, and some are large enough to fit a whole jet. Each produces a different temperature, pressure, and speed of wind, meant to simulate the different conditions a craft might encounter in the real world. Some wind tunnels can move air at over 4,000 miles per hour, significantly quicker than a 747’s usual cruising speed of around 600 mph.
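
For a sense of scale, here’s a quick back-of-envelope conversion of those speeds into Mach numbers. It assumes the rule-of-thumb sea-level speed of sound of roughly 760 mph, which shifts with temperature and altitude:

```python
# Rough Mach numbers for the speeds quoted above. Assumes a rule-of-thumb
# sea-level speed of sound of about 760 mph; the real value shifts with
# temperature and altitude.
SPEED_OF_SOUND_MPH = 760

for mph in (600, 4000):  # 747 cruise vs. the fastest tunnels
    print(f"{mph} mph is roughly Mach {mph / SPEED_OF_SOUND_MPH:.1f}")
# 600 mph ~ Mach 0.8; 4,000 mph ~ Mach 5.3, well into hypersonic territory
```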

Many famous missions have started their journeys in a wind tunnel. The Curiosity rover’s parachute, for example, was first tested in the National Full-Scale Aerodynamics Complex at NASA Ames in California, long before it ballooned open in the Red Planet’s atmosphere. In the past few years, key parts of NASA’s Artemis missions, which aim to return Americans to the moon, including the Orion crew capsule and the SLS rocket, were tested in wind tunnels.

An early parachute design for the Mars Science Laboratory landing system was tested in October 2007 at the National Full-Scale Aerodynamics Complex wind tunnel. NASA/JPL/Pioneer Aerospace

The new wind tunnel at the FDRF will be more efficient than past facilities, cutting down on costs. Plus, it’ll be safer for the staff running the wind tunnel tests, who used to run the risk of getting sucked into the machine as they deployed models. “Just like we do now…a very skilled technician is going to launch the models by hand. That’s not a joke,” said Fremaux in his presentation. In the past, there have only been some minor injuries, and most accidents just damage the facility itself. But now there will be more fail-safes to minimize the risks.

It might even pave the way for flying cars by testing the tech for vertical takeoff, like Back to the Future’s hover cars or a classic Jetsons-style flying machine. Those are far-out ideas, but they’d never be able to take off without the help of the time-tested wind tunnel.

The post How epic wind tunnels on Earth make us better at flying through space appeared first on Popular Science.

The rocky history of a missing 26,000-foot Himalayan peak https://www.popsci.com/science/himalaya-mountain-landslide/ Thu, 06 Jul 2023 16:30:00 +0000 https://www.popsci.com/?p=553788
The base camp at Annapurna in Nepal, one of the tallest mountains on Earth. Depositphotos

A massive mountain summit crumbled around 1190 CE, leaving evidence in the plains below.


Earth is home to 14 “eight-thousanders,” summits that top off at more than 8,000 meters, or 26,247 feet, above sea level. Most of these grand mountains tower over the Himalayas, the highest place in the world; the rest rise in the neighboring Karakoram.

But our planet is dynamic—could there have been additional peaks like these, since lost? “We wanted to know whether, 830 years ago, the Earth and the Himalayas had one more,” says Jérôme Lavé, a geomorphologist at the National Centre for Scientific Research (CNRS) and the University of Lorraine in France.

The answer, according to Lavé and his colleagues, appears to be yes. In a new paper, published in the journal Nature on July 6, they’ve found evidence of an ancient landslide that reshaped South Asia’s geography—and linked that to the collapse of a peak that would have once been one of the tallest mountains on Earth.

Lavé says his team first spotted the fingerprints of this medieval landslide not in the Himalayas, but far to the south, near the India-Nepal border, in the flat plains around the Narayani River.

These plains are prime land for looking for missing mountains, at least if you are a geomorphologist—a scientist who studies the evolution of the land under our feet (or, in this case, the land towering well above everyone but the hardiest mountaineers). Rivers like the Narayani carry sediments downslope, and those sediments can reveal much about the mountains where they originated.

For instance, Lavé and colleagues found medieval sediments with a carbonate content five times higher than average. This mineral fingerprint indicated that something had disrupted the Narayani’s flow. “A giant landslide occurring…seemed to me the most obvious avenue to explore,” Lavé says.

[Related: How to start mountain biking this summer]

They began plying uphill to find out more. The Narayani flows through the city of Pokhara, nestled in a valley less than 3,000 feet above sea level. But this is one of the steepest landscapes on Earth: looming over Pokhara is the Annapurna massif, a section of the Himalayas. (The massif’s crown jewel is its tallest peak: also named Annapurna, a proud member of the eight-thousand club.)

By studying images of the Annapurna massif, the team found geographic signs of an old landslide. In one subsection of the massif, called the Sabche cirque, they spotted strange features like pillars and pinnacles, markers of erosion.

The authors needed more samples. Collecting fragments from the plains was one thing; gathering wood and rock from the Sabche cirque was another: the team had to venture up into the massif by helicopter. From these parts, they began to build the hazy image of a mountain that existed, long ago, until one catastrophic day around 1190 CE.

“They really managed to capture this event…both at the source as well as at the far sink of these sediments,” says Wolfgang Schwanghart, a geomorphologist at the University of Potsdam in Germany, who was not an author of the paper.

This is what Lavé and colleagues think happened: There once rose a second eight-thousander from the Annapurna massif. Then, it collapsed. The resulting rockslide thoroughly eroded the Himalayan landscape and poured sediment into the valley that now contains Pokhara, from where waters carried it downstream. This event played a major role in eroding the rock, reshaping the massif closer to what we see today.

The paper suggests that large, dramatic landslides may be a significant driver of erosion at high altitudes like this. “This is a mechanism that still needs to be further investigated, but this hypothesis may open new insights,” says Odin Marc, a geomorphologist at CNRS who was also not involved in the research.

What caused the mountain to collapse isn’t clear. A warming medieval climate might have melted mountaintop permafrost that otherwise strengthens the peak. Schwanghart, who has also studied the region’s geology, believes the answer may be earthquakes. He says the chronology indicates that three earthquakes struck Nepal around the time that Lavé and colleagues suggested the mountain collapsed, and one of them may have caused the mountain to topple in the first place.

[Related: There might be underground ‘mountains’ near Earth’s core]

Whatever happened, the new report reinforces the fact that mountains are constantly changing environments. We might see summits as eternal fixtures on the landscape, but if anything, they are the complete opposite.

After all, Himalayan landslides aren’t consigned to the past. In 2021, an avalanche and rockslide careened down a mountainside in Uttarakhand, India, around 300 miles northwest of Annapurna. The disaster burst a dam, and the resulting flood left some 200 people dead or missing.

If such a rockslide were to happen to Pokhara today, the results could be devastating. Pokhara is Nepal’s second-largest city (after the capital Kathmandu) and home to more than half a million people. Moreover, globally, evidence is mounting that a warming climate exacerbates the risk of mountain landslides. Just last month, the Alpine summit of Fluchthorn, nestled on the Swiss-Austrian border, abruptly collapsed in an event that scientists ascribed to thawing permafrost.

Mountain collapses like these may be more common than we realize. “In Alaska, you would find similar events—but often they go unnoticed, because there is no one around,” says Schwanghart.

The post The rocky history of a missing 26,000-foot Himalayan peak appeared first on Popular Science.

Cold fusion is making a scientific comeback https://www.popsci.com/science/cold-fusion-low-energy-nuclear-reaction/ Mon, 03 Jul 2023 18:00:00 +0000 https://www.popsci.com/?p=552986
The ringed building is the European Synchrotron Radiation Facility in France, where LENR researchers are studying palladium nanoparticles. ESRF/P. Jayet

A US agency is funding low-energy nuclear reactions to the tune of $10 million.


Earlier this year, ARPA-E, a US government agency dedicated to funding advanced energy research, announced a handful of grants for a field it calls “low-energy nuclear reactions,” or LENR. Most scientists likely didn’t take notice of the news. But, for a small group of them, the announcement marked vindication for their specialty: cold fusion.

Cold fusion, better known by its practitioners as LENR, is the science—or, perhaps, the art—of making atomic nuclei merge and, ideally, harnessing the resultant energy. All of this happens without the incredible temperatures, on the scale of millions of degrees, that you need for “traditional” fusion. In a dream world, successful cold fusion could provide us with a boundless supply of clean, easily attainable energy.

Tantalizing as it sounds, for the past 30 years, cold fusion has largely been a forgotten specter of one of science’s most notorious controversies, when a pair of chemists in 1989 claimed to achieve the feat—which no one else could replicate. There is still no generally accepted theory that supports cold fusion; many still doubt that it’s possible at all. But those physicists and engineers who work on LENR believe the new grants are a sign that their field is being taken seriously after decades in the wilderness.

“It got a bad start and a bad reputation,” says David Nagel, an engineer at George Washington University, “and then, over the intervening years, the evidence has piled up.”

[Related: Physicists want to create energy like stars do. These two ways are their best shot.]

Igniting fusion involves pressing the hearts of atoms together, creating larger nuclei and a fountain of energy. This isn’t easy. The protons inside a nucleus give it a positive charge, and like-charged nuclei electrically repel each other. Physicists must force the atoms to crash together anyway. 

Normally, overcoming this electrical repulsion (physicists call it the Coulomb barrier) takes an immense amount of energy, which is why stars, where fusion happens naturally, and Earthbound experiments reach extreme heat. But what if there were another, lower-temperature way?

Scientists had been theorizing such methods since the early 20th century, and they’d found a few tedious, extremely inefficient ways. But in the 1980s, two chemists thought they’d made one method work to great success. 

The duo, Martin Fleischmann and Stanley Pons, had placed the precious metal palladium in a bath of heavy water: a form of H2O whose hydrogen atoms carry an extra neutron, an isotope known as deuterium, commonly used in nuclear science. When Fleischmann and Pons switched on an electrical current through their apparatus and left it running, they began to see abrupt heat spikes, or so they claimed, and particles like neutrons.

Those heat spikes and particles, according to them, could not be explained by any chemical process. What could explain them were the heavy water’s deuterium nuclei fusing, just as they would in a star.

If Fleischmann and Pons were right, fusion could be achievable at room temperature in a relatively basic chemistry lab. If you think that sounds too good to be true, you’re far from alone. When the pair announced their results in 1989, what followed was one of the most spectacular firestorms in the history of modern science. Scientist after scientist tried to recreate their experiment, and no one could reliably replicate their results.

[Related: Nuclear power’s biggest problem could have a small solution]

Pons and Fleischmann are remembered as fraudsters. It likely didn’t help that they were chemists trying to make a mark on a field dominated by physicists. Whatever they had seen, “cold fusion” found itself at respectable science’s margins. 

Still, in the shadows, LENR experiments continued. (Some researchers tried variations on Fleischmann and Pons’ themes. Others, especially in Japan, sought LENR as a means of cleaning up nuclear waste by transforming radioactive isotopes into less dangerous ones.) A few experiments showed oddities such as excess heat or alpha particles—anomalies that might best be explained if atomic nuclei were reacting behind the scenes.

“The LENR field has somehow, miraculously, due to the convictions of all these people involved, stayed alive and has been chugging along for 30 years,” says Jonah Messinger, an analyst at the Breakthrough Institute think tank and a graduate student at MIT.

Fleischmann and Pons’ fatal flaw—that their results could not be replicated—continues to cast a pall over the field. Even some later experiments that seemed to show success could not be replicated. But this does not deter LENR’s current proponents. “Science has a reproducibility problem all the time,” says Florian Metzler, a nuclear scientist at MIT.

In the absence of a large official push, the private sector had provided much of LENR’s backing. In the late 2010s, for instance, Google poured several million dollars into cold fusion research to limited success. But government funding agencies are now starting to pay attention. The ARPA-E program joins European Union projects, HERMES and CleanHME, which both kicked off in 2020. (Messinger and Metzler are members of an MIT team that will receive ARPA-E grant funds.)

By the standards of other energy research funding, none of the grants are particularly eye-watering. The European Union programs and ARPA-E total up to around $10 million each: a pittance compared to the more than $1 billion the US government plans to spend in 2023 on mainstream fusion.

But that money will be used in important ways, its proponents say. The field has two pressing priorities. One is to attract attention with a high-quality research paper that clearly demonstrates an anomaly, ideally published in a reputable journal like Nature or Science. “Then, I think, there will be a big influx of resources and people,” says Metzler.

A second, longer-term goal is to explain how cold fusion might work. The laws of physics, as scientists understand them today, do not have a consensus answer for why cold fusion could happen at all.

Metzler doesn’t see that open question as a problem. “Sometimes people have made these arguments: ‘Oh, cold fusion contradicts established physics,’ or something like that,” he says. But he believes there are many unanswered questions in nuclear physics, especially with larger atoms. “We have an enormous amount of ignorance when it comes to nuclear systems,” he says.

Yet answers would have major benefits, other experts argue. “As long as it’s not understood, a lot of people in the scientific community are put off,” says Nagel. “They’re not willing to pay any attention to it.”

It is, of course, entirely possible that cold fusion is an illusion. If that’s the case, then ARPA-E’s grants may give researchers more proof that nothing is there. But it’s also possible that something is at work behind the scenes.

And, LENR proponents say, the Fleischmann and Pons saga is now fading as younger researchers enter the field with no memory of 1989. Perhaps that will finally be what lets LENR emerge from the pair’s shadow. “If there is a nuclear anomaly that occurs,” says Messinger, “my hope is that the wider physics community is ready to listen.”

The post Cold fusion is making a scientific comeback appeared first on Popular Science.

Euclid space telescope begins its search through billions of galaxies for dark matter and energy https://www.popsci.com/science/euclid-space-telescope-dark-matter/ Fri, 30 Jun 2023 15:43:20 +0000 https://www.popsci.com/?p=552596
On June 23, Euclid was secured to the adaptor of a SpaceX Falcon 9 rocket. The new ESA cosmological mission is getting ready for lift-off with a target launch date of July 1 from Cape Canaveral in Florida. SpaceX

The two-ton telescope will take up orbit near JWST to help us decipher the expanding universe.


It’s an exhilarating and sobering thought: All the planets, galaxies, starlight, and other objects that we can see and measure in the universe make up just 5 percent of existence. The other 95 percent is eaten up by two enigmas, dark matter and dark energy, known to scientists by their apparent gravitational effects on the surrounding universe but not directly detectable.

On July 1, however, a new European Space Agency mission could help scientists get a little closer to solving the twin mysteries of dark matter and dark energy. The Euclid space telescope will take flight from Cape Canaveral Space Force Station no earlier than 11:11 a.m. EDT atop a SpaceX Falcon 9 rocket. NASA will live stream the launch beginning at 10:30 a.m.

Following blastoff, Euclid will take about 30 days to reach its operational orbit around Lagrange Point 2 (L2), an area about a million miles from Earth in the direction of the outer solar system, where Euclid can maintain a constant position relative to Earth. The James Webb Space Telescope also orbits L2.

[Related: A super pressure balloon built by students is cruising Earth’s skies to find dark matter]

Once on location and operational, Euclid will begin what is expected to be a six-year mission where it will survey around a third of the sky, carefully measuring the shapes of billions of galaxies up to 10 billion light-years away to catch a glimpse at the ways dark matter and dark energy shape our cosmos. To do that, the roughly 4,600-pound space telescope will use its four-foot-wide primary mirror to collect and focus visible and near-infrared wavelengths of light on two instruments: the VISible instrument (VIS) camera and the Near Infrared Spectrometer and Photometer (NISP), which helps determine the distance to faraway galaxies.

“The awesomeness of how many galaxies Euclid will be able to measure and at what amazing precision—it’s just an amazing feat of human engineering,” says Lindley Winslow, a professor of physics at MIT who designs experiments to detect dark matter, but is not directly involved with this mission. “The fact that we can do precision cosmology is awesome.”

The European Space Agency’s Cosmic Vision aims to better define dark energy, dark matter, and their role in universal expansion. NASA/ESA/ESO/W. Freudling (ST-ECF)

Cosmologists, who study the formation, evolution, and structure of the universe, have a model called Lambda-CDM that might explain why everything is the way it is. Lambda is the cosmological constant, the force that appears to be causing the universe to expand at an accelerating rate and which scientists believe is related to or manifests in mysterious dark energy. CDM stands for “cold dark matter,” which interacts with normal matter gravitationally.

“Those are the two ingredients that have sculpted the universe that we know,” Winslow says. Dark energy drives universal expansion, while “in the early universe, it was this cold dark matter that pulled visible matter that we see now into potential wells, that then allowed it to contract and form galaxies and stars.”

Lambda-CDM helps us make sense of a lot of the large-scale universe, according to Winslow, but it doesn’t tell us how it fits together with the theory that explains how the small-scale universe works: the Standard Model of particle physics. Euclid is one of several attempts to learn more about how the universe expands and revise Lambda-CDM.
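
For readers who like their cosmology in code, those two ingredients can be written down compactly. Below is a minimal sketch of the expansion rate in the simplest flat Lambda-CDM model, using round, illustrative parameter values rather than anything Euclid has measured:

```python
import numpy as np

# Minimal flat Lambda-CDM expansion rate (radiation ignored). The
# parameter values are round, illustrative numbers, not Euclid results.
H0 = 70.0                          # Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_LAMBDA = 0.3, 0.7   # matter vs. cosmological constant

def hubble_rate(a):
    """Expansion rate H(a) at scale factor a (a = 1 today)."""
    return H0 * np.sqrt(OMEGA_M * a**-3 + OMEGA_LAMBDA)

for a in (0.5, 1.0, 2.0):
    print(f"a = {a}: H = {hubble_rate(a):.1f} km/s/Mpc")
# Matter dominates when a is small; Lambda takes over as a grows, which
# is why the expansion of the universe is now accelerating.
```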

“What we’re really interested in is, can we get more data?” Winslow says. “And can we find something that Lambda-CDM doesn’t explain?”

To hunt for that evidence, Euclid will use a technique known as weak gravitational lensing. This is similar to the strong gravitational lensing technique employed by JWST, where the mass of a foreground object, such as a galaxy cluster, is used to magnify a more distant background object. With weak gravitational lensing, scientists are more interested in the way the mass of the foreground objects, including dark matter, creates subtle distortions in the shape of background galaxies.

“We’re using the background galaxies to learn about the matter distribution in the foreground,” says Rachel Mandelbaum, an astrophysicist at Carnegie Mellon University who is a member of the US portion of the Euclid Consortium, a group of thousands of scientists and engineers. “We’re trying to measure the effects of all of the matter between the distant galaxies and us.”
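
A toy simulation shows why averaging over huge numbers of galaxies is the trick. In the sketch below, a tiny, made-up shear signal rides on top of much larger random galaxy shapes, and it only becomes visible once many galaxies are combined. This illustrates the statistics, not Euclid’s actual pipeline:

```python
import numpy as np

# Toy statistics of weak lensing, not Euclid's pipeline: a small,
# made-up shear rides on large random intrinsic galaxy shapes, and only
# averaging over many galaxies reveals it.
rng = np.random.default_rng(42)

TRUE_SHEAR = 0.02                                  # illustrative value
intrinsic = rng.normal(0.0, 0.3, size=100_000)     # random shapes
observed = intrinsic + TRUE_SHEAR                  # one shape component

print(f"one galaxy:      {observed[0]:+.3f}")      # noise dominates
print(f"100,000 average: {observed.mean():+.4f}")  # the shear emerges
```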

[Related: Astronomers used dead stars to detect a new form of ripple in space-time]

This method will also help them measure the effects of dark energy, Mandelbaum adds. Since dark matter helps all other forms of matter clump together, and dark energy counteracts the gravitational effects of dark matter, by measuring how clumpy matter is over a range of distance from Earth, “we can measure how cosmic structure is growing and use that to infer the effects of dark energy on the matter distribution.”

Euclid will not be the first large sky survey using weak gravitational lensing to search for signs of dark matter and dark energy, but it will be the first survey of its kind in orbit. Previous studies, such as the Dark Energy Survey, have all been conducted by ground-based telescopes, according to Mandelbaum. Being up in space offers a different advantage.

“Ground-based telescopes see blurrier images than space-based telescopes because of the effects of the Earth’s atmosphere on the light of distant stars and galaxies,” Mandelbaum says. Euclid’s view from L2 will be helpful when “we’re trying to measure these very subtle distortions in the shapes of galaxies.” 

But dark matter and dark energy are tough enigmas to crack, and scientists can use all the data they can collect, from as many angles as possible. The Vera Rubin Observatory, currently under construction in Chile and scheduled to open in 2025, will host the ground-based Legacy Survey of Space and Time and scan the entire southern sky for similar phenomena. Efforts like these will help ensure the reproducibility of findings by Euclid, and vice versa, according to Mandelbaum.

“Euclid is a really exciting experiment within a broader landscape of surveys that are trying to get at the same science, but with very different datasets that have different assumptions,” she says. “They’re going to be doing somewhat different things that give us a different approach to answering these really fundamental questions about the universe.”

The post Euclid space telescope begins its search through billions of galaxies for dark matter and energy appeared first on Popular Science.

The Milky Way’s ghostly neutrinos have finally been found https://www.popsci.com/science/neutrinos-milky-way-detection/ Thu, 29 Jun 2023 18:00:00 +0000 https://www.popsci.com/?p=552344
The IceCube Neutrino Observatory looks for neutrinos exiting Earth at the South Pole. Yuya Makino, IceCube/NSF

There should be plenty of these subatomic particles emerging from our galaxy, but until now they'd never been detected.


Neutrinos fill the universe, but you wouldn’t know that with your eyes. These subatomic particles are small—so small that physicists once thought they had no mass at all—and they have no electric charge. Even though trillions of neutrinos enter your body every second, the vast majority of them pass through without a trace.

Yet astronomers crave glimpses of neutrinos. That’s because neutrinos are products of cosmic rays, cryptic high-energy particles constantly streaming throughout the universe from all directions. Astronomers aren’t sure where cosmic rays come from, how they behave in flight, or—in many cases—what they’re made of.

Neutrinos can help researchers find out, but they have to see those particles first. Until now, astronomers had only confirmed that they’d found neutrinos originating from outside our galaxy. But in a paper published in Science today, a global team of astronomers announced a long-sought goal: the first neutrinos that hail from the plane of the Milky Way itself.

“This is a very important discovery,” says Luigi Antonio Fusco, an astronomer at the University of Salerno in Italy, who was not an author of the paper.

Seeing these ghosts is a tricky task. A neutrino observatory looks nothing like a telescope or a radio dish. Instead, the paper’s authors worked with an array of holes drilled more than a mile into the South Pole ice: the IceCube Neutrino Observatory. Down those shafts, deep in the frozen darkness, IceCube’s detectors watch for light trails from the particles that neutrinos spawn when they collide with matter.

[Related: This ghostly particle may be why dark matter keeps eluding us]

In water or ice, light travels at only around three-quarters of its vacuum speed. Particles can move through those substances quicker than that (though never faster than the speed of light in a vacuum). If they do, they shoot out cones of bright light called Cherenkov radiation, the optical equivalent of a sonic boom. Some neutrino observatories, such as ANTARES at the bottom of the Mediterranean, look for Cherenkov radiation in water. IceCube uses, well, ice.
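
The numbers behind that threshold are easy to check. The sketch below assumes a textbook refractive index for deep ice of about 1.31:

```python
# The numbers behind the Cherenkov threshold. Uses a textbook refractive
# index for deep ice of about 1.31; the exact value varies with depth
# and wavelength.
C = 299_792_458  # speed of light in a vacuum, m/s
N_ICE = 1.31

print(f"light in ice moves at {1 / N_ICE:.0%} of c")  # ~76%

# A charged particle radiates when its speed beta (as a fraction of c)
# exceeds 1/n:
beta_min = 1 / N_ICE
gamma_min = 1 / (1 - beta_min**2) ** 0.5
print(f"threshold: beta > {beta_min:.2f} (Lorentz factor > {gamma_min:.2f})")
```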

Even after scientists worked out how to find those neutrinos, they faced another problem: noise. Neutrino detectors constantly pick up the aftermath of cosmic rays careening into the upper atmosphere, which pumps showers of subatomic particles down into the planet. How, then, do you find the needles of cosmic neutrinos in that high-energy haystack?

The answer is by examining the direction. Neutrinos from afar have higher energies and can more easily shoot through our planet. If your detector spots a neutrino that seems to be coming from the ground, there’s a healthy chance it came from space and passed through Earth before striking your detector.

In 2013, IceCube detected the first cosmic neutrinos. In the years since, they’ve been able to narrow neutrino sources down to individual galaxies. “We have been detecting extragalactic neutrinos for 10 years now,” says Francis Halzen, a physicist at the University of Wisconsin and a member of the IceCube collaboration.

[Related: Dark energy fills the cosmos. But what is it?]

But one important source was missing: neutrinos from within our own galaxy. Astronomers believe that there should be plenty of neutrinos emerging from the Milky Way’s plane. IceCube scientists had tried to find those neutrinos before, but they’d never been able to confidently pinpoint their origins.

“What changed in this analysis is that we really improved the methods that we’re using,” says Mirco Hünnefeld, a scientist at the Technical University of Dortmund in Germany and a member of the IceCube collaboration.

The IceCube team sharpened their artificial intelligence tools. Today’s neural networks can pluck out neutrinos from the noise with keener discretion than ever before. Astronomers analyzed more than 59,000 IceCube detections collected between 2011 and 2021 and compared them against predicted models of neutrino sources.

As a result, they’re confident that their detections can be explained by neutrinos streaming off of the Milky Way’s flat plane and, in particular, the galactic center.

Now, astronomers want to narrow down the points in the sky where those neutrinos actually come from. More sensitive neutrino detectors can help with that task. IceCube will get upgrades later this decade, and a new generation of neutrino observatories—such as KM3NeT in the Mediterranean and GVD under Russia’s Lake Baikal—will expand astronomers’ neutrino-seeing toolbox.

“The IceCube signal is kind of a diffuse haze,” says Fusco. “With the next generation, I think we can really push to try to point out which are the individual sources of this signal.” If astronomers can do that, they can learn more about the sources that create cosmic rays, which could potentially be supernovae.

“Only cosmic rays make neutrinos, so if you see neutrinos, you see cosmic ray sources,” says Halzen. “The goal of neutrino physics, the prime goal, is to solve the 100-year-old cosmic ray problem.”

“That will help us disentangle a lot of the mysteries out there,” says Hünnefeld, “that we couldn’t do before.”

The post The Milky Way’s ghostly neutrinos have finally been found appeared first on Popular Science.

Astronomers used dead stars to detect a new form of ripple in space-time https://www.popsci.com/science/gravitational-waves-nanograv/ Thu, 29 Jun 2023 00:00:00 +0000 https://www.popsci.com/?p=551972
Two giant black holes, in the upper left, collide and distort the bright pulsars around them in this illustration. NANOGrav/Sonoma State University/Aurore Simonnet

Longer gravitational waves from colliding black holes could help explain why galaxies grow and change.


Today, a humongous team of astronomers called the NANOGrav Collaboration announced something remarkable: the first evidence for a background hum of gravitational waves that permeates our universe. The research group, based in the US and Canada, used dead stars across the galaxy as a Milky-Way-sized measurement device to find these distorting undulations.

“We’ve been on a mission for the last 15 years” to find this background, said Stephen Taylor, an astronomer at Vanderbilt University and chair of NANOGrav, in a press briefing. “And we’re very happy to announce that our hard work has paid off.” These waves can reveal the kinds of black holes scattered across the cosmos, which will help astronomers figure out how galaxies grow and change.

Gravitational waves are ripples in the fabric of space-time itself. Just like telescopes are specialized for different parts of the electromagnetic spectrum, gravitational wave experiments are sensitive to different wavelengths, too. The LIGO experiment, which announced the first direct detection of gravitational waves in 2016, can detect shorter waves, like those made when two star-sized black holes smash together. But it can’t feel longer, lower-frequency waves, which some astronomers think are key to unlocking the universe’s history.

“If we want to observe the largest black holes to further understand galaxy evolution, as well as test theories at the frontiers of modern physics, we need to be able to observe low-frequency gravitational waves,” says Vanderbilt University astronomer William Lamb, who is part of the NANOGrav team.

[Related: Astronomers recorded a whopping 35 gravitational wave events in just 5 months]

Such waves come from the most massive black holes, which should be merging all across the universe to create background noise, like cosmic TV static. For a while, astronomers worried these monster black holes could never get close enough to combine into one galactic center, which would be a big problem for our understanding of galaxies’ evolution—and would result in a quieter gravitational wave background. Since NANOGrav has heard the signal, now astronomers know these black holes do collide, and they can figure out the details of how galaxies merge. Gravitational waves from right after the Big Bang might also contribute to this background, offering one way to probe the first seconds of the universe.

To detect such a low-frequency signal, astronomers needed an experiment larger than the entire Earth—possibly something the size of the whole galaxy. Luckily, nature provided just the tool: pulsars. Pulsars are the dead cores of the heaviest stars, which spew out jets of light and spin unbelievably fast. Like watching the beams from a lighthouse, we see them pulse brighter when their jet spins toward us—and somehow nature is the best lighthouse keeper, since pulsars are as predictable in their timing as atomic clocks.

When these pulsars ride the swell of a gravitational wave, though, the space-time ripple distorts this precision. Pulsar timing arrays (PTAs), collections of radio telescopes that monitor pulsars across the galaxy, can measure these minute deviations in the pulsars’ otherwise super-accurate clocks. Together, the many tiny shifts in pulsars’ periods paint a picture of how a long, low-frequency gravitational wave propagates throughout the galaxy. To make these measurements, NANOGrav used telescopes across North America: Puerto Rico’s famed Arecibo Telescope (which has since collapsed), the Green Bank Telescope in West Virginia, the Very Large Array in New Mexico, and the CHIME experiment in Canada.
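
The telltale pattern such arrays hunt for has a tidy mathematical form: the Hellings-Downs curve, the standard prediction for how strongly two pulsars’ timing wobbles should correlate given the angle between them on the sky. Here is a short sketch of that published formula (not NANOGrav’s analysis code):

```python
import numpy as np

# The Hellings-Downs curve: the standard textbook prediction for how
# strongly two pulsars' timing deviations should correlate, given the
# angle between them on the sky. (A sketch of the published formula,
# not NANOGrav's analysis code.)
def hellings_downs(zeta_rad):
    x = (1.0 - np.cos(zeta_rad)) / 2.0
    return 1.5 * x * np.log(x) - x / 4.0 + 0.5

for deg in (10, 60, 90, 150, 180):
    corr = hellings_downs(np.radians(deg))
    print(f"{deg:3d} degrees apart: expected correlation {corr:+.2f}")
# Nearby pulsars wobble together; pairs around 90 degrees apart are
# slightly anti-correlated. Finding this pattern in the data is the
# smoking gun of a gravitational wave background.
```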

The Green Bank Telescope, in West Virginia, was one of several observatories used in the new experiment. Jay Young for Green Bank Observatory

Pulsar timing “is fundamentally different from how LIGO detects gravitational waves,” says University of Mississippi astronomer Sumeet Kulkarni, who was not involved in the new work. “What I find particularly amazing about this discovery is the coordination involved” between the numerous telescopes and contributors, he adds.

This new result uses 15 years of data from NANOGrav’s PTAs, but it isn’t quite robust enough for the team to call it an official detection. Instead, the researchers are using the term “strong evidence.” But because the signal builds up with time, they’re confident that they’ll have a clear-cut detection in a few years. “We’ll be able to produce better and better maps of the gravitational wave sky,” said Luke Kelley, a University of California, Berkeley astronomer and NANOGrav team member, in a press briefing.

[Related: Gravitational waves just showed us something even cooler than black holes]

The full implications of this detection are yet to be understood, and studies of these low-frequency waves are only beginning. Members of the International Pulsar Timing Array have similar data from across the world, including Australia, China, and India. These measurements will be even more powerful when astronomers bring them all together, possibly within the next year.

The post Astronomers used dead stars to detect a new form of ripple in space-time appeared first on Popular Science.

Microsoft sets its sights on a quantum supercomputer https://www.popsci.com/technology/microsoft-quantum-supercomputer/ Thu, 22 Jun 2023 22:00:00 +0000 https://www.popsci.com/?p=550613
Topological qubits are part of Microsoft's quantum strategy. Microsoft Azure

It’s also pioneering a new type of "topological" qubit.


Earlier this week, Microsoft said that it had achieved a big milestone toward building its version of a quantum supercomputer—a massive machine that can tackle certain problems better than classical computers owing to the quirks of quantum mechanics.

These quirks would allow the computer to represent information as zero, one, or a combination of both. The unit of computing, in this case, would no longer be a classical zero or one bit. It would be a qubit—the quantum equivalent of the classical bit.  
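
To make that concrete, here is a minimal sketch of a qubit as a two-component state vector. This is generic textbook quantum mechanics, not anything specific to Microsoft’s hardware:

```python
import numpy as np

# A single qubit as a two-component state vector: generic textbook
# quantum mechanics, nothing specific to Microsoft's hardware.
zero = np.array([1, 0], dtype=complex)  # the |0> state
one = np.array([0, 1], dtype=complex)   # the |1> state

psi = (zero + one) / np.sqrt(2)  # an equal superposition of both

# Measurement probabilities are squared amplitudes and must sum to 1.
probs = np.abs(psi) ** 2
print(probs)        # [0.5 0.5]
print(probs.sum())  # 1.0
```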

There are many ways to build a quantum computer. Google and IBM are using an approach with superconducting qubits, and some smaller companies are looking into neutral-atom quantum computers. Microsoft’s strategy is different. It wants to work with a new kind of qubit, the topological qubit, which involves some innovative and experimental physics. To make one, you need semiconductor and superconductor materials, electrostatic gates, exotic quasiparticles known as Majorana zero modes, nanowires, and a magnetic field.

[Related: Quantum computers are starting to become more useful]

The milestone that Microsoft claims to have passed: in a peer-reviewed paper in the journal Physical Review B, its researchers showed that their topological qubits can act as small, fast, and reliable units for computing. (A full video explaining how the physics behind topological qubits works is available in a company blog post.)

Microsoft’s next steps will involve integrating these units with controllable hardware and a less error-prone system. The qubits currently being researched tend to be fragile and finicky, prompting scientists to come up with methods to correct, or work around, the errors that arise when interference from the environment knocks the qubits out of their quantum states.

Alongside ensuring that the system can come together cohesively, Microsoft researchers will also improve the qubits themselves so they can achieve properties like entanglement through a process called “braiding.” This characteristic will allow the device to take on more complex operations. 

Then, the company will work to scale up the number of qubits that are linked together. Microsoft told TechCrunch that it aims to have a full quantum supercomputer made of these topological qubits within 10 years.

The post Microsoft sets its sights on a quantum supercomputer appeared first on Popular Science.

The secret to better folding phones might hinge on mussels https://www.popsci.com/science/mussels-hinge-engineering/ Thu, 22 Jun 2023 18:00:00 +0000 https://www.popsci.com/?p=550459
The shell of a cockscomb pearl mussel, which pops open thanks to a protein cushion and biological wires. Prof. Yu's Team

Materials scientists are studying why these shellfish are so great at opening and closing.


The function of folding phones and handheld game consoles like the Nintendo 3DS hinges on, well, their hinges. Open and shut such gadgets enough times that the hinges start to fail, and you might find yourself wishing for a far better joint.

As it happens, the animal realm may have fulfilled that wish. Sources of inspiration take the form of bivalves: clams, oysters, cockles, mussels, and a whole host of other two-shelled organisms. Over a bivalve’s life, its shells can open and shut hundreds of thousands of times, seemingly without taking damage.

Now, a team of biologists and materials scientists have worked together to examine the case of one particular bivalve, Cristaria plicata, the cockscomb pearl mussel. In their paper, published in the journal Science today, they’ve not only reverse-engineered a mussel’s hinge, they’ve recreated it with glass fibers and other modern materials.

Cockscomb pearl mussels, the study’s starring bivalves, are found in fresh waters across northeast Asia. Ancient Chinese craftspeople grew pearls within this mussel’s shells. By opening the mussel, inserting a small object like a bead or a tiny Buddha inside, closing the animal, and letting it be for a year, they could retrieve the object afterward—now coated in iridescent mother-of-pearl.

Mother-of-pearl, also known as nacre, has long drawn the attention of materials scientists for far more than its beauty. Although nacre is made from a brittle calcium carbonate mineral called aragonite, its structure—aragonite “bricks” glued together by a protein “mortar”—gives the substance incredible strength and resilience.

“A lot of researchers have replicated various aspects of its brick-and-mortar structure to try to create stiff, tough, and strong materials,” says Rachel Crane, a biomechanist at the University of California, Davis, who was not an author of the new paper.

[Related: This new material is as strong as steel—but lighter]

In the process of studying nacre, some scientists couldn’t help notice the mussel’s hinge. Despite also being made from the same brittle aragonite, the hinge both bends and stretches without breaking. “This exceptional performance impressed us greatly, and we decided to figure out the underlying reason,” says Shu-Hong Yu, a materials scientist at the University of Science and Technology of China, and one of the paper’s authors.

Biologists had studied hinges and the differences between them to classify bivalve species as early as the 19th century. But they didn’t have the technology to peel apart these living joints’ inner structures. Yu and his colleagues, though, extracted the hinges and examined them under a battery of microscopes and analyzers.

They found that the bivalve’s hinge consists of two key parts. The first is at the hinge’s core: a folding part shaped like a paper fan. The fan’s “ribs” are an array of tiny aragonite wires, shrouded in a soft protein cushion. The second part is a ligament, an elastic layer over the fan’s outer edge.

As the hinge closes, the protein matrix helps keep the wires straight, preventing them from bending and breaking. Meanwhile, the outer ligament absorbs the tension from the hinge unfurling. Together, this configuration makes the hinge particularly hardy.

The authors placed hinges extracted from mussels in a machine that repeatedly forced them open and shut. This tested their prowess under long-term, repeated stress. Even after 1.5 million cycles, the authors found no sign of damage. In other words, if the mussel opened and shut its shells once a minute, every minute, for three years on end, its hinge would stay perfectly functional.
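
That once-a-minute figure checks out with quick arithmetic:

```python
# Sanity-checking the once-a-minute equivalence quoted above.
cycles = 1_500_000
minutes_per_year = 60 * 24 * 365
print(cycles / minutes_per_year)  # about 2.9 years of nonstop opening
```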

This makes the mussel’s hinge super-resistant to what engineers call “fatigue.” Everything from nuts and bolts to bridge supports builds up wear and tear from repeated use, just as your legs might feel tired if you’ve recently run a marathon. And, just like a pair of exhausted legs, a fatigued part is more likely to fail—with crippling consequences. “The bivalve shell hinge is particularly interesting not only for its fatigue resistance, but also for its ability to bend,” says Crane.

It’s surely tempting to imagine bizarre biopunk doors that open and close on the backs of indefatigable mussel hinges. While that’s almost certainly impractical, the authors believe that these hinges could inspire human-engineered parts that serve our purposes well.

[Related: Recycling one of the planet’s trickiest plastics just got a little easier]

In fact, inspired by the structure they found, Yu and his colleagues fashioned their own hinge from glass fibers embedded like fan ribs in a polymer matrix. They put their artificial hinge to the test, and found that it held up like the genuine, organic article—while other hinges, one with disorganized glass fibers and another with glass spheres, began to break and crack.

Yu says that their early effort isn’t meant for regular human use. But it demonstrates that we could create a mussel-like bend if we needed to. For instance, what if a mobile phone designer wants to make a folding touch-screen phone that needs a brittle material like glass?

“The fan-shaped-region-inspired design strategy provides a promising way to address this challenge,” Yu says. His group now plans to examine what those soft proteins do in the hinge.

But evolution and engineering play by different rules. It isn’t necessarily easy to emulate materials that have evolved over millions of years. “The finest-scale patterns in biological structures are often challenging and costly to replicate,” says Crane.

The post The secret to better folding phones might hinge on mussels appeared first on Popular Science.

How does electricity work? Let’s demystify the life-changing physics. https://www.popsci.com/technology/how-does-electricity-work/ Mon, 19 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=549308
A Tesla coil gives off current electricity, where the negatively charged electrons continuously move, just like they would through an electrical wire. Depositphotos

How current is your knowledge?


To the uninitiated, electricity might seem like a sort of hidden magic. It plays by laws of physics we can’t necessarily perceive with our eyes.

But most of our lives run on electricity. Anyone who has ever lived through a power outage knows how inconvenient it is. On a broader level, it’s hard to overstate just how vital the flow of electricity is to powering the functions of modern society.

“If I lose electricity, I lose telecommunications. I lose the financial sector. I lose water treatment. I can’t milk the cows. I can’t refrigerate food,” says Mark Petri, an electrical grid researcher at Argonne National Laboratory in Illinois. 

[Related: How to save electricity this summer]

Which makes it all the more important to know how electricity works, where it comes from, and how it gets to our homes.

How does electricity work?

The universe as we know it is governed by four fundamental forces: the strong nuclear force (which holds subatomic particles together inside atoms), the weak nuclear force (which guides some types of radioactivity), gravity, and electromagnetism (which governs the intrinsically linked concepts of electricity and magnetism). 

One of electromagnetism’s key tenets is that the subatomic particles that make up the cosmos can carry a positive charge, a negative charge, or no charge at all. To use the charged ones as a form of energy, we have to make them flow as electric current. The electricity we have on Earth mostly comes from the movement of negatively charged electrons.

But it takes more than a charge to keep electrons flowing. The particles don’t travel far before they run into an obstacle, such as a neighboring atom. That means electricity needs a material whose atoms have loose electrons, which can be knocked away to keep the current going. This type of material is known as a conductor. Most metals have conductive qualities, such as the copper that forms a lot of electrical wires.

Other materials, called insulators, have far more tightly bound electrons that aren’t easily pushed around. The plastic that coats most wires is an insulator, which is why you don’t get a nasty shock when you touch a cord or plug.

Some scientists and engineers think of electricity as a bit like water streaming through a pipe. The volume of water passing through a pipe section at a given time compares to the number of electrons flowing through a particular strand of wire, which scientists measure in amps. The water pressure that helps to push the fluid through is like the electrical voltage. When you multiply amps by volts, you compute the power or the amount of energy passing through the wire every second, which electricians measure in watts. The wattage of your microwave, then, is approximately the amount of electrical energy it uses per second.
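
In code, that relationship is a one-liner. The microwave figures below are illustrative, not from the article:

```python
# Power in watts is volts times amps. The microwave figures here are
# illustrative.
def watts(volts, amps):
    return volts * amps

# A North American outlet at 120 V supplying about 8.3 amps:
print(watts(120, 8.3))  # ~1,000 W, a typical microwave's draw
```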

How electrons carry voltage through wires

Thanks to Faraday’s law of induction, if a wire sits within a magnetic field and that magnetic field shifts, the change induces an electric current in the wire. This is why most of the world’s electricity is born from generators, which are typically rotating magnetic apparatuses. As a generator spins, it sends electricity shooting through a wire coiled around it.

[Related: The best electric generators for your home]

Powering a whole city calls for a colossal generator, potentially the size of a building. But it takes energy to make energy from that generator. In most fossil fuel and nuclear plants, the fuel source boils water into steam, which causes turbines to spin their respective generators. Hydro and wind generators take advantage of nature’s own motion, redirecting water or gusts of wind to do the spinning. Solar panels, meanwhile, work differently because they don’t need moving magnets at all. When light strikes a solar cell, it excites the electrons within the atoms of the material, causing them to flow out in a current.

It’s easier to transfer energy with lots of volts and fewer amps, because the power lost as heat in a wire grows with the square of the current. As such, long-distance power lines use thousands of volts to carry electricity away from power plants. That’s far too high for most buildings, so power grids rely on substations to lower the voltage for regular outlets and home electronics. North American buildings typically set their voltage to 120 volts; most of the rest of the world uses between 220 and 240 volts.

Grid current also doesn’t flow in just one direction—instead, it constantly switches back and forth, which engineers call alternating current. This enables it to travel stretches of up to several thousand miles. North American wires flip from one current direction to the other 60 times every second. In other parts of the globe, particularly in Europe and Africa, they alternate back and forth 50 times every second.
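
Here is what that waveform looks like moment to moment, as a short sketch. One detail worth knowing, and an assumption in the code: the quoted 120 volts is a root-mean-square (effective) value, so the wave actually peaks near 170 volts:

```python
import numpy as np

# A 60 Hz North American outlet, moment to moment. The quoted 120 V is
# an RMS (effective) value, so the waveform actually peaks near 170 V.
FREQ_HZ = 60
V_RMS = 120
v_peak = V_RMS * np.sqrt(2)  # about 169.7 V

t = np.linspace(0, 1 / FREQ_HZ, 5)  # a few instants across one cycle
voltage = v_peak * np.sin(2 * np.pi * FREQ_HZ * t)
print(np.round(voltage, 1))  # swings positive, then negative, then back
```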

That brings the current to your building’s breaker box. But how does that power actually get to your electronic devices? 

[Related: Why you need an uninterruptible power supply]

To keep a continuous flow of electricity, a system needs a complete circuit. Buildings everywhere are wired with incomplete circuits. A two-hole socket contains one “live” wire and one “neutral” wire. When you plug in a lamp, kitchen appliance, or phone charger, you’re completing that circuit, allowing electricity to flow from the live wire, through the device, and back through the neutral wire to deliver energy. 

Put another way, if you stick a finger into a live socket, you’re temporarily completing the circuit with your body (somewhat painfully).

An electrical worker suspended on high-voltage power lines in China against the sunset
An electrician carries out maintenance work on electric wires of a high-voltage power line project on September 28, 2022, in Lianyungang, China. Geng Yuhe / VCG via Getty Images

The future of electricity

Not long ago, electricity was still a luxury. In the late 1990s, nearly one-third of the world’s population lived in homes without electrical access. We’ve since cut that proportion by more than half—but nearly a billion people, mainly concentrated in sub-Saharan Africa, still don’t have power.

Historically, almost all electricity started at large power plants and ended at homes and businesses. But the transition to renewable energy is altering that process. On average, solar and wind farms are smaller than hulking coal plants and dams. On cloudy or windless days, giant batteries can back them up with stored power.

“What we have been seeing, and what we can expect to see in the future, is a major evolution of the grid,” says Petri.

[Related: Why hasn’t Henry Ford’s power grid become a reality?]

The infrastructure we build around electricity makes a difference, both for the health of the planet and people. In 2020, only 39 percent of the world’s electricity came from clean sources like nuclear and hydro; the rest came largely from CO2-emitting fossil fuels.

Fortunately, there is plenty of reason for optimism. By some accounts, solar power is now the cheapest energy source in human history, with wind power not far behind. Moreover, a growing number of utility users are installing rooftop solar panels, solar generators, heat pumps, and the like. “People’s homes are not just taking power from the grid,” says Petri. “They’re putting power back on the grid. It’s a much more complex system.”

The laws of electricity don’t change depending on where we choose to draw our current from. But the consequences of our decisions on how to use that power do matter.

Quantum computers are starting to become more useful https://www.popsci.com/technology/quantum-computer-error-technique/ Sat, 17 Jun 2023 11:00:00 +0000 https://www.popsci.com/?p=549382
cryostat infrastructure of a quantum computer
This method helps quantum computers give more useful answers. IBM

A new development could be promising for researchers in fields like material science, physics, and more.

This week, IBM researchers published a study in the journal Nature demonstrating how they used a 100-plus-qubit quantum computer to square up against a classical supercomputer. They pitted the two against one another on the task of simulating physics.

“One of the ultimate goals of quantum computing is to simulate components of materials that classical computers have never efficiently simulated,” IBM said in a press release. “Being able to model these is a crucial step towards the ability to tackle challenges such as designing more efficient fertilizers, building better batteries, and creating new medicines.” 

Quantum computers, which can represent information as zero, one, or both at the same time, are expected to be more effective than classical computers at solving certain problems, such as optimization, searching through an unsorted database, and simulating nature. But making a useful quantum computer has been hard, in part due to the delicate nature of qubits (the quantum equivalent of bits, the ones and zeros of classical computing). These qubits are super sensitive to noise, or disturbances from their surroundings, which can create errors in the calculations. As quantum processors get bigger, these small errors can add up.
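As a minimal sketch of what "zero, one, or both at the same time" means, a single qubit can be modeled as a two-entry vector of probability amplitudes. The toy example below uses plain NumPy, not real quantum hardware:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                          # the definite state |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # a gate that mixes states
superposed = hadamard @ ket0                         # now "both at once"
probabilities = np.abs(superposed) ** 2
print(probabilities)  # [0.5 0.5] -- a 50/50 chance of measuring 0 or 1
```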

[Related: How a quantum computer tackles a surprisingly difficult airport problem]

One way to get around the errors is to build a fault-tolerant quantum computer. The other is to work with the errors by mitigating them, correcting them, or canceling them out.

In the experiment publicized this week, IBM researchers worked with a 127-qubit Eagle quantum processor to model the spin dynamics of a material to predict properties such as how it responds to magnetic fields. In this simulation, they were able to generate large, entangled states where certain simulated atoms are correlated with one another. By using a technique called zero noise extrapolation, the team was able to separate the noise and elucidate the true answer. To confirm that the answers they were getting from the quantum computer were reliable, another team of scientists at UC Berkeley performed these same simulations on a set of classical computers—and the two matched up. 
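The idea behind zero noise extrapolation can be sketched in a few lines: run the same calculation at deliberately amplified noise levels, watch how the answer degrades, and extrapolate the trend back to zero noise. The toy model below simulates the concept with made-up numbers; it is an illustration, not IBM's actual pipeline:

```python
import numpy as np

noise_scales = np.array([1.0, 1.5, 2.0, 3.0])  # deliberate noise amplification
true_value = 0.80                              # noiseless answer (unknown in practice)
rng = np.random.default_rng(0)

# Pretend each run's result decays exponentially with noise, plus jitter.
measured = true_value * np.exp(-0.3 * noise_scales) + rng.normal(0, 0.005, 4)

# Fit the decay on a log scale, then evaluate the fit at noise scale zero.
coeffs = np.polyfit(noise_scales, np.log(measured), 1)
zero_noise_estimate = np.exp(np.polyval(coeffs, 0.0))
print(f"best noisy run: {measured[0]:.3f}, extrapolated: {zero_noise_estimate:.3f}")
```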

[Related: In photos: Journey to the center of a quantum computer]

However, classical computers have an upper limit when it comes to these types of problems, especially when the models become more complex. Although IBM’s quantum processor is still a ways away from quantum supremacy—where it can reliably outperform a classical computer on the same task—proving that it can provide useful answers even in the presence of noise is a notable accomplishment.  

“This is the first time we have seen quantum computers accurately model a physical system in nature beyond leading classical approaches,” Darío Gil, senior vice president and director of IBM Research, said in the press release. “To us, this milestone is a significant step in proving that today’s quantum computers are capable, scientific tools that can be used to model problems that are extremely difficult – and perhaps impossible – for classical systems, signaling that we are now entering a new era of utility for quantum computing.”

What happens if AI grows smarter than humans? The answer worries scientists. https://www.popsci.com/science/ai-singularity/ Mon, 12 Jun 2023 10:00:00 +0000 https://www.popsci.com/?p=547500
Agent Smith clones from The Matrix to show the concept of singularity in AI and AGI
With each iteration, Singularity could get more invincible—and dangerous. Warner Bros.

Some AI experts have begun to confront the 'Singularity.' What they see scares them.

In 1993, computer scientist and sci-fi author Vernor Vinge predicted that within three decades, we would have the technology to create a form of intelligence that surpasses our own. “Shortly after, the human era will be ended,” Vinge said.

As it happens, 30 years later, the idea of an artificially created entity that can surpass—or at least match—human capabilities is no longer the domain of speculators and authors. Ranks of AI researchers and tech investors are seeking what they call artificial general intelligence (AGI): an entity capable of human-level performance at all kinds of intellectual tasks. If humans produce a successful AGI, some researchers now believe, “the end of the human era” will no longer be a vague, distant possibility.  

[Related: No, the AI chatbots still aren’t sentient]

Futurists often credit Vinge with popularizing what many commentators have called “the Singularity.” He believed that technological progress could eventually spawn an entity with capabilities surpassing the human brain. Its introduction to society would warp the world beyond recognition—a “change comparable to the rise of human life on Earth,” in Vinge’s own words.

Perhaps it’s easiest to imagine the Singularity as a powerful AI, but Vinge envisioned it in other ways. Biotech or electronic enhancements might tweak the human brain to be faster and smarter, combining, say, the human mind’s intuition and creativity with a computer’s processing power and information access to perform superhuman feats. Or as a more mundane example, consider how the average smartphone user has powers that would awe a time traveler from 1993.

“The whole point is that, once machines take over the process of doing science and engineering, the progress is so quick, you can’t keep up,” says Roman Yampolskiy, a computer scientist at the University of Louisville.

Already, Yampolskiy sees a microcosm of that future in his own field, where AI researchers are publishing an incredible amount of work at a rapid rate. “As an expert, you no longer know what the state of the art is,” he says. “It’s just evolving too quickly.”

What is superhuman intelligence?

While Vinge didn’t lay out any one path to the Singularity, some experts think AGI is the key to getting there through computer science. Others contend that the term is a meaningless buzzword. In general, AGI describes a system that matches human performance in any intellectual task.

If we develop AGI, it might open the door to a future of creating a superhuman intelligence. When applied to research, that intelligence could then produce its own new discoveries and new technologies at a breakneck pace. For instance, imagine a hypothetical AI system better than any real-world computer scientist. Now, imagine that system in turn tasked with designing better AI systems. The result, some researchers believe, could be an exponential acceleration of AI’s capabilities.

[Related: Engineers finally peeked inside a deep neural network]

That may pose a problem, because we don’t fully understand why many AI systems behave in the ways they do—a problem that may never disappear. Yampolskiy’s work suggests that we will never be able to reliably predict what an AGI will be able to do. Without that ability, in Yampolskiy’s mind, we will be unable to reliably control it. The consequences of that could be catastrophic, he says.

But predicting the future is hard, and AI researchers around the world are far from unified on the issue. In mid-2022, the think tank AI Impacts surveyed 738 researchers’ opinions on the likelihood of a Singularity-esque scenario. They found a split: 33 percent replied that such a fate was “likely” or “quite likely,” while 47 percent replied it was “unlikely” or “quite unlikely.”

Sameer Singh, a computer scientist at the University of California, Irvine, says that the lack of a consistent definition for AGI—and Singularity, for that matter—makes the concepts difficult to empirically examine. “Those are interesting academic things to be thinking about,” he explains. “But, from an impact point of view, I think there is a lot more that could happen in society that’s not just based on this threshold-crossing.”

Indeed, Singh worries that focusing on possible futures obscures the very real impacts that AI’s failures or follies are already having. “When I hear of resources going to AGI and these long-term effects, I feel like it’s taking away from the problems that actually matter,” he says. It’s already well established that the models can create racist, sexist, and factually incorrect output. From a legal point of view, AI-generated content often clashes with copyright and data privacy laws. Some analysts have begun blaming AI for inciting layoffs and displacing jobs.

“It’s much more exciting to talk about, ‘we’ve reached this science-fiction goal,’ rather than talk about the actual realities of things,” says Singh. “That’s kind of where I am, and I feel like that’s kind of where a lot of the community that I work with is.” 

Do we need AGI?

Reactions to an AI-powered future reflect one of many broader splits in the community building, fine-tuning, expanding, and monitoring models. Computer science pioneers Geoffrey Hinton and Yoshua Bengio both recently expressed regrets and a loss of direction over a field they see as spiraling out of control. Some researchers have called for a six-month moratorium on developing AI systems more powerful than GPT-4. 

Yampolskiy backs the call for a pause, but he doesn’t believe half a year—or one year, or two, or any timespan—is enough. He is unequivocal in his judgment: “The only way to win is not to do it.”

The ISS’s latest delivery includes space plants and atmospheric lightning monitors https://www.popsci.com/technology/iss-spacex-experiments-june-2023/ Tue, 06 Jun 2023 16:00:00 +0000 https://www.popsci.com/?p=546234
Computer illustration of ISS with docked spacecraft
A SpaceX Dragon cargo craft docked with 7,000 pounds of material. NASA

SpaceX's Dragon craft autonomously docked with the ISS early Tuesday morning.

The International Space Station received roughly 7,000 pounds of supplies and scientific experiment materials early Tuesday morning following the successful autonomous docking of a SpaceX Dragon cargo spacecraft. According to NASA, the Dragon will remain attached to the ISS for about three weeks before returning to Earth with research and cargo. In addition to a pair of International Space Station Roll Out Solar Arrays (IROSAs) designed to expand the microgravity complex’s energy-production ability, ISS crew members are receiving materials for a host of new and ongoing experiments.

[Related: Microgravity tomatoes, yogurt bacteria, and plastic eating microbes are headed to the ISS.]

THOR, an aptly named investigation courtesy of the European Space Agency, will observe Earth’s thunderstorms from above the atmosphere to examine and document electrical activity. Researchers plan to specifically analyze the “inception, frequency, and altitude of recently discovered blue discharges,” i.e. lightning occurring within the upper atmosphere. Scientists still know very little about such phenomena’s effects on the planet’s climate and weather, but the upcoming observations could potentially shed more light on the processes.

Meanwhile, researchers are hoping to stretch out telomeres in microgravity via Genes in Space-10, part of an ongoing national contest for students in grades 7 through 12 to develop their own biotech experiments. These genetic structures cap and protect humans’ chromosomes, but generally shorten as a person ages. Observing telomere lengthening in ISS microgravity will give scientists a chance to determine if the size change relates to stem cell proliferation. Results could help NASA and other researchers better understand effects on astronauts’ health during long-term missions, a particularly topical subject given their hopes for upcoming excursions to the moon and Mars.

The ISS will also deploy the Educational Space Science and Engineering CubeSat Experiment (ESSENCE), a tiny satellite housing a wide-angle camera capable of monitoring ice and permafrost thawing within the Canadian Arctic. This satellite comes alongside another student collaboration project called Iris, which is meant to observe how geological samples weather upon exposure to direct solar and background cosmic radiation.

[Related: The ISS’s latest arrivals: a 3D printer, seeds, and ovarian cow cells.]

Finally, a set of plants that germinated from seeds first produced in space and subsequently traveled to Earth are returning to the ISS as part of Plant Habitat-03. According to NASA, plant life often adapts to the environmental stresses imposed by spaceflight, but it’s still unclear whether these changes are genetically passed on to future generations. PH-03 will hopefully help scientists better understand these issues, which could prove critical to food production during future space missions and exploration efforts.

Physicists take first-ever X-rays of single atoms https://www.popsci.com/science/one-atom-x-ray-characterization/ Fri, 02 Jun 2023 18:00:00 +0000 https://www.popsci.com/?p=545645
Argonne National Laboratory's Advanced Photon Source.
The particle accelerator at Argonne National Laboratory provided the intense X-rays needed to image single atoms. Argonne National Laboratory/Flickr

This technique could help materials scientists control chemical reactions with better precision.

Perhaps you think of X-rays as the strange, penetrating radiation that passes through your body to scan broken bones or teeth. When you get an X-ray image taken, your medical professionals are essentially using it to characterize your body.

Many scientists use X-rays in a very similar role—they just have different targets. Instead of scanning living things (which likely wouldn’t last long when exposed to the high-powered research X-rays), they scan molecules or materials. In the past, scientists have X-rayed batches of atoms, to understand what they are and predict how those atoms might fare in a particular chemical reaction.

But no one has been able to X-ray an individual atom—until now. Physicists used X-rays to study the insides of two different single atoms, in work published in the journal Nature on Wednesday.

“The X-ray…has been used in so many different ways,” says Saw-Wai Hla, a physicist at Ohio University and Argonne National Laboratory, and an author of the paper. “But it’s amazing what people don’t know. We cannot measure one atom—until now.”

Beyond atomic snapshots

Characterizing an atom doesn’t mean just snapping a picture of it; scientists first did that way back in 1955. Since the 1980s, atom-photographers’ tool of choice has been the scanning tunneling microscope (STM). The key to an STM is its bacterium-sized tip. As scientists move the tip a millionth of a hair’s breadth above the atom’s surface, electrons tunnel through the space in between, creating a current. The tip detects that current, and the microscope transforms it into an image. (An STM can drag and drop atoms, too. In 1989, two scientists at IBM became the first STM artists, spelling the letters “IBM” with xenon atoms.)
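The trick that makes this work is the extreme distance-sensitivity of quantum tunneling: to a good approximation, the tunneling current falls off exponentially with the size of the gap,

\[
I \propto V e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m\phi}}{\hbar}
\]

where d is the tip-sample distance, V the applied voltage, \(\phi\) the surface’s work function, and m the electron mass. Shift the tip by a single ångström and the current changes by roughly an order of magnitude, which is what lets an STM pick out individual atoms.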

But actually characterizing an atom—scanning the lone object, sorting it by its element, decoding its properties, understanding how it will behave in chemical reactions—is a far more complex endeavor. 

X-rays allow scientists to characterize larger batches of atoms. When X-rays strike atoms, they transfer their energy into those atoms’ electrons, exciting them. All good things must end, of course, and when those electrons settle back down, they release their newfound energy as, again, X-rays. Scientists can analyze that fresh radiation to deduce the properties of the atoms that emitted it.
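Those re-emitted X-rays act as elemental fingerprints. The classic statement of this is Moseley's law, which ties the frequency ν of an atom's characteristic X-rays to its atomic number Z:

\[
\sqrt{\nu} = k_1 (Z - k_2)
\]

where k_1 and k_2 are constants for a given emission line. Measure the frequency, and you can read off which element did the emitting.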

[Related: How scientists managed to store information in a single atom]

That’s a fantastic tool, and it’s been a boon to scientists who need to tinker with molecular structures. X-ray spectroscopy, as the process is called, helped create COVID-19 vaccines, for instance. The technique allows scientists to study a group of atoms—identifying which elements are in a batch and what their electron configurations are in general—but it doesn’t enable scientists to match them up to individual atoms. “We might be able to see, ‘Oh, there’s a whole team of soccer players,’ and ‘There’s a whole team of dancers,’ but we weren’t able to identify a single soccer player or a single dancer,” says Volker Rose, a physicist at Argonne National Laboratory and another of the authors.

Peering with high-power beams

You can’t create a molecule-crunching machine with the X-ray source at your dentist’s office. To reach its full potential, you need a beam that is far brighter, far more powerful. You’ve got to go to a particle accelerator known as a synchrotron.

The device the Nature authors used, the Advanced Photon Source at Argonne National Laboratory, zips electrons around a ring two-thirds of a mile around in the plains of Illinois. Rather than crashing particles into each other, however, a synchrotron sends its high-speed electrons through an undulating magnetic gauntlet. As the electrons pass through, they unleash much of their energy as an X-ray beam.
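The wavelength of the light shed in that magnetic gauntlet follows a standard accelerator-physics relation: on its axis, an undulator with magnet period λ_u emits at

\[
\lambda = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right)
\]

where γ is the electrons’ Lorentz factor and K is the device’s deflection parameter. Because γ is enormous for near-light-speed electrons, the output wavelength shrinks down into the X-ray regime.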

A diagram showing X-rays illuminating a single iron atom (the red ball marked Fe), which provides elemental and chemical information when the tip detects excited electrons. Saw-Wai Hla

The authors combined the power of such an X-ray beam with the precision of an STM. In this case, the X-rays energized the atom’s electrons. The STM, however, pulled some of the electrons out, giving scientists a far closer look. Scientists have given this process a name that wouldn’t feel out of place in a PlayStation 1 snowboarding game: synchrotron X-ray scanning tunneling microscopy (SX-STM).

[Related: How neutral atoms could help power next-gen quantum computers]

Combining X-rays and STM isn’t so simple. It’s more than a matter of technical tinkering: the two are separate technologies used by two largely separate communities of scientists. Getting them to work together took years of effort.

Using SX-STM, the authors successfully detected the electron arrangement within two different atoms: one of iron, and another of terbium, a rare-earth element (number 65) that’s often used in magnet-containing electronic devices and in the green phosphors of fluorescent lamps. “That’s totally new, and wasn’t possible before,” says Rose.

The scientists believe that their technique can find use in a broad array of fields. Quantum computers can store information in atoms’ electron states; researchers could use this technique to read them. If the technique catches on, materials scientists might be able to control chemical reactions with far greater precision.

Hla believes that SX-STM characterization can build upon the work that X-ray science already does. “The X-ray has changed many lives in our civilization,” he says. For instance, knowing what specific atoms do is critical to creating better materials and to studying proteins, perhaps for future immunizations. 

Now that Hla and his colleagues have proven it’s possible to examine one or two atoms at a time, he says the road is clear for scientists to characterize whole batches of them at once. “If you can detect one atom,” Hla says, “you can detect 10 atoms and 20 atoms.”

Danish painters used beer to create masterpieces, but not the way you think https://www.popsci.com/science/beer-byproducts-danish-art/ Thu, 25 May 2023 10:00:00 +0000 https://www.popsci.com/?p=543346
C.W. Eckersberg's painting "The 84-Gun Danish Warship Dronning Marie in the Sound” contains beer byproducts in its canvas primer.
C.W. Eckersberg's painting "The 84-Gun Danish Warship Dronning Marie in the Sound” contains beer byproducts in its canvas primer. Statens Museum for Kunst

Nineteenth-century craftspeople made do with what they had. In Denmark, they had beer leftovers.

Behind a beautiful oil-on-canvas painting is, well, its canvas. To most art museum visitors, that fabric might be no more than an afterthought. But the canvas and its chemical composition are tremendously important to scientists and conservators who devote their lives to studying and caring for works of art.

When they examine a canvas, sometimes those art specialists are surprised by what they find. For instance, few conservators expected a 200-year-old canvas to contain proteins from yeast and fermented grains: the fingerprints of beer-brewing.

But those very proteins sit in the canvases of paintings from early 19th century Denmark. In a paper published on Wednesday in the journal Science Advances, researchers from across Europe say that Danes may have applied brewing byproducts as a base layer to a canvas before painters had their way with it.

“To find these yeast products—it’s not something that I have come across before,” says Cecil Krarup Andersen, an art conservator at the Royal Danish Academy, and one of the authors. “For us also, as conservators, it was a big surprise.”

The authors did not set out in search of brewing proteins. Instead, they sought traces of animal-based glue, which they knew was used to prepare canvases. Conservators care about animal glue since it reacts poorly with humid air, potentially cracking and deforming paintings over the decades.

[Related: 5 essential apps for brewing your own beer]

The authors chose 10 paintings created between 1828 and 1837 by two Danes: Christoffer Wilhelm Eckersberg, the so-called “Father of Danish Painting,” fond of painting ships and sea life; and Christen Schiellerup Købke, one of Eckersberg’s students at the Royal Danish Academy of Fine Arts, who went on to become a distinguished artist in his own right.

The authors tested the paintings with protein mass spectrometry: a technique that allows scientists to break a sample down into the proteins within. The technique isn’t selective, meaning that the experimenters could find substances they weren’t seeking.

Mass spectrometry destroys its sample. Fortunately, conservators in the 1960s had trimmed the paintings’ edges during a preservation treatment. The National Gallery of Denmark—the country’s largest art museum—had preserved the scraps, allowing the authors to test them without actually touching the original paintings.

Scraps from eight of the 10 paintings contained structural proteins from cows, sheep, or goats, whose body parts might have been reduced into animal glue. But seven paintings also contained something else: proteins from baker’s yeast and from fermented grains—wheat, barley, buckwheat, rye.

[Related: Classic Mexican art stood the test of time with the help of this secret ingredient]

That yeast and those grains feature in the process of brewing beer. While beer does occasionally turn up in recipes for 19th century house-paint, it’s alien to works of fine art.

“We weren’t even sure what they meant,” says study author Fabiana Di Gianvincenzo, a biochemist at the University of Copenhagen in Denmark and the University of Ljubljana in Slovenia.

The authors considered the possibility that stray proteins might have contaminated the canvas from the air. But three of the paintings contained virtually no brewer’s proteins at all, while the other seven contained too much protein for contamination to reasonably explain.

“It was not something random,” says Enrico Cappellini, a biochemist at the University of Copenhagen in Denmark, and another of the authors.

To learn more, the authors whipped up some mock substances containing those ingredients: recipes that 19th-century Danes could have created. The yeast proved an excellent emulsifier, creating a smooth, glue-like paste. If applied to a canvas, the paste would create a smooth base layer that painters could beautify with oil colors.

A mock primer made in the laboratory.
Making a paint paste in the lab, 19th-century style. Mikkel Scharff

Eckersberg, Købke, and their fellow painters likely didn’t interact with the beer. The Royal Danish Academy of Fine Arts provided its professors and students with pre-prepared art materials. Curiously, the paintings that contained grain proteins all came from earlier in the time period, between 1827 and 1833. Købke then left the Academy and produced the three paintings that didn’t contain grain proteins, suggesting that his new source of canvases didn’t use the same preparation method.

The authors aren’t certain how widespread the brewer’s method might have been. If the technique was localized to early 19th century Denmark or even to the Academy, art historians today could use the knowledge to authenticate a painting from that era, which historians sometimes call the Danish Golden Age. 

This was a time of blossoming in literature, in architecture, in sculpture, and, indeed, in painting. In art historians’ reckoning, it was when Denmark developed its own unique painting tradition, which vividly depicted Norse mythology and the Danish countryside. The authors’ work lets them glimpse lost details of the society under that Golden Age. “Beer is so important in Danish culture,” says Cappellini. “Finding it literally at the base of the artwork that defined the origin of modern painting in Denmark…is very meaningful.” 

[Related: The world’s art is under attack—by microbes]

The work also demonstrates how craftspeople repurposed the materials they had. “Denmark was a very poor country at the time, so everything was reused,” says Andersen. “When you have scraps of something, you could boil it to glue, or you could use it in the grounds, or use it for canvas, to paint on.”

The authors are far from done. For one, they want to study their mock substances as they age. Combing through the historical record—artists’ diaries, letters, books, and other period documents—might also reveal tantalizing details of who used the yeast and how. Their work, then, makes for a rather colorful crossover of science with art conservation. “That has been the beauty of this study,” says Andersen. “We needed each other to get to this result.”

This story has been updated to clarify the source of canvases for Købke’s later works.

How ‘The Legend of Zelda: Tears of the Kingdom’ plays with the rules of physics https://www.popsci.com/science/legend-of-zelda-physics/ Sun, 14 May 2023 17:00:00 +0000 https://www.popsci.com/?p=541013
Link falls based on a version of physics in Tears of the Kingdom.
Gravity plays a big role in the new 'Zelda' game, as Link soars and jumps from great heights. Nintendo

Link's world is based on our reality, but its natural laws get bent for magic and fun.

Video games are back in a big way. The Legend of Zelda: Tears of the Kingdom is one of the most anticipated games this year, sure to draw in hardcore players and casual fans alike. From the trailers and teasers, you can see how Tears of the Kingdom will feature ways to make flying machines and manipulate time. It’s a natural question, then, to wonder how Zelda’s laws of nature line up with real-world physics.

In one of the trailers, Link takes flight on a paraglider, dropping from any height and exploring ravines and chasms at speeds that could kill a human in real life. Like its predecessor, Breath of the Wild, Tears of the Kingdom offers an immersive, somewhat realistic world that still incorporates plenty of magical and superhuman abilities. Game developers say that this bending of familiar rules adds to the game’s fun and the player’s enjoyment.

Charles Pratt, an assistant arts professor at the NYU Game Center who has used physics when developing games, says the fantastical elements of Zelda work because they “follow people’s intuitions about physics” and use players’ understanding of real-life rules as a jumping-off point.

“Gravity isn’t exactly gravity, right?” Pratt says. “Gravity gets applied in certain cases, and not in others to make it feel like you’re bounding through the air. Because jumping is really fun.”
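A minimal sketch of that “gravity in certain cases” idea might look like the following; this is an illustration of the general technique, not Nintendo’s code, and the constants are invented for effect:

```python
GRAVITY = -9.8       # real-world baseline acceleration, m/s^2
GLIDE_FACTOR = 0.1   # invented tuning knob: weaker pull while gliding

def step_vertical(velocity_y: float, gliding: bool, dt: float) -> float:
    """Advance vertical velocity one frame, bending gravity for fun."""
    g = GRAVITY * (GLIDE_FACTOR if gliding else 1.0)
    return velocity_y + g * dt

v = 0.0
for _ in range(60):  # simulate one second at 60 frames per second
    v = step_vertical(v, gliding=True, dt=1 / 60)
print(f"{v:.2f} m/s")  # a floaty -0.98 m/s instead of a plummeting -9.8 m/s
```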

Breath of the Wild, which came out in 2017, was a smash hit. It sold 29.81 million copies and shaped a whole generation of video games, pushing developers to make more open-world titles—and arguably influencing Pokémon to open up its borders and feature a wild area for players to ride around and explore.

Aspects of Link’s world line up with the Earth that we inhabit and recognize, and, like our reality, it follows the basic rules of physics. Breath of the Wild operates under the same natural laws as Tears. The general force of gravity still exists. Projectile objects fly on a curved trajectory and need to be aimed with skill to hit their targets. Items lose durability and break over time.

[Related: Marvel’s Spider-Man PS4 game twists physics to make web-swinging super fun]

Breath of the Wild also played around with the elements. Metal objects conduct electricity and attract lightning during a storm. Link takes damage when entering a cold environment without wearing the right clothes. And setting enemies on fire deals them damage over time and can set the nearby area ablaze.

“If you drop a stone, it falls, and if you drop a piece of wood in water, it floats, but unlike the real world, it looks like you will have access to jetpacks and magical objects that our world doesn’t,” says Lasse Astrup, lead designer on the new Apple Arcade game What the Car?, which features its own unusual physics, in which players can drive cars that have multiple human legs or propel into the sky as rockets. Astrup, who is no stranger to exploring physics in video games, says he plans to buy the new Zelda game and spend days playing it—and then seeing what kinds of creations other gamers come up with.

Weird physics—whether in Zelda games or one of Astrup’s creations—makes the titles more fun, Astrup says. “You never have full control over what happens in the scene or which way a thing flies when it explodes,” he says. “This allows for emergent gameplay where players can explore and find their own solutions.”

Other ways that Tears of the Kingdom defies our laws of physics include how Link can stand on a fast-accelerating platform without falling over, whereas a regular human in our world would be knocked over by the accelerating force.

Lindley Winslow, an experimental nuclear and particle physicist at the Massachusetts Institute of Technology, says that, based on the trailer, “It continues to be a beautifully coded game. The details are what make it compelling, the movement of the grass, the air moving off the paraglider.”

[Related: Assassin’s Creed Valhalla avoids Dark Age cliches thanks to intense research (and Google Earth)]

Winslow adds, “The power comes from the fact that the physics are correct until it is fantastical. This allows us to immerse ourselves in the world and believe in the fantastical. My favorite is the floating islands.” Magic also exists in Tears of the Kingdom: Link can use his extraordinary powers to stop time, use magnets, and lift extremely heavy objects.

Alex Rose, an indie game developer who is also a physics programmer and lecturer at the University of Applied Science Vienna, points out that there’s plenty of accurate physics in Tears of the Kingdom, too. Link’s terminal velocity drops after he spreads out his body, and it drops even further when he releases his parachute.
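That tracks with the real formula for terminal velocity, the speed at which drag balances gravity:

\[
v_t = \sqrt{\frac{2mg}{\rho A C_d}}
\]

where m is the faller’s mass, g the gravitational acceleration, ρ the air density, A the frontal area, and C_d the drag coefficient. Spreading out the body or popping a parachute increases A and C_d, so the terminal velocity drops, just as the game depicts.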

Tears of the Kingdom introduces a system to concoct fanciful machines: platforms lifted into the air by balloons and jetpack-like rockets affixed to Link’s shield. In our world, a person riding a platform would get sent flying by inertia when the vehicle they were riding turned the corner quickly. But Link, being a video game character, is able to stay on the platform, even during quick turns, Rose notes. He’s also somehow able to sling around an arm rocket without losing his limbs.
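The real-world rule Link is flouting here is centripetal acceleration: a rider rounding a curve of radius r at speed v has to be held on by a sideways force supplying

\[
a = \frac{v^2}{r}
\]

Tighten the turn or speed up, and the friction underfoot eventually can’t keep pace; in our world, the rider slides off.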

“In the real world, even the best gymnast would be sent flying to the ground,” Rose says, “like an old firecracker stunt from a certain MTV show.”

How a 14-year-old kid became the youngest person to achieve nuclear fusion https://www.popsci.com/science/article/2012-02/boy-who-played-fusion/ Mon, 18 Mar 2019 21:22:34 +0000 https://www.popsci.com/uncategorized/science-article-2012-02-boy-who-played-fusion/
Taylor Wilson, the boy who built a nuclear reactor as a kid, in his kitchen with his family
Taylor Wilson moved to suburban Reno, Nevada, with his parents, Kenneth and Tiffany, and his brother Joey to attend Davidson Academy, a school for gifted students. Bryce Duffy

Taylor Wilson always dreamed of creating a star. Then he became one.

This story from the March 2012 issue of Popular Science covered the nuclear fusion experiments of Taylor Wilson, who was then 16. Wilson is currently 28 and a nuclear physicist who’s collaborated with multiple US agencies on developing reactors and defense technology. The author of this profile, Tom Clynes, went on to write a book about Wilson titled The Boy Who Played With Fusion.

“PROPULSION,” the nine-year-old says as he leads his dad through the gates of the U.S. Space and Rocket Center in Huntsville, Alabama. “I just want to see the propulsion stuff.”

A young woman guides their group toward a full-scale replica of the massive Saturn V rocket that brought America to the moon. As they duck under the exhaust nozzles, Kenneth Wilson glances at his awestruck boy and feels his burden beginning to lighten. For a few minutes, at least, someone else will feed his son’s boundless appetite for knowledge.

Then Taylor raises his hand, not with a question but an answer. He knows what makes this thing, the biggest rocket ever launched, go up.

And he wants—no, he obviously needs—to tell everyone about it, about how speed relates to exhaust velocity and dynamic mass, about payload ratios, about the pros and cons of liquid versus solid fuel. The tour guide takes a step back, yielding the floor to this slender kid with a deep-Arkansas drawl, pouring out a torrent of Ph.D.-level concepts as if there might not be enough seconds in the day to blurt it all out. The other adults take a step back too, perhaps jolted off balance by the incongruities of age and audacity, intelligence and exuberance.

As the guide runs off to fetch the center’s director—You gotta see this kid!—Kenneth feels the weight coming down on him again. What he doesn’t understand just yet is that he will come to look back on these days as the uncomplicated ones, when his scary-smart son was into simple things, like rocket science.

This is before Taylor would transform the family’s garage into a mysterious, glow-in-the-dark cache of rocks and metals and liquids with unimaginable powers. Before he would conceive, in a series of unlikely epiphanies, new ways to use neutrons to confront some of the biggest challenges of our time: cancer and nuclear terrorism. Before he would build a reactor that could hurl atoms together in a 500-million-degree plasma core—becoming, at 14, the youngest individual on Earth to achieve nuclear fusion.

WHEN I MEET Taylor Wilson, he is 16 and busy—far too busy, he says, to pursue a driver’s license. And so he rides shotgun as his father zigzags the family’s Land Rover up a steep trail in the Virginia Mountains north of Reno, Nevada, where they’ve come to prospect for uranium.

From the backseat, I can see Taylor’s gull-like profile, his forehead plunging from under his sandy blond bangs and continuing, in an almost unwavering line, along his prominent nose. His thinness gives him a wraithlike appearance, but when he’s lit up about something (as he is most waking moments), he does not seem frail. He has spent the past hour—the past few days, really—talking, analyzing, and breathlessly evangelizing about nuclear energy. We’ve gone back to the big bang and forward to mutually assured destruction and nuclear winter. In between are fission and fusion, Einstein and Oppenheimer, Chernobyl and Fukushima, matter and antimatter.

“Where does it come from?” Kenneth and his wife, Tiffany, have asked themselves many times. Kenneth is a Coca-Cola bottler, a skier, an ex-football player. Tiffany is a yoga instructor. “Neither of us knows a dang thing about science,” Kenneth says.

Almost from the beginning, it was clear that the older of the Wilsons’ two sons would be a difficult child to keep on the ground. It started with his first, and most pedestrian, interest: construction. As a toddler in Texarkana, the family’s hometown, Taylor wanted nothing to do with toys. He played with real traffic cones, real barricades. At age four, he donned a fluorescent orange vest and hard hat and stood in front of the house, directing traffic. For his fifth birthday, he said, he wanted a crane. But when his parents brought him to a toy store, the boy saw it as an act of provocation. “No,” he yelled, stomping his foot. “I want a real one.”

This is about the time any other father might have put his own foot down. But Kenneth called a friend who owns a construction company, and on Taylor’s birthday a six-ton crane pulled up to the party. The kids sat on the operator’s lap and took turns at the controls, guiding the boom as it swung above the rooftops on Northern Hills Drive.

To the assembled parents, dressed in hard hats, the Wilsons’ parenting style must have appeared curiously indulgent. In a few years, as Taylor began to get into some supremely dangerous stuff, it would seem perilously laissez-faire. But their approach to child rearing is, in fact, uncommonly intentional. “We want to help our children figure out who they are,” Kenneth says, “and then do everything we can to help them nurture that.”

At 10, Taylor hung a periodic table of the elements in his room. Within a week he memorized all the atomic numbers, masses and melting points. At the family’s Thanksgiving gathering, the boy appeared wearing a monogrammed lab coat and armed with a handful of medical lancets. He announced that he’d be drawing blood from everyone, for “comparative genetic experiments” in the laboratory he had set up in his maternal grandmother’s garage. Each member of the extended family duly offered a finger to be pricked.

The next summer, Taylor invited everyone out to the backyard, where he dramatically held up a pill bottle packed with a mixture of sugar and stump remover (potassium nitrate) that he’d discovered in the garage. He set the bottle down and, with a showman’s flourish, ignited the fuse that poked out of the top. What happened next was not the firecracker’s bang everyone expected, but a thunderous blast that brought panicked neighbors running from their houses. Looking up, they watched as a small mushroom cloud rose, unsettlingly, over the Wilsons’ yard.

For his 11th birthday, Taylor’s grandmother took him to Books-A-Million, where he picked out The Radioactive Boy Scout, by Ken Silverstein. The book told the disquieting tale of David Hahn, a Michigan teenager who, in the mid-1990s, attempted to build a breeder reactor in a backyard shed. Taylor was so excited by the book that he read much of it aloud: the boy raiding smoke detectors for radioactive americium . . . the cobbled-together reactor . . . the Superfund team in hazmat suits hauling away the family’s contaminated belongings. Kenneth and Tiffany heard Hahn’s story as a cautionary tale. But Taylor, who had recently taken a particular interest in the bottom two rows of the periodic table—the highly radioactive elements—read it as a challenge. “Know what?” he said. “The things that kid was trying to do, I’m pretty sure I can actually do them.”

Taylor Wilson in a red sweater looking to the right of the camera
Both Wilson boys went to a science and mathematics school for gifted students. Bryce Duffy

A rational society would know what to do with a kid like Taylor Wilson, especially now that America’s technical leadership is slipping and scientific talent increasingly has to be imported. But by the time Taylor was 12, both he and his brother, Joey, who is three years younger and gifted in mathematics, had moved far beyond their school’s (and parents’) ability to meaningfully teach them. Both boys were spending most of their school days on autopilot, their minds wandering away from course work they’d long outgrown.

David Hahn had been bored too—and, like Taylor, smart enough to be dangerous. But here is where the two stories begin to diverge. When Hahn’s parents forbade his atomic endeavors, the angry teenager pressed on in secret. But Kenneth and Tiffany resisted their impulse to steer Taylor toward more benign pursuits. That can’t be easy when a child with a demonstrated talent and fondness for blowing things up proposes to dabble in nukes.

Kenneth and Tiffany agreed to let Taylor assemble a “survey of everyday radioactive materials” for his school’s science fair. Kenneth borrowed a Geiger counter from a friend at Texarkana’s emergency-management agency. Over the next few weekends, he and Tiffany shuttled Taylor around to nearby antique stores, where he pointed the clicking detector at old radium-dial alarm clocks, thorium lantern mantles and uranium-glazed Fiesta plates. Taylor spent his allowance money on a radioactive dining set.

Drawn in by what he calls “the surprise properties” of radioactive materials, he wanted to know more. How can a speck of metal the size of a grain of salt put out such tremendous amounts of energy? Why do certain rocks expose film? Why does one isotope decay away in a millionth of a second while another has a half-life of two million years?
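The answer to that last question comes down to a single number. Radioactive decay follows an exponential law,

\[
N(t) = N_0 e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}
\]

where N_0 is the starting number of atoms and λ the decay constant; an isotope that vanishes in a microsecond and one that lingers for two million years differ only in the size of λ.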

As Taylor began to wrap his head around the mind-blowing mysteries at the base of all matter, he could see that atoms, so small but potentially so powerful, offered a lifetime’s worth of secrets to unlock. Whereas Hahn’s resources had been limited, Taylor found that there was almost no end to the information he could find on the Internet, or to the oddities that he could purchase and store in the garage.

On top of tables crowded with chemicals and microscopes and germicidal black lights, an expanding array of nuclear fuel pellets, chunks of uranium and “pigs” (lead-lined containers) began to appear. When his parents pressed him about safety, Taylor responded in the convoluted jargon of inverse-square laws and distance intensities, time doses and roentgen submultiples. With his newfound command of these concepts, he assured them, he could master the furtive energy sneaking away from those rocks and metals and liquids—a strange and ever-multiplying cache that literally cast a glow into the corners of the garage.

Kenneth asked a nuclear-pharmacist friend to come over to check on Taylor’s safety practices. As far as he could tell, the friend said, the boy was getting it right. But he warned that radiation works in quick and complex ways. By the time Taylor learned from a mistake, it might be too late.

Lead pigs and glazed plates were only the beginning. Soon Taylor was getting into more esoteric “naughties”—radium quack cures, depleted uranium, radio-luminescent materials—and collecting mysterious machines, such as the mass spectrometer given to him by a former astronaut in Houston. As visions of Chernobyl haunted his parents, Taylor tried to reassure them. “I’m the responsible radioactive boy scout,” he told them. “I know what I’m doing.”

One afternoon, Tiffany ducked her head out of the door to the garage and spotted Taylor, in his canary yellow nuclear-technician’s coveralls, watching a pool of liquid spreading across the concrete floor. “Tay, it’s time for supper.”
“I think I’m going to have to clean this up first.”
“That’s not the stuff you said would kill us if it broke open, is it?”
“I don’t think so,” he said. “Not instantly.”

THAT SUMMER, Kenneth’s daughter from a previous marriage, Ashlee, then a college student, came to live with the Wilsons. “The explosions in the backyard were getting to be a bit much,” she told me, shortly before my own visit to the family’s home. “I could see everyone getting frustrated. They’d say something and Taylor would argue back, and his argument would be legitimate. He knows how to out-think you. I was saying, ‘You guys need to be parents. He’s ruling the roost.’ “

“What she didn’t understand,” Kenneth says, “is that we didn’t have a choice. Taylor doesn’t understand the meaning of ‘can’t.’ “

“And when he does,” Tiffany adds, “he doesn’t listen.”

“Looking back, I can see that,” Ashlee concedes. “I mean, you can tell Taylor that the world doesn’t revolve around him. But he doesn’t really get that. He’s not being selfish, it’s just that there’s so much going on in his head.”

Tiffany, for her part, could have done with less drama. She had just lost her sister, her only sibling. And her mother’s cancer had recently come out of remission. “Those were some tough times,” Taylor tells me one day, as he uses his mom’s gardening trowel to mix up a batch of yellowcake (the partially processed uranium that’s the stuff of WMD infamy) in a five-gallon bucket. “But as bad as it was with Grandma dying and all, that urine sure was something.”

Taylor looks sheepish. He knows this is weird. “After her PET scan she let me have a sample. It was so hot I had to keep it in a lead pig.

“The other thing is . . .” He pauses, unsure whether to continue but, being Taylor, unable to stop himself. “She had lung cancer, and she’d cough up little bits of tumor for me to dissect. Some people might think that’s gross, but I found it scientifically very interesting.”

What no one understood, at least not at first, was that as his grandmother was withering, Taylor was growing, moving beyond mere self-centeredness. The world that he saw revolving around him, the boy was coming to believe, was one that he could actually change.

The problem, as he saw it, was that isotopes for diagnosing and treating cancer are extremely short-lived. They need to be, so they can get in and kill the targeted tumors and then decay away quickly, sparing healthy cells. Delivering them safely and on time requires expensive handling—including, often, delivery by private jet. But what if there were a way to make those medical isotopes at or near the patients? How many more people could they reach, and how much earlier could they reach them? How many more people like his grandmother could be saved?

As Taylor stirred the toxic urine sample, holding the clicking Geiger counter over it, inspiration took hold. He peered into the swirling yellow center, and the answer shone up at him, bright as the sun. In fact, it was the sun—or, more precisely, nuclear fusion, the process that powers the sun, converting mass into energy according to Einstein’s E=mc². By harnessing fusion—the moment when atomic nuclei collide and fuse together, releasing energy in the process—Taylor could produce the high-energy neutrons he would need to irradiate materials for medical isotopes. Instead of creating those isotopes in multimillion-dollar cyclotrons and then rushing them to patients, what if he could build a fusion reactor small enough, cheap enough and safe enough to produce isotopes as needed, in every hospital in the world?
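The reaction that amateur fusors run on is deuterium-deuterium fusion, which proceeds down two roughly equally likely branches:

\[
\mathrm{D} + \mathrm{D} \rightarrow {}^{3}\mathrm{He}\,(0.82\ \mathrm{MeV}) + \mathrm{n}\,(2.45\ \mathrm{MeV})
\]
\[
\mathrm{D} + \mathrm{D} \rightarrow \mathrm{T}\,(1.01\ \mathrm{MeV}) + \mathrm{p}\,(3.02\ \mathrm{MeV})
\]

The 2.45-MeV neutrons from the first branch are the high-energy neutrons Taylor was after: carrying no electric charge, they can burrow into target materials and transmute them into medical isotopes.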

At that point, only 10 individuals had managed to build working fusion reactors. Taylor contacted one of them, Carl Willis, then a 26-year-old Ph.D. candidate living in Albuquerque, and the two hit it off. But Willis, like the other successful fusioneers, had an advanced degree and access to a high-tech lab and precision equipment. How could a middle-school kid living on the Texas/Arkansas border ever hope to make his own star?

Taylor Wilson in a hazmat suit and gas mask in his nuclear lab
The teen set up a nuclear laboratory in the family garage. Occasionally he uses it to process uranium ore into yellowcake. Bryce Duffy

When Taylor was 13, just after his grandmother’s doctor had given her a few weeks to live, Ashlee sent Tiffany and Kenneth an article about a new school in Reno. The Davidson Academy is a subsidized public school for the nation’s smartest and most motivated students, those who score in the top 99.9th percentile on standardized tests. The school, which allows students to pursue advanced research at the adjacent University of Nevada–Reno, was founded in 2006 by software entrepreneurs Janice and Robert Davidson. Since then, the Davidsons have championed the idea that the most underserved students in the country are those at the top.

On the family’s first trip to Reno, even before Taylor and Joey were accepted to the academy, Taylor made an appointment with Friedwardt Winterberg, a celebrated physicist at the University of Nevada who had studied under the Nobel Prize–winning quantum theorist Werner Heisenberg. When Taylor told Winterberg that he wanted to build a fusion reactor, also called a fusor, the notoriously cranky professor erupted: “You’re 13 years old! And you want to play with tens of thousands of electron volts and deadly x-rays?” Such a project would be far too technically challenging and hazardous, Winterberg insisted, even for most doctoral candidates. “First you must master calculus, the language of science,” he boomed. “After that,” Tiffany said, “we didn’t think it would go anywhere. Kenneth and I were a bit relieved.”

But Taylor still hadn’t learned the word “can’t.” In the fall, when he began at Davidson, he found the two advocates he needed, one in the office right next door to Winterberg’s. “He had a depth of understanding I’d never seen in someone that young,” says atomic physicist Ronald Phaneuf. “But he was telling me he wanted to build the reactor in his garage, and I’m thinking, ‘Oh my lord, we can’t let him do that.’ But maybe we can help him try to do it here.”

Phaneuf invited Taylor to sit in on his upper-division nuclear physics class and introduced him to technician Bill Brinsmead. Brinsmead, a Burning Man devotee who often rides a wheeled replica of the Little Boy bomb through the desert, was at first reluctant to get involved in this 13-year-old’s project. But as he and Phaneuf showed Taylor around the department’s equipment room, Brinsmead recalled his own boyhood, when he was bored and unchallenged and aching to build something really cool and difficult (like a laser, which he eventually did build) but dissuaded by most of the adults who might have helped.

Rummaging through storerooms crowded with a geeky abundance of electron microscopes and instrumentation modules, they came across a high-vacuum chamber made of thick-walled stainless steel, capable of withstanding extreme heat and negative pressure. “Think I could use that for my fusor?” Taylor asked Brinsmead. “I can’t think of a more worthy cause,” Brinsmead said.

NOW IT’S TIFFANY who drives, along a dirt road that wends across a vast, open mesa a few miles south of the runways shared by Albuquerque’s airport and Kirtland Air Force Base. Taylor has convinced her to bring him to New Mexico to spend a week with Carl Willis, whom Taylor describes as “my best nuke friend.” Cocking my ear toward the backseat, I catch snippets of Taylor and Willis’s conversation.

“The idea is to make a gamma-ray laser from stimulated decay of dipositronium.”

“I’m thinking about building a portable, beam-on-target neutron source.”

“Need some deuterated polyethylene?”

Willis, now 30, is tall and thin and much quieter than Taylor. When he’s interested in something, his face opens up with a blend of amusement and curiosity. When he’s uninterested, he slips into the far-off distractedness that’s common among the super-smart. Taylor and Willis like to get together a few times a year for what they call “nuclear tourism”—they visit research facilities, prospect for uranium, or run experiments.

Earlier in the week, we prospected for uranium in the desert and shopped for secondhand laboratory equipment in Los Alamos. The next day, we wandered through Bayo Canyon, where Manhattan Project engineers set off some of the largest dirty bombs in history in the course of perfecting Fat Man, which leveled Nagasaki.

Today we’re searching for remnants of a “broken arrow,” military lingo for a lost nuclear weapon. While researching declassified military reports, Taylor discovered that a Mark 17 hydrogen bomb, designed to be 700 times as powerful as the bomb detonated over Hiroshima, was accidentally dropped onto this mesa from a B-36 “Peacemaker” bomber in May 1957. For the U.S. military, it was an embarrassingly Strangelovian episode; the airman in the bomb bay narrowly avoided his own Slim Pickens moment when the bomb dropped from its gantry and smashed the plane’s doors open. Although its plutonium core hadn’t been inserted, the bomb’s “spark plug” of conventional explosives and radioactive material detonated on impact, creating a fireball and a massive crater. A grazing steer was the only reported casualty.

Tiffany parks the rented SUV among the mesquite, and we unload metal detectors and Geiger counters and fan out across the field. “This,” says Tiffany, smiling as she follows her son across the scrubland, “is how we spend our vacations.”

Taylor Wilson walking in front of a snowy Nevada mountain range while hunting for radioactive material
Taylor has one of the most extensive collections of radioactive material in the world, much of which he found himself. Bryce Duffy

Willis says that when Taylor first contacted him, he was struck by the 12-year-old’s focus and forwardness—and by the fact that he couldn’t plumb the depth of Taylor’s knowledge with a few difficult technical questions. After checking with Kenneth, Willis sent Taylor some papers on fusion reactors. Then Taylor began acquiring pieces for his new machine.

Through his first year at Davidson, Taylor spent his afternoons in a corner of Phaneuf’s lab that the professor had cleared out for him, designing the reactor, overcoming tricky technical issues, tracking down critical parts. Phaneuf helped him find a surplus high-voltage insulator at Lawrence Berkeley National Laboratory. Willis, then working at a company that builds particle accelerators, talked his boss into parting with an extremely expensive high-voltage power supply.

With Brinsmead and Phaneuf’s help, Taylor stretched himself, applying knowledge from more than 20 technical fields, including nuclear and plasma physics, chemistry, radiation metrology and electrical engineering. Slowly he began to test-assemble the reactor, troubleshooting pesky vacuum leaks, electrical problems and an intermittent plasma field.

Shortly after his 14th birthday, Taylor and Brinsmead loaded deuterium fuel into the machine, brought up the power, and confirmed the presence of neutrons. With that, Taylor became the 32nd individual on the planet to achieve a nuclear-fusion reaction. Yet what would set Taylor apart from the others was not the machine itself but what he decided to do with it.

While still developing his medical isotope application, Taylor came across a report about how the thousands of shipping containers entering the country daily had become the nation’s most vulnerable “soft belly,” the easiest entry point for weapons of mass destruction. Lying in bed one night, he hit on an idea: Why not use a fusion reactor to produce weapons-sniffing neutrons that could scan the contents of containers as they passed through ports? Over the next few weeks, he devised a concept for a drive-through device that would use a small reactor to bombard passing containers with neutrons. If weapons were inside, the neutrons would provoke telltale nuclear reactions: fissile material would give itself away with gamma radiation, and so would the nitrogen-rich compounds in conventional explosives. A detector, mounted opposite, would pick up the signature and alert the operator.

He entered the reactor, and the design for his bomb-sniffing application, into the Intel International Science and Engineering Fair. The Super Bowl of pre-college science events, the fair attracts 1,500 of the world’s most switched-on kids from some 50 countries. When Intel CEO Paul Otellini heard the buzz that a 14-year-old had built a working nuclear-fusion reactor, he went straight for Taylor’s exhibit. After a 20-minute conversation, Otellini was seen walking away, smiling and shaking his head in what looked like disbelief. Later, I would ask him what he was thinking. “All I could think was, ‘I am so glad that kid is on our side.’”

For the past three years, Taylor has dominated the international science fair, walking away with nine awards (including first place overall), overseas trips and more than $100,000 in prizes. After the Department of Homeland Security learned of Taylor’s design, he traveled to Washington for a meeting with the DHS’s Domestic Nuclear Detection Office, which invited Taylor to submit a grant proposal to develop the detector. Taylor also met with then–Under Secretary of Energy Kristina Johnson, who says the encounter left her “stunned.”

“I would say someone like him comes along maybe once in a generation,” Johnson says. “He’s not just smart; he’s cool and articulate. I think he may be the most amazing kid I’ve ever met.”

And yet Taylor’s story began much like David Hahn’s, with a brilliant, high-flying child hatching a crazy plan to build a nuclear reactor. Why did one journey end with hazmat teams and an eventual arrest, while the other continues to produce an array of prizes, patents, television appearances, and offers from college recruiters?

The answer is, mostly, support. Hahn, determined to achieve something extraordinary but discouraged by the adults in his life, pressed on without guidance or oversight—and with nearly catastrophic results. Taylor, just as determined but socially gifted, managed to gather into his orbit people who could help him achieve his dreams: the physics professor; the older nuclear prodigy; the eccentric technician; the entrepreneur couple who, instead of retiring, founded a school to nurture genius kids. There were several more, but none so significant as Tiffany and Kenneth, the parents who overcame their reflexive—and undeniably sensible—inclinations to keep their Icarus-like son on the ground. Instead they gave him the wings he sought and encouraged him to fly up to the sun and beyond, high enough to capture a star of his own.

After about an hour of searching across the mesa, our detectors begin to beep. We find bits of charred white plastic and chunks of aluminum—one of which is slightly radioactive. They are remnants of the lost hydrogen bomb. I uncover a broken flange with screws still attached, and Taylor digs up a hunk of lead. “Got a nice shard here,” Taylor yells, finding a gnarled piece of metal. He scans it with his detector. “Unfortunately, it’s not radioactive.”

“That’s the kind I like,” Tiffany says.

Willis picks up a large chunk of the bomb’s outer casing, still painted dull green, and calls Taylor over. “Wow, look at that warp profile!” Taylor says, easing his scintillation detector up to it. The instrument roars its approval. Willis, seeing Taylor ogling the treasure, presents it to him. Taylor is ecstatic. “It’s a field of dreams!” he yells. “This place is loaded!”

Suddenly we’re finding radioactive debris under the surface every five or six feet—even though the military claimed that the site was completely cleaned up. Taylor gets down on his hands and knees, digging, laughing, calling out his discoveries. Tiffany checks her watch. “Tay, we really gotta go or we’ll miss our flight.”

“I’m not even close to being done!” he says, still digging. “This is the best day of my life!” By the time we manage to get Taylor into the car, we’re running seriously late. “Tay,” Tiffany says, “what are we going to do with all this stuff?”

“For $50, you can check it as excess baggage,” Willis says. “You don’t label it, nobody knows what it is, and it won’t hurt anybody.” A few minutes later, we’re taping an all-too-flimsy box shut and loading it into the trunk. “Let’s see, we’ve got about 60 pounds of uranium, bomb fragments and radioactive shards,” Taylor says. “This thing would make a real good dirty bomb.”

In truth, the radiation levels are low enough that, without prolonged close-range exposure, the cargo poses little danger. Still, we stifle the jokes as we pull up to curbside check-in. “Think it will get through security?” Tiffany asks Taylor.

“There are no radiation detectors in airports,” Taylor says. “Except for one pilot project, and I can’t tell you which airport that’s at.”

As the skycap weighs the box, I scan the “prohibited items” sign. You can’t take paints, flammable materials or water on a commercial airplane. But sure enough, radioactive materials are not listed.

We land in Reno and make our way toward the baggage claim. “I hope that box held up,” Taylor says, as we approach the carousel. “And if it didn’t, I hope they give us back the radioactive goodies scattered all over the airplane.” Soon the box appears, adorned with a bright strip of tape and a note inside explaining that the package has been opened and inspected by the TSA. “They had no idea,” Taylor says, smiling, “what they were looking at.”

APART FROM THE fingerprint scanners at the door, Davidson Academy looks a lot like a typical high school. It’s only when the students open their mouths that you realize that this is an exceptional place, a sort of Hogwarts for brainiacs. As these math whizzes, musical prodigies and chess masters pass in the hallway, the banter flies in witty bursts. Inside humanities classes, discussions spin into intellectual duels.

Although everyone has some kind of advanced obsession, there’s no question that Taylor is a celebrity at the school, where the lobby walls are hung with framed newspaper clippings of his accomplishments. Taylor and I visit with the principal, the school’s founders and a few of Taylor’s friends. Then, after his calculus class, we head over to the university’s physics department, where we meet Phaneuf and Brinsmead.

Taylor’s reactor, adorned with yellow radiation-warning signs, dominates the far corner of Phaneuf’s lab. It looks elegant—a gleaming stainless-steel and glass chamber on top of a cylindrical trunk, connected to an array of sensors and feeder tubes. Peering through the small window into the reaction chamber, I can see the golf-ball-size grid of tungsten fingers that will cradle the plasma, the state of matter in which unbound electrons and ions mix freely with atoms and molecules.

“OK, y’all stand back,” Taylor says. We retreat behind a wall of leaden blocks as he shakes the hair out of his eyes and flips a switch. He turns a knob to bring the voltage up and adds in some gas. “This is exactly how me and Bill did it the first time,” he says. “But now we’ve got it running even better.”

Through a video monitor, I watch the tungsten wires beginning to glow, then brightening to a vivid orange. A blue cloud of plasma appears, rising and hovering, ghostlike, in the center of the reaction chamber. “When the wires disappear,” Phaneuf says, “that’s when you know you have a lethal radiation field.”

I watch the monitor while Taylor concentrates on the controls and gauges, especially the neutron detector they’ve dubbed Snoopy. “I’ve got it up to 25,000 volts now,” Taylor says. “I’m going to out-gas it a little and push it up.”

Willis’s power supply crackles. The reactor is entering “star mode.” Rays of plasma dart between gaps in the now-invisible grid as deuterium atoms, accelerated by the tremendous voltages, begin to collide. Brinsmead keeps his eyes glued to the neutron detector. “We’re getting neutrons,” he shouts. “It’s really jamming!”

Taylor cranks it up to 40,000 volts. “Whoa, look at Snoopy now!” Phaneuf says, grinning. Taylor nudges the power up to 50,000 volts, bringing the temperature of the plasma inside the core to an incomprehensible 580 million degrees—some 40 times as hot as the core of the sun. Brinsmead lets out a whoop as the neutron gauge tops out.
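
Those figures square with a quick back-of-the-envelope conversion, assuming each deuteron picks up the full 50,000 volts as kinetic energy and treating that energy as an effective temperature:

```latex
% Accelerating voltage -> effective ion temperature (rough equivalence)
E = qV = 5\times10^{4}\ \mathrm{eV}, \qquad
T \approx \frac{E}{k_B}
  = \frac{5\times10^{4}\ \mathrm{eV}}{8.6\times10^{-5}\ \mathrm{eV/K}}
  \approx 5.8\times10^{8}\ \mathrm{K}
```

With the sun’s core at roughly 15 million kelvin, the ratio comes out near 40, as quoted.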

“Snoopy’s pegged!” he yells, doing a little dance. On the video screen, purple sparks fly away from the plasma cloud, illuminating the wonder in the faces of Phaneuf and Brinsmead, who stand in a half-orbit around Taylor. In the glow of the boy’s creation, the men suddenly look years younger.

Taylor keeps his thin fingers on the dial as the atoms collide and fuse and throw off their energy, and the men take a step back, shaking their heads and wearing ear-to-ear grins.

“There it is,” Taylor says, his eyes locked on the machine. “The birth of a star.”

The physics of champagne’s fascinating fizz https://www.popsci.com/science/champagne-bubbles-fluid-dynamics/ Wed, 03 May 2023
Champagne being poured into two glasses.
Champagne bubbles are known for their neat lines that travel up the glass. Madeline Federle and Colin Sullivan

Effervescent experiments reveal the fluid dynamics behind bubbly beverages.


The pop of the cork, the fizz of the pour, and the clink of champagne flutes toasting are the ingredients for a celebration in many parts of the world. Champagne itself dates back to Ancient Rome, but the biggest advances in the modern form of the beverage came from a savvy trio of women from the Champagne region of northeastern France in the 19th century.

Now, scientists are adding another chapter to champagne’s bubbly history by discovering why the little effervescent bubbles of joy fizz upwards in a straight line.

[Related: Popping a champagne cork creates supersonic shockwaves.]

In a study published May 3 in the journal Physical Review Fluids, a team found that the stable bubble chains in champagne and other sparkling wines occur because the drink contains ingredients that act like soap-like compounds called surfactants. The surfactant-like molecules reduce the tension between the liquid and the gas bubbles, creating the smooth rise to the top.

Champagne bubbles form neat single file lines. CREDIT: Madeline Federle and Colin Sullivan.

In this new study, a team conducted both numerical and physical experiments on four carbonated drinks to investigate the stability of the bubble chains. Depending on the drink, the fluid mechanics are quite different. In champagne and sparkling wine, for example, gas bubbles appear continuously and rise rapidly to the top of the glass in single-file lines, like little ants—and they keep doing so for some time. In beer and soda, the bubbles veer off to the side and the bubble chains are not as stable.

To observe the bubble chains, the team poured glasses of carbonated beverages including Pellegrino sparkling water, Tecate beer, Charles de Cazanove champagne, and a Spanish-style sparkling wine called brut.

They then filled small rectangular plexiglass containers with liquid and pumped in gas to create different kinds of bubble chains. They gradually added surfactants or increased the bubble size. They found that the larger bubbles could become stable even without the surfactants. When they kept a fixed bubble size with only added surfactants, the chains could go from unstable to stable. 

Beer bubbles are not as tightly bound as champagne bubbles. CREDIT: Madeline Federle and Colin Sullivan.

The authors found that the stability of the chains is strongly affected by the size of the bubbles themselves. Chains of large bubbles have a wake similar to that of bubbles with contaminants, which leads to a smooth rise and stable chains.

“The theory is that in Champagne these contaminants that act as surfactants are the good stuff,” co-author and Brown University engineer Roberto Zenit said in a statement. “These protein molecules that give flavor and uniqueness to the liquid are what makes the bubble chains they produce stable.”

Since bubbles in drinks are always quite small, surfactants are the key ingredient behind the straight, stable chains we see in champagne. While beer also contains surfactant-like molecules, its bubbles may or may not rise in straight chains, depending on the type of beer. The bubbles in carbonated water like seltzer are always unstable, because there are no contaminants to steady the bubbles as they move through the wakes of the bubbles rising ahead of them.
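
For a feel for the scales involved, here is a minimal sketch using textbook low-Reynolds-number drag laws and assumed, champagne-like values (an illustration, not the study’s model): a surfactant-coated bubble behaves like a rigid sphere and follows Stokes’ law, while a perfectly clean bubble with a mobile surface follows the Hadamard–Rybczynski result and rises 1.5 times faster.

```python
# Assumed, champagne-like values (illustrative only)
g = 9.81      # gravity, m/s^2
rho = 1000.0  # liquid density, kg/m^3 (gas density neglected)
mu = 1.5e-3   # liquid viscosity, Pa*s
r = 1e-4      # bubble radius, m (0.1 mm)

# Stokes' law: a surfactant-coated bubble acts like a rigid sphere
v_rigid = (2.0 / 9.0) * rho * g * r**2 / mu

# Hadamard-Rybczynski: a clean bubble with a mobile surface rises 1.5x faster
v_clean = (1.0 / 3.0) * rho * g * r**2 / mu

print(f"surfactant-coated: {v_rigid * 100:.1f} cm/s")  # ~1.5 cm/s
print(f"clean surface:     {v_clean * 100:.1f} cm/s")  # ~2.2 cm/s
```

Both formulas hold only at low Reynolds number, so treat the outputs as order-of-magnitude figures; the study’s stability argument concerns the bubbles’ wakes, which this simple drag estimate doesn’t capture.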

[Related: This pretty blue fog only happens in warm champagne.]

“This wake, this velocity disturbance, causes the bubbles to be knocked out,” said Zenit. “Instead of having one line, the bubbles end up going up in more of a cone.”

The findings could improve our understanding of fluid mechanics, particularly of how clusters form in bubbly flows, which has economic and societal value. The global carbonated drink market was valued at a whopping $221.6 billion in 2020.

The technologies that use bubble-induced mixing, like aeration tanks at water treatment facilities and in wine making, could benefit greatly from better knowledge of how bubbles cluster, their origins, and how to predict their appearance. Understanding these flows may also help better explain ocean seeps, in which methane and carbon dioxide emerge from the bottom of the ocean.

“This is the type of research that I’ve been working on for years,” said Zenit. “Most people have never seen an ocean seep or an aeration tank but most of them have had a soda, a beer or a glass of Champagne. By talking about Champagne and beer, our master plan is to make people understand that fluid mechanics is important in their daily lives.”

This supermassive black hole sucks big time https://www.popsci.com/science/m87-black-hole-jets/ Wed, 26 Apr 2023
Closeup of the event horizon around M87, a supermassive black hole and the first black hole image
An image of the shadow of the supermassive black hole M87 (inset) and a powerful jet of matter and energy being projected away from it. R.-S. Lu (SHAO) and E. Ros (MPIfR), S. Dagnello (NRAO/AUI/NSF)

We knew M87, the first black hole to be seen by humans, was powerful. But not this powerful.


Black holes remain among the most enigmatic objects in the universe, but the past few years have seen astronomers develop techniques to directly image these powerful vacuums. And they keep getting better at it.

The Event Horizon Telescope (EHT) collaboration, the international team that took the first picture of a black hole in 2017, followed up that work with observations highlighting the black hole’s magnetic field. And just this month, another team of astronomers created an AI-sharpened version of the same image.

Now a new study published today in the journal Nature describes how the black hole, named after its galaxy, Messier 87 (M87), has a much larger ring of debris around it than the 2017 observations suggested.

Though black holes were long predicted by theory, for many decades astronomers could find only indirect evidence of them in the sky. For instance, they would look for signs of a black hole’s immense gravity influencing other objects, such as stars following especially tight or fast orbits around a massive but invisible partner.

But that all changed in 2017, when the EHT’s global network of radio telescopes captured the first visible evidence of a black hole, the supermassive black hole at the heart of a galaxy 55 million light-years away from Earth. When the image was released in 2019, the orange ring of fire around a central black void drew comparisons to “The Eye of Sauron” from Lord of the Rings.

EHT would go on to directly image Sagittarius A*, the supermassive black hole at the heart of the Milky Way galaxy, releasing another image of a fiery orange doughnut around a black center in May 2022.

Such supermassive black holes, which are often billions of times more massive than our sun—M87 is estimated at 6.5 billion solar masses and Sagittarius A* at 4 million—are thought to exist at the centers of most galaxies. The intense gravity of all that mass pulls on any gas, dust, and other excess material that comes too close, accelerating it to incredible speeds as it falls toward the lip of the black hole, known as the event horizon.
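
For a sense of scale, the radius of a black hole’s event horizon grows linearly with its mass. Here is a minimal sketch of the standard Schwarzschild-radius formula, r = 2GM/c², with textbook constants (an illustration, not part of the EHT analysis):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

def schwarzschild_radius_au(solar_masses: float) -> float:
    """Horizon radius of a non-rotating black hole, in astronomical units."""
    mass_kg = solar_masses * M_SUN
    return 2 * G * mass_kg / c**2 / AU

print(f"M87*:   {schwarzschild_radius_au(6.5e9):6.1f} AU")  # ~128 AU
print(f"Sgr A*: {schwarzschild_radius_au(4.0e6):6.3f} AU")  # ~0.08 AU
```

M87’s horizon alone would span roughly four times Neptune’s orbit; the Milky Way’s central black hole would fit well inside Mercury’s.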

[Related: What would happen if you fell into a black hole?]

Like water circling a drain, the falling material spirals and is condensed into a flat ring known as an accretion disk. But unlike water around a drain, the incredible speed and pressures in the accretion disk heat the infalling material to the point where it emits powerful X-ray radiation. The disk propels jets of radiation and gas out and away from the black hole at nearly the speed of light.

The EHT team already knew that M87 produced powerful jets. But the second set of results shows that the ring-like structure of infalling material around the black hole is 50 percent larger than originally estimated.

“This is the first image where we are able to pin down where the ring is, relative to the powerful jet escaping out of the central black hole,” Kazunori Akiyama, an MIT Haystack Observatory research scientist and EHT collaboration member, said in a statement. “Now we can start to address questions such as how particles are accelerated and heated, and many other mysteries around the black hole, more deeply.”

The new observations were made in 2018 using the Global Millimeter VLBI Array, a network of a dozen radio telescopes running east to west across Europe and the US. To get the resolution necessary for more accurate measurements, however, the researchers also included observatories in the North and South: the Greenland Telescope along with the Atacama Large Millimetre/submillimetre Array, which consists of 66 radio telescopes in the Chilean high desert.

“Having these two telescopes [as part of] the global array resulted in a boost in angular resolution by a factor of four in the north-south direction,” Lynn Matthews, an EHT collaboration member at the MIT Haystack Observatory, said in a media statement. “This greatly improves the level of detail we can see. And in this case, a consequence was a dramatic leap in our understanding of the physics operating near the black hole at the center of the M87 galaxy.”

[Related: Construction starts on the world’s biggest radio telescope]

The more recent study focused on radio waves around 3 millimeters long, as opposed to the 1.3-millimeter waves used in the original 2017 observations. That may have brought the larger, more distant ring structure into focus in a way the 2017 observations could not.

“That longer wavelength is usually associated with lower energies of the emitting electrons,” says Harvard astrophysicist Avi Loeb, who was not involved with the new study. “It’s possible that you get brighter emission at longer wavelengths farther out from the black hole.”

Going forward, astronomers plan to observe the black hole at other wavelengths to highlight different parts and layers of its structure, and better understand how such cosmic behemoths form at the hearts of galaxies and contribute to galactic evolution.

Just how supermassive black holes generate jets is “not a well-understood process,” Loeb says. “This is the first time we have observations of what may be the base of the jet. It can be used by theoretical physicists to model how the M87 jet is being launched.” 

He adds that he would like to see future observations capture the sequence of events in the accretion disk. That is, to essentially make a movie out of what’s happening at M87.

“There might be a hotspot that we can track that is moving either around or moving towards the jet,” Loeb says, which in turn, could explain how a beast like a black hole gets fed.

Alien civilizations could send us messages by 2029 https://www.popsci.com/science/aliens-contact-earth-2029/ Tue, 25 Apr 2023
NASA Deep Space Network radiotelescope sending radio waves to spacecraft, stars, and maybe aliens
NASA's Deep Space Network helps Earth make long-distance calls. NASA

NASA sends powerful radio transmissions into space. Who's listening, and when will they respond?


Humans have used radio waves to communicate across Earth for more than 100 years. Those waves also leak out into space, a fingerprint of our presence propagating through the cosmos. In more recent years, humans have also sent out a stronger signal beyond our planet: communications with our most distant probes, like the famous Voyager spacecraft.

Scientists recently traced the paths of these powerful radio transmissions from Earth to multiple far-away spacecraft and determined which stars—along with any planets with possible alien life around them—are best positioned to intercept those messages. 

The research team created a list of stars that will encounter Earth’s signals within the next century and found that alien civilizations (if they’re out there) could send a return message as soon as 2029. Their results were published on March 20 in the journal Publications of the Astronomical Society of the Pacific.

“This is a famous idea from Carl Sagan, who used it as a plot theme in the movie Contact,” explains Howard Isaacson, a University of California, Berkeley astronomer and co-author of the new work. 

[Related: UFO research is stigmatized. NASA wants to change that.]

However, it’s worth taking any study involving extraterrestrial life with a grain of salt. Kaitlin Rasmussen, an astrobiologist at the University of Washington not affiliated with the paper, calls this study “an interesting exercise, but unlikely to yield results.” The results, in this case, would be aliens contacting Earth within a certain timeframe.

As radio signals travel through space, they spread out and become weaker and harder to detect. Aliens parked around a nearby star probably won’t notice the faint leakage from TVs and other small devices. However, the commands we send to trailblazing probes at the edge of the solar system—Voyager 1, Voyager 2, Pioneer 10, Pioneer 11, and New Horizons—require a much more focused and powerful broadcast from NASA’s Deep Space Network (DSN), a global array of radio dishes designed for space communications.

NASA Deep Space Network radiotelescopes on a grassy hill
The DSN can receive signals if it’s pointed in the right direction. NASA

The DSN signals don’t magically stop at the spacecraft they’re targeting: They continue into interstellar space where they eventually reach other stars. But electromagnetic waves like radio transmissions and light can only travel so fast—that’s why we use light-years to measure distances across the universe. The researchers used this law of physics to estimate how long it will take for DSN signals to reach nearby stars, and for alien life to return the message. 

The process revealed several insights. For example, according to their calculations, a signal sent to Pioneer 10 reached a dead star known as a white dwarf around 27 light-years away in 2002. The study team estimates a return message from any alien life near this dead star could reach us as soon as 2029, but no earlier. 

[Related: Nothing can break the speed of light]

More opportunities for return messages will pop up in the next decade. Signals sent to Voyager 2 around 1980 and 1983 reached two stars in 2007: one that’s 26 light-years away and a brown dwarf that’s 24 light-years away, respectively. If aliens sent a message right back from either, it could reach Earth in the early 2030s.
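
The arithmetic behind those dates is simple light-travel bookkeeping: a reply dispatched the moment our signal arrives needs the same number of years to come back. A minimal sketch (the distances are the study’s round numbers, not precise catalog values):

```python
def earliest_reply_year(arrival_year: int, distance_ly: float) -> float:
    """A reply sent as soon as our signal arrives takes another
    distance_ly years to travel back to Earth."""
    return arrival_year + distance_ly

# Pioneer 10 command signal: reached a white dwarf ~27 ly away in 2002
print(earliest_reply_year(2002, 27))  # -> 2029

# Voyager 2 signals: reached two stars in 2007, 26 ly and 24 ly away
print(earliest_reply_year(2007, 26))  # -> 2033
print(earliest_reply_year(2007, 24))  # -> 2031
```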

This work “gives Search for Extraterrestrial Intelligence researchers a more narrow group of stars to focus on,” says lead author Reilly Derrick, a University of California, Los Angeles engineering student.  

Derrick and Isaacson propose that radio astronomers could use their star lists to listen for return messages at predetermined times. For example, in 2029 they may want to point some of Earth’s major radio telescopes towards the white dwarf that received Pioneer 10’s message.

But other astronomers are skeptical. “If a response were to be sent, our ability to detect it would depend on many factors,” says Macy Huston, an astronomer at Penn State not involved in the new study. These factors include “how long or often we monitor the star for a response, and how long or often the return signal is transmitted.”

There are still many unknowns when considering alien life. In particular, astronomers aren’t certain the stars in this study even have planets—although based on other exoplanet studies, it’s likely that at least a fraction of them do. The signals from the DSN are also still incredibly weak at such large distances, so it’s unclear how plausible it is for other stars to detect our transmissions.

“Our puny and infrequent transmissions are unlikely to yield a detection of humanity by extraterrestrials,” says Jean-Luc Margot, a University of California, Los Angeles radio astronomer who was not involved in the recent paper. He explains that our radio transmissions have only reached one-millionth of the volume of the Milky Way. 

“The probability that another civilization resides in this tiny bubble is extraordinarily small unless there are millions of civilizations in the Milky Way,” he says. But if they’re out there, there might be a time and place to capture the evidence.

Ancient Maya masons had a smart way to make plaster stronger https://www.popsci.com/science/ancient-maya-plaster/ Wed, 19 Apr 2023
Ancient Maya idol in Copán, Guatemala
The idols, pyramids, and dwellings in the ancient Maya city of Copán have lasted longer than a thousand years. DEA/V. Giannella/Contributor via Getty Images

Up close, the Mayas' timeless recipe from Copán looks similar to mother-of-pearl.


An ancient Maya city might seem an unlikely place for people to be experimenting with proprietary chemicals. But scientists think that’s exactly what happened at Copán, an archaeological complex nestled in a valley in the mountainous rainforests of what is now western Honduras.

By historians’ reckoning, Copán’s golden age began in 427 CE, when a king named Yax Kʼukʼ Moʼ came to the valley from the northwest. His dynasty built one of the jewels of the Maya world, but abandoned it by the 10th century, leaving its courts and plazas to the mercy of the jungle. More than 1,000 years later, Copán’s buildings have held up remarkably well, despite baking in the tropical sun and humidity for so long.

The secret may lie in the plaster the Maya used to coat Copán’s walls and ceilings. New research suggests that sap from the bark of local trees, which Maya craftspeople mixed into their plaster, helped reinforce its structures. Whether by accident or by design, those Maya builders created a material not unlike mother-of-pearl, a natural component of mollusc shells.

“We finally unveiled the secret of ancient Maya masons,” says Carlos Rodríguez Navarro, a mineralogist at the University of Granada in Spain and the paper’s first author. Rodríguez Navarro and his colleagues published their work in the journal Science Advances today.

[Related: Scientists may have solved an old Puebloan mystery by strapping giant logs to their foreheads]

Plaster makers followed a fairly straightforward recipe. Start with carbonate rock, such as limestone; bake it at over 1,000 degrees Fahrenheit; slake the resulting quicklime with water; then set the concoction out to react with carbon dioxide from the air. The final product is what builders call lime plaster or lime mortar.
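
In chemical terms, that recipe is the classic lime cycle (standard chemistry, not specific to the new paper):

```latex
\mathrm{CaCO_3} \xrightarrow{\;\Delta\;} \mathrm{CaO} + \mathrm{CO_2}
  \quad \text{(calcination: baking the limestone)} \\
\mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2}
  \quad \text{(slaking the quicklime)} \\
\mathrm{Ca(OH)_2} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
  \quad \text{(carbonation: the plaster sets)}
```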

Civilizations across the world discovered this process, often independently. For example, Mesoamericans in Mexico and Central America learned how to do it by around 1,100 BCE. While ancient people found it useful for covering surfaces or holding together bricks, this basic lime plaster isn’t especially durable by modern standards.

Ancient Maya pyramid in Copán, Guatemala, in aerial photo
Copán, with its temples, squares, terraces and other characteristics, is an excellent representation of Classic Mayan civilization. Xin Yuewei/Xinhua via Getty Images

But, just as a dish might differ from town to town, lime plaster recipes varied from place to place. “Some of them perform better than others,” says Admir Masic, a materials scientist at the Massachusetts Institute of Technology who wasn’t part of the study. Maya lime plaster, experts agree, is one of the best.

Rodríguez Navarro and his colleagues wanted to learn why. They found their first clue when they examined brick-sized plaster chunks from Copán’s walls and floors with X-rays and electron microscopes. Inside some pieces, they found traces of organic materials like carbohydrates. 

That made them curious, Rodríguez Navarro says, because it seemed to confirm past archaeological and written records suggesting that ancient Maya masons mixed plant matter into their plaster. The other standard ingredients (lime and water) wouldn’t account for complex carbon chains.

To follow this lead, the authors decided to make the historic plaster themselves. They consulted living masons and Maya descendants near Copán. The locals referred them to the chukum and jiote trees that grow in the surrounding forests—specifically, the sap that came from the trees’ bark.

Jiote or gumbo-limbo tree in the Florida Everglades
Bursera simaruba, sometimes locally known as the jiote tree. Deposit Photos

The authors tested the sap’s reaction when mixed into the plaster. Not only did it toughen the material, it also made the plaster insoluble in water, which partly explains how Copán survived the local climate so well.

The microscopic structure of the plant-enhanced plaster is similar to nacre or mother-of-pearl: the iridescent substance that some molluscs create to coat their shells. We don’t fully understand how molluscs make nacre, but we know that it consists of crystal plates sandwiching elastic proteins. The combination toughens the sea creatures’ exteriors and reinforces them against weathering from waves.

A close study of the ancient plaster samples and the modern analog revealed that they also had layers of rocky calcite plates and organic sappy material, giving the materials the same kind of resilience as nacre. “They were able to reproduce what living organisms do,” says Rodríguez Navarro. 

“This is really exciting,” says Masic. “It looks like it is improving properties [of regular plaster].”

Now, Rodríguez Navarro and his colleagues are trying to answer another question: Could other civilizations that depended on masonry—from Iberia to Persia to China—have stumbled upon the same secret? We know, for instance, that Chinese lime-plaster-makers mixed in a sticky rice soup for added strength.

Plaster isn’t the only age-old material that scientists have reconstructed. Masic and his colleagues found that ancient Roman concrete has the ability to “self-heal.” More than two millennia ago, builders in the empire may have added quicklime to a rocky aggregate, creating microscopic structures within the material that help fill in pores and cracks when water seeps in.

[Related: Ancient architecture might be key to creating climate-resilient buildings]

If that property sounds useful, modern engineers think so too. There exists a blossoming field devoted to studying—and recreating—materials of the past. Standing structures from archaeological sites already prove they can withstand the test of time. As a bonus, ancient people tended to work with more sustainable methods and use less fuel than their industrial counterparts.

“The Maya paper…is another great example of this [scientific] approach,” Masic says.

Not that Maya plaster will replace the concrete that’s ubiquitous in the modern world—but scientists say it could have its uses in preserving and upgrading the masonry found in pre-industrial buildings. A touch of plant sap could add centuries to a structure’s lifespan.

An Einstein-backed method could help us find smaller exoplanets than ever before https://www.popsci.com/science/exoplanets-gravitational-microlensing/ Tue, 18 Apr 2023
Exoplanet KMT-2021-BLG-1898L b is a gas giant that looks like Jupiter but orbits a separate star. Illustration.
KMTNet astronomers identified exoplanet KMT-2021-BLG-1898L b in 2022. An artist's concept of the gas giant shows it completing a 3.8-year-long orbit around its star in a solar system far from ours. NASA/KMTNet

Astronomy is entering the golden age of exoplanet discoveries.


Since 1995 scientists have found more than 5,000 exoplanets—other worlds beyond our solar system. But while space researchers have gotten very good at discovering big planets, smaller ones have evaded detection.

However, a novel astronomy detection technique known as microlensing is starting to fill in the gaps. Experts with the Korea Microlensing Telescope Network (KMTNet) recently used this method to locate three new exoplanets with about the same masses as Jupiter and Saturn. They announced these findings in the journal Astronomy & Astrophysics on April 11.

How does microlensing work?

Most exoplanets have been found through the transit method. This is when scientists use observatories like the Kepler Space Telescope and the James Webb Space Telescope to look at dips in the amount of light coming from a star. 
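
The dip the transit method hunts for is tiny, shrinking with the square of the planet-to-star radius ratio, which is part of why small worlds are so easy to miss. A textbook estimate (not from the KMTNet paper):

```latex
\frac{\Delta F}{F} \approx \left(\frac{R_p}{R_\star}\right)^{2}, \qquad
\text{Jupiter transiting the sun: } (0.10)^{2} \approx 1\%, \qquad
\text{Earth: } (0.009)^{2} \approx 0.008\%
```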

Meanwhile, gravitational microlensing (usually just called microlensing) involves searching for increases in brightness in deep space. These brilliant flashes are from a planet and its star bending the light of a more distant star, magnifying it according to Einstein’s rules for relativity. You may have heard of gravitational lensing for galaxies, which pretty much relies on the same physics, but on a much bigger scale.
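
For reference, the point-lens magnification that microlensing surveys fit to their light curves is a textbook result (not specific to the KMTNet analysis):

```latex
A(u) = \frac{u^{2} + 2}{u\sqrt{u^{2} + 4}}, \qquad
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_S - D_L}{D_S\,D_L}}
```

Here u is the lens–source separation in units of the Einstein radius θ_E, M is the lens mass, and D_L and D_S are the distances to the lens and the background source. A planet orbiting the lens star adds a brief spike or dip on top of this otherwise smooth curve.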

Credit: NASA Scientific Visualization Studio

The new discoveries were particularly notable because they were found in partial data, where astronomers observed only half of each event.

“Microlensing events are sort of like supernovae in that we only get one chance to observe them,” says Samson Johnson, an astronomer at the NASA Jet Propulsion Lab who was not affiliated with the study. 

Because astronomers only have one chance and don’t always know when events will happen, they sometimes miss parts of the show. “This is sort of like making a cake with only half of the recipe,” adds Johnson.

[Related: Sorry, Star Trek fans, the real planet Vulcan doesn’t exist]

The three new planets have long serial-number-like strings of letters and numbers for names: KMT-2021-BLG-2010Lb, KMT-2022-BLG-0371Lb, and KMT-2022-BLG-1013Lb. Each of these worlds revolves around a different star. They weigh as much as Jupiter, Saturn, and a little less than Saturn, respectively. 

Even though the researchers only observed part of the microlensing events for each of these planets, they were able to confidently rule out other scenarios that could explain the signals. This work “does show that even with incomplete data, we can learn interesting things about these planets,” says Scott Gaudi, an Ohio State University astronomer who was not involved in the published paper.

The exoplanet search continues

Microlensing is “highly complementary” to other exoplanet-hunting techniques, says Jennifer Yee, a co-author of the new study and researcher at The Center for Astrophysics | Harvard & Smithsonian. It can scope out planets that current technologies can’t, including worlds as small as Jupiter’s moon Ganymede or even a few times the mass of Earth’s moon, according to Gaudi.

The strength of microlensing is that “it’s a demographics machine, so you can detect lots of planets,” says Gaudi. This ability to detect planets of all sizes is crucial for astronomers as they complete their sweeping exoplanet census to determine the most common type of planet and the uniqueness of our own solar system. 

Credit: NASA Scientific Visualization Studio

Astronomers are honing their microlensing skills with new exoplanet discoveries like those from KMTNet, ensuring that they know how to handle this kind of data before new space telescopes come online in the next few years. For example, microlensing will be a large part of the Roman Space Telescope’s planned mission when it launches mid-decade.

“We’ll increase the number of planets we know by several thousand with Roman, maybe even more,” says Gaudi. “We went from Kepler being the star of the show to TESS [NASA’s Transiting Exoplanet Survey Satellite] being the star of the show … For its time period, Roman [and microlensing] will be the star of the show.”

How the Tonga eruption rang Earth ‘like a bell’ https://www.popsci.com/science/tonga-volcano-tsunami-simulation/ Fri, 14 Apr 2023
Satellite image of the powerful eruption.
Earth-observing satellites captured the powerful eruption. NASA Earth Observatory

A detailed simulation of underwater shockwaves changes what we know about the Hunga Tonga-Hunga Ha’apai eruption.


When the Hunga Tonga–Hunga Haʻapai volcano in Tonga exploded on January 15, 2022—setting off a sonic boom heard as far north as Alaska—scientists instantly knew that they were witnessing history. 

“In the geophysical record, this is the biggest natural explosion ever recorded,” says Ricky Garza-Giron, a geophysicist at the University of California at Santa Cruz. 

It also spawned a tsunami that raced across the Pacific Ocean, killing two people in Peru. Meanwhile, the disaster devastated Tonga and caused four deaths in the archipelago. Tragic as those losses were, experts would have expected an event of this magnitude to cause far more casualties. So why didn’t it?

Certainly, the country’s disaster preparations deserve much of the credit. But the nature of the eruption itself, and the way the tsunami it spawned spread across Tonga’s islands, also saved the country from a worse outcome, according to research published today in the journal Science Advances. By combining field observations with drone and satellite data, the study team was able to recreate the event through a simulation.

2022 explosion from Hunga-Tonga volcano captured by satellites
Satellites captured the explosive eruption of the Hunga Tonga-Hunga Ha’apai volcano. National Environmental Satellite Data and Information Service

It’s yet another way that scientists have studied how this eruption shook Tonga and the whole world. For a few hours, the volcano’s ash plume bathed the country and its surrounding waters with more lightning than the rest of the Earth combined. The eruption spewed enough water vapor into the sky to boost the amount in the stratosphere by around 10 percent.

[Related: Tonga’s historic volcanic eruption could help predict when tsunamis strike land]

The eruption shot shockwaves into the ground, water, and air. When Garza-Giron and his colleagues measured those waves, they found that the eruption released an order of magnitude more energy than the 1980 eruption of Mount St. Helens.

“It literally rang the Earth like a bell,” says Sam Purkis, a geoscientist at the University of Miami in Florida and the Khaled bin Sultan Living Oceans Foundation. Purkis is the first author of the new paper. 

The aim of the simulation is to present a possible course of events. Purkis and his colleagues began by establishing a timeline. Scientists agree that the volcano erupted in a sequence of multiple bursts, but they don’t agree on when or how many. Corroborating witness statements with measurements from tide gauges, the study team suggests a quintet of blasts, each steadily increasing in strength up to a climactic fifth blast measuring 15 megatons, equivalent to a hydrogen bomb.

Credit: Steven N. Ward Institute of Geophysics and Planetary Physics, University of California Santa Cruz, U.S.A.

Then, the authors simulated what those blasts may have done to the ocean—and how fearsome the waves they spawned were as they battered Tonga’s other islands. The simulation suggests the isle of Tofua, about 55 miles northeast of the eruption, may have fared worst, bearing waves more than 100 feet tall.

But there’s a saving grace: Tofua is uninhabited. The simulation also helps explain why Tonga’s capital and largest city, Nuku’alofa, was able to escape the brunt of the tsunami. It sits just 40 miles south of the eruption, and seemingly experienced much shallower waves. 

[Related: Tonga is fighting multiple disasters after a historic volcanic eruption]

The study team thinks geography is partly responsible. Tofua, a volcanic caldera, sits in deep waters and has sharp, mountainous coasts that offer no protection from an incoming tsunami. Meanwhile, Nuku’alofa is surrounded by shallower waters and a lagoon, giving a tsunami less water to displace. Coral reefs may have also helped protect the city from the tsunami. 
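
The depth argument follows from the standard long-wave (shallow-water) speed relation, c = √(g·h): a tsunami races across deep water and slows dramatically as the bottom rises, where reefs and shallow shelves can sap its energy before it reaches shore. A minimal sketch with assumed depths (illustrative, not the study’s actual bathymetry):

```python
import math

def tsunami_speed_kmh(depth_m: float) -> float:
    """Long-wave (shallow-water) approximation: c = sqrt(g * h)."""
    return math.sqrt(9.81 * depth_m) * 3.6  # convert m/s to km/h

for label, depth_m in [("open ocean", 4000),
                       ("island shelf", 100),
                       ("lagoon / reef flat", 10)]:
    print(f"{label:>18} ({depth_m:>4} m deep): {tsunami_speed_kmh(depth_m):5.0f} km/h")
```

Output runs from roughly 713 km/h in the open ocean down to about 36 km/h over a 10-meter-deep lagoon.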

Researchers believed that reefs could cushion tsunamis, Purkis says, but they didn’t have the real-world data to show it. “You don’t have a real-world case study where you have waves which are tens of meters high hitting reefs,” says Purkis.

We do know of volcanic eruptions more violent than Hunga Tonga–Hunga Haʻapai: for instance, Tambora in 1815 (which famously caused a “Year Without a Summer”) and Krakatau in 1883. But those occurred before the 1960s, when geophysicists started deploying the worldwide net of sensors and satellites they can use today.

Ultimately, the study authors write that this eruption resulted in a “lucky escape.” It occurred under the most peculiar circumstances: At the time of its eruption, Tonga had shut off its borders due to Covid-19, reducing the number of overseas tourists visiting the islands. Scientists credit this as another reason for the low death toll. But the same closed borders meant scientists had to wait to get data.

Ash cloud from Hunga-Tonga volcano over the Pacific ocean seen from space
Ash over the South Pacific could be seen from space. NASA

That’s part of why this paper came out 15 months after the eruption. Other scientists had been able to simulate the tsunami before, but Purkis and his colleagues bolstered theirs with data from the ground. Not only did this help them reconstruct a timeline, it also helped them to corroborate their simulation with measurements from more than 100 sites along Tonga’s coasts. 

The study team argues that the eruption serves as a “natural laboratory” for the Earth’s activity. Understanding this tsunami can help humans plan how to stay safe from them. There are many other volcanoes like Hunga Tonga–Hunga Haʻapai, and volcanoes located underwater can devastate coastal communities if they erupt at the wrong time.

Garza-Giron is excited about the possibility of comparing the new study’s results with prior studies, such as his own, about seismic activity—in addition to other data sources, like the sounds of the ocean—to create a more complete picture of what happened that day.

“It’s not very often that we can see the Earth acting as a whole system, where the atmosphere, the ocean, and the solid earth are definitely interacting,” says Garza-Giron. “That, to me, was one of the most fascinating things about this eruption.”

You saw the first image of a black hole. Now see it better with AI. https://www.popsci.com/science/first-black-hole-image-ai/ Fri, 14 Apr 2023
M87 black hole Event Horizon Telescope image sharpened by AI with PRIMO algorithm. The glowing event horizon is now clearer and thinner and the black hole at the center darker.
AI, enhance. Medeiros et al., 2023

Mix general relativity with machine learning, and an astronomical donut starts to look more like a Cheerio.


Astronomy sheds light on the far-off, intangible phenomena that shape our universe and everything outside it. Artificial intelligence sifts through tiny, mundane details to help us process important patterns. Put the two together, and you can tackle almost any scientific conundrum—like determining the shape of a black hole.

The Event Horizon Telescope (a network of eight radio observatories placed strategically around the globe) originally captured the first image of a black hole in 2017 in the Messier 87 galaxy. After processing and compressing more than five terabytes of data, the team released a hazy shot in 2019, prompting people to joke that it was actually a fiery donut or a screenshot from Lord of the Rings. At the time, researchers conceded that the image could be improved with more fine-tuned observations or algorithms. 

[Related: How AI can make galactic telescope images ‘sharper’]

In a study published on April 13 in The Astrophysical Journal Letters, physicists from four US institutions used AI to sharpen the iconic image. This group fed the observatories’ raw interferometry data into an algorithm to produce a sharper, more accurate depiction of the black hole. The AI they used, called PRIMO, is an automated analysis tool that reconstructs visual data at higher resolutions to study gravity, the human genome, and more. In this case, the authors trained the neural network with simulations of accreting black holes—the mass-sucking process that produces thermal energy and radiation. They also relied on a mathematical technique called the Fourier transform to turn the frequency signals the telescopes record into an image the eye can see.
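
The gap PRIMO fills can be caricatured in a few lines of numpy: an interferometer samples points in the image’s Fourier plane, and a sparse network leaves most of that plane unmeasured. The sketch below illustrates the general idea of sparse Fourier sampling; it is not the EHT pipeline or the actual PRIMO code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sky": a bright ring on a 64x64 grid, standing in for the black hole image
y, x = np.mgrid[-32:32, -32:32]
radius = np.hypot(x, y)
ring = ((radius > 8) & (radius < 12)).astype(float)

# An interferometer measures points in the image's Fourier (u,v) plane
visibilities = np.fft.fft2(ring)

# A sparse telescope network samples only a small fraction of that plane
mask = rng.random(visibilities.shape) < 0.15
dirty = np.fft.ifft2(np.where(mask, visibilities, 0)).real

# "dirty" is the blurry, artifact-ridden image raw coverage would give;
# reconstruction algorithms must fill in the unmeasured components.
print(f"measured {mask.mean():.0%} of the Fourier components")
```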

Their edited image shows a thinner “event horizon,” the glowing circle formed where light and accreted gas cross into the gravitational sink. This could have “important implications for measuring the mass of the central black hole in M87 based on the EHT images,” the paper states.

M87 black hole original image next to M87 black hole sharpened image to show AI difference
The original image of M87 from 2019 (left) compared to the PRIMO reconstruction (middle) and the PRIMO reconstruction “blurred” to EHT’s resolution (right). The blurring occurs such that the image can match the resolution of EHT and the algorithm doesn’t add resolution when it is filling in gaps that the EHT would not be able to see with its true resolution. Medeiros et al., 2023

One thing’s for sure: The subject at the center of the shot is extremely dark, potent, and powerful. It’s even more clearly defined in the AI-enhanced version, backing up the claim that the supermassive black hole is up to 6.5 billion times heftier than our sun. Compare that to Sagittarius A*—the black hole that was recently captured in the Milky Way—which is estimated at 4 million times the sun’s mass.

Sagittarius A* could be another PRIMO target, Lia Medeiros, lead study author and astrophysicist at the Institute for Advanced Study, told the Associated Press. But the group is not in a rush to move on from the more distant black hole located 55 million light-years away in Messier 87. “It feels like we’re really seeing it for the first time,” she added in the AP interview. The image was a feat of astronomy, and now, people can gaze on it with more clarity.

Quantum computers can’t teleport things—yet https://www.popsci.com/technology/wormhole-teleportation-quantum-computer-simulation/ Fri, 07 Apr 2023
Google Sycamore processor for quantum computer hanging from a server room with gold and blue wires
Google's Sycamore quantum computer processor was recently at the center of a hotly debated wormhole simulation. Rocco Ceselin/Google

It's almost impossible to simulate a good wormhole without more qubits.


Last November, a group of physicists claimed they’d simulated a wormhole for the first time inside Google’s Sycamore quantum computer. The researchers tossed information into one batch of simulated particles and said they watched that information emerge in a second, separate batch of circuits.

It was a bold claim. Wormholes—tunnels through space-time—are a purely theoretical consequence of gravity that Albert Einstein helped popularize. It would be a remarkable feat to create even a wormhole facsimile with quantum mechanics, an entirely different branch of physics that has long been at odds with gravity.

And indeed, three months later, a different group of physicists argued that the results could be explained through alternative, more mundane means. In response, the team behind the Sycamore project doubled down on their results.

Their case highlights a tantalizing dilemma. Successfully simulating a wormhole in a quantum computer could be a boon for solving an old physics conundrum, but so far, quantum hardware hasn’t been powerful or reliable enough to do the complex math. The machines are improving quickly, though.

[Related: Journey to the center of a quantum computer]

The root of the challenge lies in the difference of mathematical systems. “Classical” computers, such as the device you’re using to read this article, store their data and do their computations with “bits,” typically made from silicon. These bits are binary: They can be either zero or one, nothing else. 

For the vast majority of human tasks, that’s no problem. But binary isn’t ideal for crunching the arcana of quantum mechanics—the bizarre rules that guide the universe at the smallest scales—because the system essentially operates in a completely different form of math.

Enter a quantum computer, which swaps out the silicon bits for “qubits” that adhere to quantum mechanics. A qubit can be zero, one—or, due to quantum trickery, some combination of zero and one. Qubits can make certain calculations far more manageable. In 2019, Google operators used Sycamore’s qubits to complete a task in minutes that they said would have taken a classical computer 10,000 years.
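
A minimal sketch, in Python, of the distinction drawn above. The amplitudes below are one arbitrary example of a superposition; nothing here is specific to Sycamore.

```python
# A classical bit is 0 or 1. A qubit holds two complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1; measurement returns 0 or 1 with those probabilities.
import numpy as np

bit = 0                                        # classical: one definite value

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)  # an equal superposition (example)
qubit = np.array([alpha, beta])

assert np.isclose(np.linalg.norm(qubit), 1.0)  # quantum states are normalized
p0, p1 = np.abs(qubit) ** 2
print(p0, p1)                                  # ~0.5 each: either outcome equally likely
```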

There are several ways of simulating wormholes with equations that a computer can solve. The 2022 paper’s researchers used something called the Sachdev–Ye–Kitaev (SYK) model. A classical computer can crunch the SYK model, but only very inefficiently. Not only does the model involve particles interacting at a distance, it also features a good deal of randomness, both of which are tricky for classical computers to process.
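
A rough sense of why the full SYK model strains classical machines, sketched in Python. The choice of 20 fermions and the Gaussian couplings are illustrative assumptions, not parameters from the experiment.

```python
# The SYK model couples every group of four fermions with its own random strength,
# so the number of interaction terms, and the state space, balloons quickly.
from itertools import combinations
import numpy as np

N = 20                                  # Majorana fermions (illustrative choice)
rng = np.random.default_rng(0)

couplings = {idx: rng.normal() for idx in combinations(range(N), 4)}
print(len(couplings))                   # 4845 four-body terms for just 20 fermions
print(2 ** (N // 2))                    # 1024 amplitudes a classical simulation must track
```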

Even the wormhole researchers greatly simplified the SYK model for their experiment. “The simulation they did, actually, is very easy to do classically,” says Hrant Gharibyan, a physicist at Caltech, who wasn’t involved in the project. “I can do it in my laptop.”

But simplifying the model opens up new questions. It becomes harder for physicists to confirm that they’ve actually created a wormhole through quantum math. And if they want to learn how quantum mechanics interacts with gravity, the stripped-down model gives them less information to work with.

Critics have pointed out that the Sycamore experiment didn’t use enough qubits. While the chips in your phone or computer might have billions or trillions of bits, quantum computers are far, far smaller. The wormhole simulation, in particular, used nine.

While the team certainly didn’t need billions of qubits, according to experts, they should have used more than nine. “With a nine-qubit experiment, you’re not going to learn anything whatsoever that you didn’t already know from classically simulating the experiment,” says Scott Aaronson, a computer scientist at the University of Texas at Austin, who wasn’t an author on the paper.

If size is the problem, then current trends give physicists reason to be optimistic that they can simulate a proper wormhole in a quantum computer. Only a decade ago, even getting one qubit to function was an impressive feat. In 2016, the first quantum computer with cloud access had five. Now, quantum computers count their qubits in the dozens: Google’s Sycamore has a maximum of 53. IBM is planning a line of quantum computers that will surpass 1,000 qubits by the mid-2020s.

Additionally, today’s qubits are extremely fragile. Even small blips of noise or tiny temperature fluctuations—qubits need to be kept at frigid temperatures, just barely above absolute zero—may cause a qubit to decohere, snapping it out of the quantum world and leaving behind a mundane classical bit. (Newer quantum computers focus on trying to make qubits “cleaner.”)

Some quantum computers use individual particles; others use atomic nuclei. Google’s Sycamore, meanwhile, uses loops of superconducting wire. It all shows that qubits are in their VHS-versus-Betamax era: There are multiple competitors, and it isn’t clear which qubit—if any—will become the equivalent to the ubiquitous classical silicon chip.

“You need to make bigger quantum computers with cleaner qubits,” says Gharibyan, “and that’s when real quantum computing power will come.”

[Related: Scientists eye lab-grown brains to replace silicon-based computer chips]

For many physicists, that’s when great intangible rewards come in. Quantum physics, which guides the universe at its smallest scales, doesn’t have a complete explanation for gravity, which guides the universe at its largest. Showing a quantum wormhole—with qubits effectively teleporting—could bridge that gap.

So, the Google users aren’t the only physicists poring over this problem. Earlier in 2022, a third group of researchers published a paper listing signs of teleportation they’d detected in quantum computers. They didn’t send a qubit through a simulated wormhole—they only sent a classical bit—but it was still a promising step. Better quantum gravity experiments, such as simulating the full SYK model, are about “purely extending our ability to build processors,” Gharibyan explains.

Aaronson is skeptical that a wormhole will ever be modeled in a meaningful form, even if quantum computers do reach thousands of qubits. “There’s at least a chance of learning something relevant to quantum gravity that we didn’t know how to calculate otherwise,” he says. “Even then, I’ve struggled to get the experts to tell me what that thing is.”

Hotter weather could be changing baseball https://www.popsci.com/environment/baseball-climate-change-weather/ Fri, 07 Apr 2023 12:00:00 +0000 https://www.popsci.com/?p=532290
Aaron Judge of the New York Yankees hits a home run against the Boston Red Sox during the eighth inning at Fenway Park on September 13, 2022 in Boston, Massachusetts.
Aaron Judge of the New York Yankees hits a home run against the Boston Red Sox during the eighth inning at Fenway Park on September 13, 2022 in Boston, Massachusetts. Maddie Meyer/Getty Images

'Climate ball' isn't necessarily a good thing.


As average global temperatures continue to rise, America’s pastime could be entering the “climate-ball era.” A report published April 7 in the Bulletin of the American Meteorological Society found that since 2010, more than 500 home runs can be attributed to higher-than-average temperatures driven by human-caused global warming.

While the authors of this study only attribute one percent of recent home runs to climate change, their study found that warmer temperatures could account for 10 percent or more of home runs by 2100, if emissions and climate change continue on their current trajectory.

[Related: What’s really behind baseball’s recent home run surge.]

“Global warming is not just a phenomenon that shows up in hurricanes and heat waves—it’s going to alter every aspect of how we live and play,” study co-author Chris Callahan, a doctoral candidate in geography at Dartmouth College, tells PopSci in an email. “Reducing human emissions of greenhouse gases is the only way to prevent these effects from accelerating.”

This study primarily arose because Callahan, a huge baseball fan, was interested in any possible connections between climate change and home runs. “This simple physical mechanism—higher temperatures mean reduced air density, which means less air resistance to batted balls—had been proposed previously, but no one had tested whether it shows up in the large-scale data. It turns out that it does!” Callahan says. 
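
The mechanism Callahan describes can be checked with the ideal gas law. The sketch below is a back-of-the-envelope calculation with standard constants; the two temperatures are arbitrary examples.

```python
# Air density rho = P / (R * T): warmer air at the same pressure is thinner,
# so a batted ball meets less drag.
P = 101_325.0   # sea-level air pressure, Pa
R = 287.05      # specific gas constant for dry air, J/(kg*K)

def air_density(temp_c: float) -> float:
    return P / (R * (temp_c + 273.15))

cool, hot = air_density(20.0), air_density(30.0)
print(f"{cool:.3f} vs {hot:.3f} kg/m^3")                # ~1.204 vs ~1.164
print(f"{100 * (cool - hot) / cool:.1f}% thinner air")  # ~3.3% on the hotter day
```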

Callahan and his team analyzed more than 100,000 Major League Baseball (MLB) games and 220,000 individual hits to correlate the number of home runs with the occurrence of unseasonably warm temperatures during each game. Next, they estimated how much the reduced air density that results from high air temperatures could have driven the number of home runs on a given day compared to other games.

Other factors, such as performance-enhancing drugs, bat and ball construction, and technology like launch analytics intended to optimize a batter’s power, were also taken into account. While the team does not believe that temperature is the dominant factor in the increase in home runs, particularly because present-day batters are primed to hit the ball at optimal angles and speeds, temperature does play a role.

Increase in average number of home runs per year for each American major league ballpark with every 2 degrees Fahrenheit of increase in global average temperature. CREDIT: Christopher Callahan

The team particularly looked at the average number of home runs annually compared to every 2 degrees Fahrenheit increase in local average temperature at every MLB ballpark in the US. They found that the open-air Wrigley Field in Chicago would experience the largest spike (more than 15 home runs per season per 2 degree change), while Tampa Bay’s dome roofed Tropicana Field would stay level at one home run or less regardless of how hot it is outside the stadium. 

[Related: Will baseball ever replace umpires with robots?]

Night games lessened temperature and air density’s potential influence on the distance the ball travels, and covered stadiums would nearly eliminate the influence. The study did not name precipitation as a factor; after all, most rain-affected games are postponed or delayed. The number of home runs per season attributable to temperature could be higher or lower depending on the conditions on each game day.

“I think it was surprising that the [heat’s] effect itself, while intuitive, was so clearly detectable in observations. As a non-baseball fan, I was astounded by the data,” study co-author and geographer Justin Mankin tells PopSci. Mankin also noted that next steps for this kind of research could include looking into how wooden bats should change as the climate warms and how other ballistics-based sports (golf, cricket, etc.) are affected by increased temperatures.

While more home runs arguably make for more exciting games, the exposure of players and fans to extreme heat is a major risk factor that MLB and its teams will need to consider more frequently as the planet warms.

“A key question for the organization at large is what’s an acceptable level of heat exposure for everybody and what’s the acceptable cost for maximizing home runs,” Mankin said in a statement. “Home runs are one pathway by which temperature is affecting game play, but there are other pathways that are more concerning because they have human risk attached to them.”

Dying plants are ‘screaming’ at you https://www.popsci.com/science/do-plants-makes-sounds-stressed/ Thu, 30 Mar 2023 18:00:00 +0000 https://www.popsci.com/?p=524200
Pincushion cactus with pink flowers on a sunny windowsill
Under that prickly exterior, even a cactus has feelings. Deposit Photos

In the future, farmers might use ultrasound to listen to stressed plants vent.


While plants can’t chat like people, they don’t just sit in restful silence. Under certain conditions—such as a lack of water or physical damage—plants vibrate and emit sound waves. Typically, those waves are too high-pitched for the human ear and go unnoticed.

But biologists can now hear those sound waves from a distance. Lilach Hadany, a biologist at Tel Aviv University in Israel, and her colleagues even managed to record them. They published their work in the journal Cell today.

Hadany and colleagues’ work is part of a niche but budding field called “plant bioacoustics.” While scientists know plants aren’t just inert decorations in the ecological backdrop—they interact with their surroundings, for instance by releasing chemicals as a defense mechanism—researchers don’t exactly know how plants respond to and produce sounds. Not only could solving this mystery give farmers a new way of tending to their plants, but it might also unlock something wondrous: Plants may have senses in ways we never realized.

It’s established that “the sounds emitted by plants are much more prominent after some kind of stress,” says František Baluška, a plant bioacoustics researcher at Bonn University in Germany who wasn’t a part of the new study. But past plant bioacoustics experiments had to listen to plants at a very close distance to measure vibrations. Meanwhile, Hadany and her colleagues managed to pick up plant sounds from across a room.

[Related on PopSci+: Biohacked cyborg plants may help prevent environmental disaster]

The study team first tested out their ideas on tomato and tobacco plants. Some plants were watered regularly, while others were neglected for days—a process that simulated drought-like conditions. Finally, the most unfortunate plants were severed from their roots.

Plants under idyllic conditions seemed to thrive. But the damaged and dehydrated plants did something peculiar: They emitted clicking sounds once every few minutes. 

Of course, if you were to walk through a drought-stricken tomato grove with a machete, chopping every vine you see, you wouldn’t hear a chorus of distressed plants. The plants emit sounds in ultrasound: frequencies too high for the human ear to hear. That’s part of why researchers have only now perceived these clicks.

“Not everybody has the equipment to do ultrasound [or] has the mind to look into these broader frequencies,” says ecologist Daniel Robert, a professor at the University of Bristol in the United Kingdom who wasn’t an author of the paper.

Three tomato plants in a greenhouse with a microphone in front of them
Three tomato plants’ sounds were recorded in a greenhouse. Ohad Lewin-Epstein

The researchers were able to record similar sounds in other plants deprived of water, including wheat, maize, wine grapes, pincushion cactus, and henbit (a common spring weed in the Northern Hemisphere). 

Biologists think the clicks might come from xylem, the “piping” that transports water and nutrients through a plant. Pressure differences cause air bubbles to enter the fluid. The bubbles grow until they pop—and the burst is the noise picked up by scientists. This process is called cavitation. 

Most people who study cavitation aren’t biologists; they’re typically physicists and engineers. For them, cavitation is often a nuisance. Bursting bubbles can damage pumps, propellers, hydraulic turbines, and other devices that do their work underwater. But, on the other hand, we can put cavitation to work for us: for instance, in ultrasound jewelry cleaners.

Although it’s known cavitation occurs in plants under certain conditions, like when they’re dehydrated, scientists aren’t sure that this process can entirely explain the plant sounds they hear. “There might not be only one mechanism,” says Robert.

The authors speculate that their work could eventually help plant growers, who could listen from a distance and monitor the plants in their greenhouse. To support this potential future, Hadany and her colleagues trained a machine learning model to break down the sound waves and discern what stress caused a particular sound. Instead of being surprised by wilted greens, this type of tech could give horticulturists a heads-up.
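
As a rough illustration of the kind of classifier the team describes (not their actual pipeline), here is a minimal Python sketch. The spectral features, labels, and random data are placeholder assumptions standing in for real recordings.

```python
# Train a classifier to guess a plant's stressor from features of its clicks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 16))                        # placeholder spectral features
y = rng.choice(["dry", "cut", "control"], size=300)   # placeholder stress labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")  # ~chance on fake data
```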

[Related: How to water your plants less but still keep them happy]

Robert suspects that—unlike people—animals might already be able to hear plant sounds. Insects searching for landing spots or places to lay their eggs, for instance, might pick and choose among plants by listening in on how healthy they sound.

If there is an observable quality like sound (or light or electric fields) in the wild, then some organisms will evolve to use it, explains Robert. “This is why we have ears,” he says.

If that’s the case, perhaps it can work the other way—plants may also respond to sounds. Scientists like Baluška have already shown that plants can “hear” external sounds. For example, research suggests some leaf trichomes react to vibrations from worms chewing on them. And in the laboratory, researchers have seen some plants’ root tips grow through the soil in the direction of incoming sounds.

If so, some biologists think plants may have more sophisticated “senses” than we previously believed.

“Plants definitely must be aware of what is around because they must react every second because the environment is changing all the time,” says Baluška. “They must be able to, somehow, understand the environment.”

Room-temperature superconductors could zap us into the future https://www.popsci.com/science/room-temperature-superconductor/ Sat, 25 Mar 2023 16:00:00 +0000 https://www.popsci.com/?p=522900
Superconductor cuprate rings lit up in blue and green on a black grid
In this image, the superconducting Cooper-pair cuprate is superimposed on a dashed pattern that indicates the static positions of electrons caught in a quantum "traffic jam" at higher energy. US Department of Energy

Superconductors convey powerful currents and intense magnetic fields. But right now, they only work at frigid temperatures or under crushing pressures.


Update (November 9, 2023): This week the journal Nature retracted the lutetium superconductivity study at the request of some of the co-authors and other physicists who questioned the electrical resistance data. The story below, which focused on the challenge of achieving room-temperature superconductivity and the controversy around the lutetium claims, has been updated to reflect the retraction.

In the future, wires might cross underneath oceans to effortlessly deliver electricity from one continent to another. Those cables would carry currents from giant wind turbines or power the magnets of levitating high-speed trains.

All these technologies rely on a long-sought wonder of the physics world: superconductivity, a physical state that lets a material carry an electric current without losing any juice.

But superconductivity has only functioned at freezing temperatures that are far too cold for most devices. To make it more useful, scientists have to recreate the same conditions at regular temperatures. And even though physicists have known about superconductivity since 1911, a room-temperature superconductor still evades them, like a mirage in the desert.

What is a superconductor?

All metals have a point called the “critical temperature.” Cool the metal below that temperature, and its electrical resistivity vanishes, letting electrons flow through with ease. To put it another way, an electric current running through a closed loop of superconducting wire could circulate forever.

Today, anywhere from 8 to 15 percent of mains electricity is lost between the generator and the consumer because the electrical resistivity in standard wires naturally wicks some of it away as heat. Superconducting wires could eliminate all of that waste.
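
The waste in question is resistive loss, which follows the standard formula P = I^2 * R. Here is a back-of-the-envelope sketch; the line resistance, voltage, and power figures are made-up illustrative values, not grid data.

```python
# Resistive transmission loss: power burned off as heat is I^2 * R.
P_delivered = 100e6   # 100 MW of delivered power (assumed)
V = 400e3             # 400 kV transmission voltage (assumed)
R_line = 10.0         # total line resistance in ohms (assumed)

I = P_delivered / V         # current drawn, amps
P_loss = I ** 2 * R_line    # heat dissipated in the wire
print(f"{P_loss / 1e6:.2f} MW lost ({100 * P_loss / P_delivered:.2f}%)")
# A superconducting line has R = 0, so this loss term vanishes entirely.
```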

[Related: This one-way superconductor could be a step toward eternal electricity]

There’s another upside, too. When electricity flows through a coiled wire, it produces a magnetic field; superconducting wires intensify that magnetism. Already, superconducting magnets power MRI machines, help particle accelerators guide their quarry around a loop, shape plasma in fusion reactors, and push maglev trains like Japan’s under-construction Chūō Shinkansen.

Turning up the temperature

While superconductivity is a wondrous ability, physics nerfs it with the cold caveat. Most known materials’ critical temperatures are barely above absolute zero (-459 degrees Fahrenheit). Aluminum, for instance, comes in at -457 degrees Fahrenheit; mercury at -452 degrees Fahrenheit; and the ductile metal niobium at a balmy -443 degrees Fahrenheit. Chilling anything to temperatures that frigid is tedious and impractical. 
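
Those Fahrenheit figures are easier to compare after converting to kelvins, the unit physicists actually use. A quick sketch with approximate critical temperatures:

```python
# T_F = T_K * 9/5 - 459.67, applied to the approximate critical temperatures above.
def kelvin_to_f(t_k: float) -> float:
    return t_k * 9 / 5 - 459.67

for name, t_k in [("aluminum", 1.2), ("mercury", 4.2), ("niobium", 9.3)]:
    print(f"{name}: {t_k} K is {kelvin_to_f(t_k):.1f} F")
# aluminum: 1.2 K is -457.5 F; mercury: 4.2 K is -452.1 F; niobium: 9.3 K is -442.9 F
```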

Scientists managed to raise that bar—in a limited capacity—by experimenting with exotic materials like cuprates, a type of ceramic that contains copper and oxygen. In 1986, two IBM researchers found a cuprate that superconducted at -396 degrees Fahrenheit, a breakthrough that won them the Nobel Prize in Physics. Soon enough, others in the field pushed cuprate superconductors past -321 degrees Fahrenheit, the boiling point of liquid nitrogen—a far more accessible coolant than the liquid hydrogen or helium they’d otherwise need.

“That was a very exciting time,” says Richard Greene, a physicist at the University of Maryland. “People were thinking, ‘Well, we might be able to get up to room temperature.’”

Now, more than 30 years later, the search for a room-temperature superconductor continues. Equipped with algorithms that can predict what a material’s properties will look like, many researchers feel that they’re closer than ever. But some of their ideas have been controversial.

The replication dilemma

One way the field is making strides is by turning attention away from cuprates to hydrides, materials containing negatively charged hydrogen atoms. In 2015, researchers in Mainz, Germany, set a new record with a sulfur hydride that superconducted at -94 degrees Fahrenheit. Some of them then quickly broke their own record with a hydride of the rare-earth element lanthanum, pushing the mercury up to around -9 degrees Fahrenheit—about the temperature of a home freezer.

But again, there’s a catch. Critical temperatures shift when the surrounding pressure changes, and hydride superconductors, it seems, require rather inhuman pressures. The lanthanum hydride only achieved superconductivity at pressures above 150 gigapascals—roughly equivalent to conditions in the Earth’s core, and far too high for any practical purpose in the surface world.

[Related: How the small, mighty transistor changed the world]

So imagine the surprise when mechanical engineers at the University of Rochester in upstate New York presented a hydride made from another rare-earth element, lutetium. According to their results, which have since been retracted, the lutetium hydride superconducts at around 70 degrees Fahrenheit and 1 gigapascal. That’s still 10,000 times Earth’s air pressure at sea level, but low enough to be used for industrial tools.

“It is not a high pressure,” says Eva Zurek, a theoretical chemist at the University at Buffalo. “If it can be replicated, [this method] could be very significant.”

Scientists, however, have seen this kind of an attempt before. In 2020, the same research group claimed they’d found room-temperature superconductivity in a hydride of carbon and sulfur. After the initial fanfare, many of their peers pointed out that they’d mishandled their data and that their work couldn’t be replicated. Eventually, the University of Rochester engineers caved and retracted that paper as well.

Now, they’re facing the same questions with their lutetium superconductor. “It’s really got to be verified,” says Greene. The early signs are inauspicious: A team from Nanjing University in China recently tried to replicate the experiment, without success.

“Many groups should be able to reproduce this work,” Greene adds. “I think we’ll know very quickly whether this is correct or not.”

But if the new hydride does mark the first room-temperature superconductor—what next? Will engineers start stringing power lines across the planet tomorrow? Not quite. First, they have to understand how this new material behaves under different temperatures and other conditions, and what it looks like at smaller scales.

“We don’t know what the structure is yet. In my opinion, it’s going to be quite different from a high-pressure hydride,” says Zurek. 

If the superconductor is viable, engineers will have to learn how to make it for everyday uses. But if they succeed, the result could be a gift for world-changing technologies.

Dark energy fills the cosmos. But what is it? https://www.popsci.com/science/what-is-dark-energy/ Mon, 20 Mar 2023 10:00:00 +0000 https://www.popsci.com/?p=520278
A composite image of colliding galaxies, which make up cluster Abell 2744. The blue represents dark matter, a kindred mystery to dark energy.
A composite image of colliding galaxies, which make up cluster Abell 2744. The blue represents dark matter, a kindred mystery to dark energy. NASA/CXC/ITA/INAF/STScI

We know how dark energy behaves, but its nature is still a mystery.


The universe has a dark side—it’s filled with dark matter and dark energy. Dark matter is the unseen mass floating around galaxies, which physicists have searched for using giant vats of ice, particle colliders, and other sophisticated techniques. But what about dark matter’s stranger sibling, dark energy? 

Dark energy is the term given to something that is causing the universe to expand faster and faster as time goes on. The great puzzle facing cosmologists today is figuring out the identity of that “something.”

“We can tell you a lot about the properties of dark energy and how it behaves,” says astrophysicist Tamara Davis, a professor at the University of Queensland in Australia. “However, we still don’t know what it is. That’s the big question.”

How do we know dark energy exists?

Astronomers have long known that the universe is expanding. In the late 1920s, Edwin Hubble observed galaxies in motion and formulated Hubble’s Law, which relates a galaxy’s velocity to its distance from us. At the end of the 20th century, though, new detections of supernovae in far-off galaxies revealed a conundrum: The expansion of the universe isn’t constant, but is instead speeding up.
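
Hubble's Law is simple enough to evaluate directly: v = H0 * d. The sketch below uses a round value of the Hubble constant for illustration (its precise value is still debated).

```python
# Recession velocity grows linearly with distance under Hubble's law.
H0 = 70.0  # km/s per megaparsec (assumed round value)

for d_mpc in (10, 100, 1000):
    print(f"a galaxy {d_mpc:>4} Mpc away recedes at ~{H0 * d_mpc:,.0f} km/s")
# The late-1990s surprise: this expansion is not settling down but speeding up.
```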

“The fact that the universe is accelerating caught us all by surprise,” says University of Texas at Austin astrophysicist Katherine Freese. Unlike the attractive force of gravity, dark energy must create “some sort of repulsive behavior, driving things apart from one another more and more quickly,” adds Freese.

Many observations since the 1990s have confirmed that the universe is accelerating. Exploding stars in distant galaxies appear fainter than they would in a steadily expanding universe. Even the cosmic microwave background—the remnant light from the first clear moments in the universe’s history—shows fingerprints of dark energy’s effects. To explain the observed universe, dark energy is a necessary component of our mathematical models of cosmology.

[Related: Dark matter has never killed anyone, and scientists want to know why]

The term dark energy was coined in 1998 by astrophysicist Michael Turner to match the nomenclature of dark matter. It also conveys that the universe’s accelerating expansion was a crucial, unsolved problem. Many scientists at the time thought that Albert Einstein’s cosmological constant—a “fudge factor” he included in general relativity to make the math work out, also known as lambda—was the perfect explanation for dark energy, since it fit nicely into their models. 

“It was my belief that it was not that simple,” says Turner, now a visiting professor at UCLA. He views the accelerating universe as “the most profound problem” and “the biggest mystery in all of science.” 

Why does dark energy matter?

The Lambda-CDM model, which says we live in a universe that consists of only 5 percent normal matter—everything you’ve ever seen or touched—plus 27 percent dark matter and a whopping 68 percent dark energy, is “the current paradigm in cosmology,” says Yale astrophysicist Will Tyndall. It “rather ambitiously seeks to incorporate (and explain) all of cosmic history,” he says. But it still leaves a lot unexplained, including the nature of dark energy. “After all, how can we have so little understanding of something that supposedly constitutes 68 percent of the universe we live in?” adds Tyndall.

Dark energy is also a major deciding factor in our universe’s ultimate fate. Will the universe be torn apart in a Big Rip, in which everything is shredded apart atom by atom? Or will it end in a whimper?

These scenarios depend on whether dark energy changes with time. If dark energy is just the cosmological constant, with no variation, our universe will expand eternally into a very lonely place; in this scenario, all the stars beyond our local cluster of galaxies would be invisible to us, too red to be detected.

If dark energy gets stronger, it might lead to the event known as the Big Rip. Or maybe dark energy weakens, and our universe crunches back down, starting the cycle all over with a new big bang. Physicists won’t know which of these scenarios lies ahead until they have a better handle on the nature of dark energy.
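
Cosmologists boil these scenarios down to dark energy's "equation of state," the number w relating its pressure to its density (p = w * rho). The sketch below encodes the textbook criterion from the Friedmann acceleration equation; it is standard cosmology background, not a calculation from any one study.

```python
# Expansion accelerates when 1 + 3w < 0; w = -1 is Einstein's cosmological
# constant, and w < -1 ("phantom" dark energy) is the Big Rip regime.
def expansion_accelerates(w: float) -> bool:
    return 1 + 3 * w < 0

cases = [(-1.2, "phantom energy, Big Rip territory"),
         (-1.0, "cosmological constant"),
         (-0.5, "weaker dark energy"),
         (0.0, "ordinary matter, no dark energy")]
for w, label in cases:
    print(f"w = {w:+.1f} ({label}): accelerates? {expansion_accelerates(w)}")
```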

What could dark energy actually be? 

Dark energy shows up in the mathematics of the universe as Einstein’s cosmological constant, but that doesn’t explain what physically causes the universe’s expansion to speed up. A leading theory is a funky feature of quantum mechanics known as the vacuum energy. This is created when pairs of particles and their antiparticles quickly pop into and out of existence, which happens pretty much everywhere all the time. 

It sounds like a great explanation for dark energy. But there’s one big issue: The value of the vacuum energy that scientists measure and the one they predict from theory differ wildly and inexplicably, by many orders of magnitude. This is known as the cosmological constant problem. Put another way, particle physicists’ models predict that what we think of as “nothing” should have some weight, Turner says. But measurements find it weighs very little, if anything at all. “Maybe nothing weighs nothing,” he says.

[Related: An ambitious dark energy experiment just went live in Arizona]

Cosmologists have raised other explanations for dark energy over the years. One, string theory, claims that the universe is made up of tiny little string-like bits, and the value of dark energy that we see just happens to be one possibility within many different multiverses. Many physicists consider this to be pretty human-centric in its logic—we couldn’t exist in a universe with other values of the cosmological constant, so we ended up in this one, even if it’s an outlier compared to the others.

Other physicists have considered changing Einstein’s equations for general relativity altogether, but most of those attempts were ruled out by measurements from LIGO’s pioneering observations of gravitational waves. “In short, we need a brilliant new idea,” says Freese.

How might scientists solve this mystery?

New observations of the cosmos may be able to help astrophysicists measure the properties of dark energy in more detail. For example, astronomers already know the universe’s expansion is accelerating—but has that acceleration always been the same? If the answer to this question is no, then that means dark energy hasn’t been constant, and the lives of physics theorists everywhere will be upended as they scramble to find new explanations.

One project, known as the Dark Energy Spectroscopic Instrument or DESI, is already underway at Kitt Peak Observatory in Arizona. This effort searches for signs of varying acceleration in the universe through cosmic cartography. “It is like laying grid-paper over the universe and measuring how it has expanded and accelerated with time,” says Davis.

Even more experiments are upcoming, such as the European Euclid mission launching this summer. Euclid will map galaxies as far as 10 billion light-years away—looking backward in time by 10 billion years. This is “the entire period over which dark energy played a significant role in accelerating the expansion of the universe,” as its mission website states. Radio telescopes such as CHIME will be mapping the universe in a slightly different way, tracing how hydrogen spreads across space.

New observations won’t solve everything, though. “Even if we measure the properties of dark energy to infinite precision, it doesn’t tell us what it is,” Davis adds. “The real breakthrough that is needed is a theoretical one.” Astronomers have a timeline for new experiments, which will keep marching forward, recording better and better measurements. But theoretical breakthroughs are unpredictable—it could take one, ten, or even a hundred-plus years. “In science, there are very few true puzzles. A true puzzle means you don’t really know the answer,” says Turner. “And I think dark energy is one of them.”

Clouds of ancient space water might have filled Earth’s oceans https://www.popsci.com/science/water-origin-theory-space/ Fri, 10 Mar 2023 11:00:00 +0000 https://www.popsci.com/?p=518688
Protoplanetary disk and water formation around star V883 Orionis in the Orion constellation. Illustrated in gold, white, and black.
This artist’s impression shows the planet-forming disc around the star V883 Orionis. The inset image shows the two kinds of water molecules studied in this disc: normal water, with one oxygen atom and two hydrogen atoms, and a heavier version where one hydrogen atom is replaced with deuterium, an isotope. ESO/L. Calçada

The molecules that made Earth wet were probably older than our sun.


Water is an essential ingredient for life as we know it, but its origins on Earth, or any other planet, have been a long-standing puzzle. Was most of our planet’s water incorporated in the early Earth as it coalesced out of the material orbiting the young sun? Or was water brought to the surface only later by comet and asteroid bombardments? And where did that water come from originally?

A study published on March 7 in the journal Nature provides new evidence to bolster a theory about the ultimate origins of water—namely, that it predates the sun and solar system, forming slowly over time in vast clouds of gas and dust between stars.

“We now have a clear link in the evolution of water. It actually seems to be directly inherited, all the way back from the cold interstellar medium before a star ever formed,” says John Tobin, an astronomer studying star formation at the National Radio Astronomy Observatory and lead author of the paper. The water, unchanged, was incorporated from the protoplanetary disk, a dense, round layer of dust and gas that forms in orbit around newborn stars and from which planets and small space bodies like comets emerge. Tobin says the water gets drawn into comets “relatively unchanged as well.”

Astronomers have proposed different origin stories for water in solar systems. In the hot nebular theory, Tobin says, the heat in a protoplanetary disk around a natal star will break down water and other molecules, which then form afresh as things start to cool.

The problem with that theory, according to Tobin, is that when water emerges at relatively warm temperatures in a protoplanetary disk, it won’t look like the water found on comets and asteroids. We know what those molecules look like: Space rocks, such as asteroids and comets, act as time capsules, preserving the state of matter in the early solar system. Specifically, water made in the disk wouldn’t have enough deuterium—the hydrogen isotope that contains one neutron and one proton in its nucleus, rather than a single proton as in typical hydrogen.

[Related: Meteorites older than the solar system contain key ingredients for life]

An alternative to the hot nebular theory is that water forms at cold temperatures on the surface of dust grains in vast clouds in the interstellar medium. This deep chill changes the dynamics of water formation, so that more deuterium is incorporated in place of typical hydrogen atoms in H2O molecules, more closely resembling the hydrogen-to-deuterium ratio seen in asteroids and comets.  
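
For a sense of scale, here are approximate deuterium-to-hydrogen (D/H) ratios from the broader literature. These are illustrative reference values, not measurements from this study.

```python
# Deuterium enrichment is the fingerprint that separates cold-formed water
# from water made in warm protoplanetary gas.
ratios = {
    "protosolar hydrogen gas": 2.0e-5,
    "Earth's oceans (VSMOW standard)": 1.56e-4,
    "comet 67P's water (Rosetta)": 5.3e-4,
}
gas = ratios["protosolar hydrogen gas"]
for source, dh in ratios.items():
    print(f"{source}: D/H ~ {dh:.2e} ({dh / gas:.0f}x the raw gas)")
# Water formed warm in the disk would stay near the gas's low ratio; the
# deuterium-rich water in oceans and comets points back to cold chemistry.
```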

“The surface of dust grains is the only place where you can efficiently form large amounts of water with deuterium in it,” Tobin says. “The other routes of forming water with deuterium and gas just don’t work.” 

While this explanation worked in theory, the new paper is the first time scientists have found evidence that water from the interstellar medium can survive the intense heat during the formation of a protoplanetary disk. 

The researchers used the European Southern Observatory’s Atacama Large Millimeter/submillimeter Array, a radio telescope in Chile, to observe the protoplanetary disk around the young star V883 Orionis, about 1,300 light-years away from Earth in the constellation Orion. 

Radio telescopes such as this one can detect the signal of water molecules in the gas phase. But in a typical protoplanetary disk, the gaseous water close to the young star is hidden by dense dust, and farther out the water is frozen onto grains as ice, which telescopes cannot observe.

But V883 Orionis is not a typical young star—it’s been shining brighter than normal due to material from the protoplanetary disk falling onto the star. This increased intensity warmed ice on dust grains farther out than usual, allowing Tobin and his colleagues to detect the signal of deuterium-enriched water in the disk. 

“That’s why it was unique to be able to observe this particular system, and get a direct confirmation of the water composition,” Tobin explains. ”That signature of that level of deuterium gives you your smoking gun.” This suggests Earth’s oceans and rivers are, at a molecular level, older than the sun itself. 

[Related: Here’s how life on Earth might have formed out of thin air and water]

“We obviously will want to do this for more systems to make sure this wasn’t just a fluke,” Tobin adds. It’s possible, for instance, that water chemistry is somehow altered later in the development of planets, comets, and asteroids, as they smash together in a protoplanetary disk.

But as an astronomer studying star formation, Tobin already has some follow-up candidates in mind. “There are several other good candidates that are in the Orion star-forming region,” he says. “You just need to find something that has a disk around it.”

We might soon lose a full second of our lives https://www.popsci.com/science/negative-leap-second/ Mon, 20 Feb 2023 11:00:00 +0000 https://www.popsci.com/?p=513420
Surrealist digital painting inspired by Dali of flying clocks and chess pieces upside down over sand and the Earth. The motifs symbolize the leap second.
Some tech companies think a negative leap second would turn the world upside down. But it probably won't be that bad. Deposit Photos

The Earth is spinning faster. A negative leap second could help the world's clocks catch up.


The leap second’s days, so to speak, are numbered. Late last year, the world’s timekeepers announced they would abandon the punctual convention in 2035.

That still gives timekeepers a chance to invoke the leap second before its scheduled end—in a more unconventional way. Ever since its creation, they’ve only used positive leap seconds, adding a second to slow down the world’s clocks when they get too far ahead of Earth’s rotation.

As it happens, the world’s clocks aren’t ahead right now; in fact, they’ve fallen behind. If this trend holds up, it’s possible that the next leap second may be a negative one, removing a second to speed the human measure of time back up. That’s uncharted territory.

The majority of humans won’t notice a missing second, just as they wouldn’t with an extra one. Computers and their networks, however, already have problems with positive leap seconds. While their operators can practice for when the world’s clocks skip a second, they won’t know what a negative leap second can do until the big day happens (if it ever does).

“Nobody knows how software systems will react to it,” says Marina Gertsvolf, a researcher at the National Research Council, which is responsible for Canada’s timekeeping.

The second is defined by a transition between two energy states of the cesium-133 atom—a process that atomic clocks can measure with stunning accuracy. But a day is based on how long the Earth takes to finish one full spin, which takes 24 hours, or 86,400 of those seconds.

Except a day isn’t always precisely 86,400 seconds, because the planet’s rotation isn’t a constant. Everything from the mantle churning to the atmosphere moving to the moon’s gravity pulling can play with it, adding or subtracting a few milliseconds every day. Over time, those differences add up.
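
Those milliseconds accumulate faster than you might think. A quick back-of-the-envelope sketch, assuming (purely for illustration) an average day that runs 1.5 milliseconds long or short:

```python
# How long until millisecond-scale daily wobbles add up to a full second?
offset_per_day_ms = 1.5                # assumed average daily offset
days = 1000 / offset_per_day_ms        # milliseconds in a second / daily offset
print(f"~{days:.0f} days, about {days / 365:.1f} years")  # ~667 days, ~1.8 years
```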

[Related: What would happen if the Earth started to spin faster]

An organization called the International Earth Rotation and Reference Systems Service (IERS) is responsible for tracking and adjusting for the changes. When the gulf widens by enough, they decree that the final minute of June 30 or December 31—whichever comes next—should be modified with a leap second. Since 1972, these judges of time have added 27 positive leap seconds.

But for the past several months, Earth’s rotation has been pacing ahead of the world’s clocks. If this continues, then it’s possible the next leap second might be negative. At some point in the late 2020s or early 2030s, IERS might decide to peel away the last second of the last minute of June 30 or December 31, resulting in a minute that’s 59 seconds long. Clocks would skip from 23:59:58 right to 00:00:00. And we don’t know what that will do. 

What we do know is the chaos that past positive leap seconds have caused. It’s nothing like the apocalyptic collapse that Y2K preppers feared, but the time tweaks have given systems administrators their fair share of headaches. In 2012, a leap second glitched a server’s Linux operating system and knocked out Reddit at midnight. In 2017, Cloudflare—a major web service provider—experienced a significant outage due to a leap second. Problems invariably arise when one computer or server talks to another computer or server that still might not have accounted for a leap second.

As a result, some of the leap second’s biggest critics have been tech companies who have to deal with the consequences. And at least one of them is not excited about the possibility of a negative leap second. In 2022, two Facebook engineers wrote: “The impact of a negative leap second has never been tested on a large scale; it could have a devastating effect on the software relying on timers or schedulers.”

Timekeepers, however, aren’t expecting a meltdown. “Negative leap seconds aren’t quite as nasty as positive leap seconds,” says Michael Wouters, a researcher at the National Measurement Institute, Australia’s peak measurement body.

[Related: Daylight saving can mess with circadian rhythm]

Still, some organizations have already made emergency plans. Google, for instance, uses a process it calls a “smear.” Rather than adding a leap second all at once, it spreads the second over the course of a day, making every second slightly longer to make up the difference. According to the company, it tested the process for a negative leap second by making every second slightly shorter, amounting to a lost second over the course of the day.
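
The arithmetic behind a negative smear is straightforward. A sketch (illustrative, not Google's production code):

```python
# To absorb a negative leap second, a day must fit 86,400 clock "seconds"
# into 86,399 real atomic seconds, so each smeared second runs slightly short.
SECONDS_PER_DAY = 86_400

stretch = (SECONDS_PER_DAY - 1) / SECONDS_PER_DAY
print(f"each smeared second lasts {stretch:.8f} s")            # 0.99998843 s
print(f"that is {(1 - stretch) * 1e6:.1f} microseconds short") # ~11.6 us
```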

Many servers get their time from constellations of navigation satellites like America’s GPS, Europe’s Galileo, and China’s BeiDou. To read satellite data, servers typically rely on specialized receivers that translate signals into information—including the time. According to Wouters, many of those receivers’ manufacturers have tested how their devices handle negative leap seconds. “I think that there is a lot more awareness of leap seconds than in the past,” says Wouters.

At the end of the day, the leap second is just an awkward, artificial construct. Human timekeepers use it to force the astronomical cycles that once defined our time back into lockstep with the atomic physics that have replaced the stars. “It removes this idea that time belongs to no country … and no particular industrial interest,” says Gertsvolf.

So, with the blessing of the world’s timekeepers, the leap second is on its way out. If that goes according to plan, then we can let the Earth spin as it wants without having to skip a beat.

Why is space cold if the sun is hot? https://www.popsci.com/why-is-space-cold-sun-hot/ Tue, 31 Aug 2021 13:04:12 +0000 https://www.popsci.com/uncategorized/why-is-space-cold-sun-hot/
Heat of sun radiating through cold of space
On July 23, 2012, a massive cloud of solar material erupted off the sun's right side, zooming out into space. NASA/STEREO

We live in a universe of extremes.


How cold is space? And how hot is the sun? These are both excellent questions. Unlike our mild habitat here on Earth, our solar system is full of temperature extremes. The sun is a bolus of gas and fire measuring around 27 million degrees Fahrenheit at its core and 10,000 degrees at its surface. Meanwhile, the cosmic background temperature—the temperature of space once you get far enough away to escape Earth’s balmy atmosphere—hovers at -455 F.

But how can one part of our galactic neighborhood be freezing when another is searing? Scholars (and NFL players) have puzzled over this paradox for time eternal.

Well, there’s a reasonable explanation. Heat travels through the cosmos as radiation: electromagnetic waves of energy that migrate from hotter objects to cooler ones. The radiation waves excite molecules they come in contact with, causing them to heat up. This is how heat travels from the sun to Earth, but the catch is that radiation only heats molecules and matter that are directly in its path. Everything else stays chilly. Take Mercury: the nighttime temperature of the planet can be 1,000 degrees Fahrenheit lower than the radiation-exposed day side, according to NASA.
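
The power of that radiation follows the Stefan-Boltzmann law, and its spread follows the inverse-square law. As a worked example with standard textbook values, here is the sun's warmth arriving at Earth:

```python
# Radiated power scales as T^4, then dilutes over an ever-larger sphere.
import math

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
T_SUN = 5772.0       # sun's effective surface temperature, K
R_SUN = 6.957e8      # solar radius, m
D_EARTH = 1.496e11   # Earth-sun distance, m

luminosity = 4 * math.pi * R_SUN**2 * SIGMA * T_SUN**4
flux_at_earth = luminosity / (4 * math.pi * D_EARTH**2)
print(f"{flux_at_earth:.0f} W/m^2")  # ~1361 W/m^2, the measured "solar constant"
```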

Compare that to Earth, where the air around you stays warm even if you’re in the shade—and even, in some seasons, in the dark of night. That’s because heat travels throughout our beautiful blue planet by three methods instead of just one: conduction, convection, and radiation. When the sun’s radiation hits and warms up molecules in our atmosphere, they pass that extra energy to the molecules around them. Those molecules then bump into and heat up their own neighbors. This heat transfer from molecule to molecule is called conduction, and it’s a chain reaction that warms areas outside of the sun’s path.

[Related: What happens to your body when you die in space?]

Space, however, is a vacuum—meaning it’s basically empty. Gas molecules in space are too few and far apart to regularly collide with one another. So even when the sun heats them with infrared waves, transferring that heat via conduction isn’t possible. Similarly, convection—a form of heat transfer that happens in the presence of gravity—is important in dispersing warmth across the Earth, but doesn’t happen in zero-g space.

These are things Elisabeth Abel, a thermal engineer on NASA’s DART project, thinks about as she prepares vehicles and devices for long-term voyages through space. That was especially true when she was working on the Parker Solar Probe, she says.

As you can probably tell by its name, the Parker Solar Probe is part of NASA’s mission to study the sun. It zooms through the outermost layer of the star’s atmosphere, called the corona, collecting data. In April 2021, the probe got within 6.5 million miles of the inferno, the closest a spacecraft has ever been to the sun. The heat shield mounted on one side of the probe makes this possible.

“The job of that heat shield,” Abel says, is to make sure “none of the solar radiation [will] touch anything on the spacecraft.” So, while the heat shield is experiencing the extreme heat (around 250 degrees F) of our host star, the spacecraft itself is much colder—around -238 degrees F, she says.

[Related: How worried should we be about solar flares and space weather?]

As the lead thermal engineer for DART—a small spacecraft designed to collide with an asteroid and nudge it off course—Abel takes practical steps to manage the temperatures of deep space. The extreme variation in temperature between the icy void and the boiling heat of the sun poses unique challenges. Some parts of the spacecraft needed help staying cool enough to avoid shorting out, while others required heating elements to keep them warm enough to function.

Preparing for temperature shifts of hundreds of degrees might sound wild, but it’s just how things are out in space. The real oddity is Earth: Amidst the extreme cold and fiery hot, our atmosphere keeps things surprisingly mild—at least for now.

This story has been updated. It was originally published on July 24, 2019.

Let’s talk about how planes fly https://www.popsci.com/how-do-planes-fly/ Fri, 02 Nov 2018 19:00:00 +0000 https://www.popsci.com/uncategorized/how-do-planes-fly/
An airplane taking off toward the camera at dusk, with lights along the runway and on the front of the plane, against a cloudy reddish sunset.
Flight isn't magic, it's physics. Josue Isai Ramos Figueroa / Unsplash

How does an aircraft stay in the sky, and how do wings work? Fasten your seatbelts—let's explore.


How does an airplane stay in the air? Whether you’ve pondered the question while flying or not, it remains a fascinating, complex topic. Here’s a quick look at the physics involved with an airplane’s flight, as well as a glimpse at a misconception surrounding the subject, too. 

First, picture an aircraft—a commercial airliner, such as a Boeing or Airbus transport jet—cruising in steady flight through the sky. That flight involves a delicate balance of opposing forces. “Wings produce lift, and lift counters the weight of the aircraft,” says Holger Babinsky, a professor of aerodynamics at the University of Cambridge. 

“That lift [or upward] force has to be equal to, or greater than, the weight of the airplane—that’s what keeps it in the air,” says William Crossley, the head of the School of Aeronautics and Astronautics at Purdue University. 

Meanwhile, the aircraft’s engines are giving it the thrust it needs to counter the drag it experiences from the friction of the air around it. “As you’re flying forward, you have to have enough thrust to at least equal the drag—it can be higher than the drag if you’re accelerating; it can be lower than the drag if you’re slowing down—but in steady, level flight, the thrust equals drag,” Crossley notes.
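
Those force balances can be sanity-checked with the standard lift equation, L = 0.5 * rho * v^2 * S * C_L. The numbers below are rough assumptions for a mid-size airliner in cruise, not figures from either researcher:

```python
# Ballpark lift for an airliner at cruise.
rho = 0.38   # air density at ~35,000 feet, kg/m^3
v = 230.0    # cruise speed, m/s (about 450 knots)
S = 125.0    # wing area, m^2 (roughly a Boeing 737's)
C_L = 0.5    # cruise lift coefficient (assumed)

lift = 0.5 * rho * v**2 * S * C_L
print(f"lift ~ {lift / 1e3:.0f} kN, enough to hold up ~{lift / 9.81 / 1e3:.0f} tonnes")
```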

[Related: How high do planes fly?]

Understanding just how the airplane’s wings produce the lift in the first place is a bit more complicated. “The media, in general, are always after a quick and simple explanation,” Babinsky reflects. “I think that’s gotten us into hot water.” One popular explanation, which is wrong, goes like this: Air moving over the curved top of a wing has to travel a longer distance than air moving below it, and because of that, it speeds up to try to keep abreast of the air on the bottom—as if two air particles, one going over the top of the wing and one going under, need to stay magically connected. NASA even has a webpage dedicated to this idea, labeling it as an “incorrect airfoil theory.”

So what’s the correct way to think about it? 

Lend a hand

One very simple way to start thinking about the topic is to imagine that you’re riding in the passenger seat of a car. Stick your arm out sideways, into the incoming wind, with your palm down, thumb forward, and hand basically parallel to the ground. (If you do this in real life, please be careful.) Now, angle your hand upward a little at the front, so that the wind catches the underside of your hand; that process of tilting your hand upward approximates an important concept with wings called their angle of attack.

“You can clearly feel the lift force,” Babinsky says. In this straightforward scenario, the air is hitting the bottom of your hand, being deflected downward, and in a Newtonian sense (see his third law), your hand is being pushed upward.

Follow the curve 

But a wing, of course, is not shaped like your hand, and there are additional factors to consider. Two key points to keep in mind with wings are that the front of a wing—the leading edge—is curved, and overall, they also take on a shape called an airfoil when you look at them in cross-section. 

[Related: How pilots land their planes in powerful crosswinds]

The curved leading edge of a wing is important because airflow tends to “follow a curved surface,” Babinsky says. He says he likes to demonstrate this concept by pointing a hair dryer at the rounded edge of a bucket. The airflow will attach to the bucket’s curved surface and make a turn, potentially even snuffing out a candle on the other side that’s blocked by the bucket. Here’s a charming old video that appears to demonstrate the same idea. “Once the flow attaches itself to the curved surface, it likes to stay attached—[although] it will not stay attached forever,” he notes.

With a wing—and picture it angled up somewhat, like your hand out the window of the car—what happens is that the air encounters the rounded leading edge. “On the upper surface, the air will attach itself, and bend round, and actually follow that incidence, that angle of attack, very nicely,” he says. 

Keep things low-pressure

Ultimately, what happens is that the air moving over the top of the wing attaches to the curved surface and turns, or flows downward somewhat: a low-pressure area forms, and the air also travels faster. Meanwhile, the air is hitting the underside of the wing, like the wind hits your hand as it sticks out the car window, creating a high-pressure area. Voila: the wing has a low-pressure area above it, and higher pressure below. “The difference between those two pressures gives us lift,” Babinsky says. 

Babinsky notes that more work is being done by that lower pressure area above the wing than the higher pressure one below the wing. You can think of the wing as deflecting the air flow downwards on both the top and bottom. On the lower surface of the wing, the deflection of the flow “is actually smaller than the flow deflection on the upper surface,” he notes. “Most airfoils, a very, very crude rule of thumb would be that two-thirds of the lift is generated there [on the top surface], sometimes even more,” Babinsky says.

Can you bring it all together for me one last time?

Sure! Gloria Yamauchi, an aerospace engineer at NASA’s Ames Research Center, puts it this way. “So we have an airplane, flying through the air; the air approaches the wing; it is turned by the wing at the leading edge,” she says. (By “turned,” she means that it changes direction, like the way a car plowing down the road forces the air to change its direction to go around it.) “The velocity of the air changes as it goes over the wing’s surface, above and below.” 

“The velocity over the top of the wing is, in general, greater than the velocity below the wing,” she continues, “and that means the pressure above the wing is lower than the pressure below the wing, and that difference in pressure generates an upward lifting force.”
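
To hang rough numbers on that picture, aerodynamicists bundle all of those pressure effects into a single lift coefficient and use the standard lift equation, L = ½ ρ v² S C_L. The sketch below is a back-of-the-envelope illustration; the density, speed, wing area, and lift coefficient are assumed, airliner-like values rather than figures from this article.

```python
# Back-of-the-envelope lift estimate with the standard lift equation:
#   L = 0.5 * rho * v**2 * S * C_L
# All numbers below are illustrative assumptions for a mid-size airliner.

rho = 0.38    # air density at ~35,000-ft cruise, kg/m^3 (assumed)
v = 230.0     # cruise speed, m/s (assumed)
S = 125.0     # wing area, m^2 (assumed)
C_L = 0.5     # lift coefficient in cruise (assumed)

lift_newtons = 0.5 * rho * v**2 * S * C_L
lift_tonnes = lift_newtons / 9.81 / 1000   # mass the lift can support

print(f"Lift: {lift_newtons / 1000:.0f} kN, supporting ~{lift_tonnes:.0f} tonnes")
```

With these assumed values the wings support roughly 60-plus tonnes, which is in the right ballpark for a single-aisle jet in level cruise, where lift equals weight.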

Is your head constantly spinning with outlandish, mind-burning questions? If you’ve ever wondered what the universe is made of, what would happen if you fell into a black hole, or even why not everyone can touch their toes, then you should be sure to listen and subscribe to Ask Us Anything, a podcast from the editors of Popular Science. Ask Us Anything hits Apple, Anchor, Spotify, and everywhere else you listen to podcasts every Tuesday and Thursday. Each episode takes a deep dive into a single query we know you’ll want to stick around for.

This story has been updated. It was originally published in July, 2022.

The post Let’s talk about how planes fly appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Throwing the perfect football spiral is a feat in science https://www.popsci.com/science/how-to-throw-a-football-spiral/ Mon, 06 Feb 2023 18:38:32 +0000 https://www.popsci.com/?p=510229
Super Bowl-qualifying Philadelphia Eagles quarterback Jalen Hurts throws a perfect football spiral
While the basic mechanics of throwing a perfect football spiral are the same, some quarterbacks, like Philadelphia Eagles' Jalen Hurts, put their own spin on it. Mitchell Leff/Getty Images

Football players don’t break the laws of physics—they take advantage of them. And you can too.

The post Throwing the perfect football spiral is a feat in science appeared first on Popular Science.

]]>
Super Bowl-qualifying Philadelphia Eagles quarterback Jalen Hurts throws a perfect football spiral
While the basic mechanics of throwing a perfect football spiral are the same, some quarterbacks, like Philadelphia Eagles' Jalen Hurts, put their own spin on it. Mitchell Leff/Getty Images

It’s Super Bowl LVII time, and this year the Philadelphia Eagles are squaring off against the Kansas City Chiefs for the championship title. While the Chiefs are returning for their third final in four years, the odds slightly favor the Eagles, who have kept a strong and consistent offensive line all season, led by quarterback Jalen Hurts. But the Chiefs could defy the odds if quarterback Patrick Mahomes fully recovers from an ankle sprain he sustained more than a week ago against the Cincinnati Bengals. 

[Related: We calculated how much sweat will come out of the Super Bowl]

Ultimately, the game could come down to every single throw. Mahomes has already proven he can hit his mark in most circumstances: His football spirals are the “closest we’ll see to breaking the law of physics,” says Chad Orzel, an associate professor of physics and astronomy at Union College in New York. “He manages to make some amazing passes from bizarre positions that wouldn’t look like they would produce anything good.” Hurts has also leveled up his game this season through “meteoric improvements” in his throws.

Throwing the perfect football spiral might seem like something reserved for Super Bowl quarterbacks. But with some practice and science know-how, you too can chuck up the perfect spiral.

Why do football players throw spirals?

Unlike a baseball or basketball, an American football relies on a spiral rotation because of its prolate spheroid shape. If you make the ball spin fast enough, it will keep spinning around the axis it’s pointing along and hit the intended target straight-on, Orzel says. This follows the conservation of angular momentum: a spinning object preserves its rotational state unless an external torque acts on it. 

Think of a spinning top. When you twist the toy and release it, it will rotate in the same direction you wound it up in, and will stay upright at that angle until an external force (like your hand) causes it to stop. “It’s the same idea with football,” explains Orzel. “If you get the ball spinning rapidly around its axis, it’s a little more likely to hold its orientation and fly through [the air] in an aerodynamic shape.” 

[Related: Hitting a baseball is the hardest skill to pull off in sports. Here’s why.]

In a game where you have seconds to pass before you get tackled or intercepted, the biggest priority is to flick the ball so its nose stays pointed along its flight path. This confers less air resistance, meaning the ball can travel farther in a straight path (as long as it doesn’t meet outside forces like strong winds), explains John Eric Goff, a professor of physics at the University of Lynchburg in Virginia and author of Gold Medal Physics: The Science of Sports. A wobbly pass will result in more air drag and take longer to reach its destination, he adds. If you have to duck a defender and then pass the ball off quickly, you will get erratic air drag, which also hurts the accuracy of the throw.

How to throw a football spiral

To get a great spiral, you need to master angular momentum, which involves a few key physical factors. First, a person’s grip on the laces applies torque to the ball—a twisting force that makes an object rotate about its axis. In other words, the friction from the fingers gives the ball the traction it needs to spin. 

Second, you need to perfectly balance the frictional force on the ball and the forward force needed to give the ball velocity. This requires strong core muscles to rotate the body all the way through the shoulder and increase throwing power. “Tom Brady used to practice drills where he would rotate his torso quickly to help develop fast-twitching muscles in his core,” says Goff. 

Third, the hand must also be on the back of the ball to give it forward velocity, but not so far back that it prevents the torque needed for the spin. “A typical NFL spiral rotates at around 600 rotations per minute, which is the low end of a washing machine’s rotational rate and about 30 percent greater rotation rate than that of a helicopter’s rotor blades,” adds Goff. “Pass speeds are typically in the range 45 to 60 mph—the same range for cars entering and driving on highways.” For maximum force, pull the ball back to your ear just above your armpit, then release it with your elbow fully extended. Your wrist should point down at the end of the pass.
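
To get a feel for Goff’s figures, here is a quick unit conversion and an order-of-magnitude estimate of the ball’s spin angular momentum. The ball is modeled as a solid prolate spheroid spinning about its long axis (a simplification), and the mass and short-axis radius are approximate regulation values rather than numbers from the article.

```python
import math

# Convert the quoted 600 rpm and a mid-range 50-mph pass to SI units,
# then estimate the spin angular momentum. Spheroid model: I = 2/5 * m * r^2
# about the long axis; mass and radius are approximate, assumed values.

rpm = 600.0                       # quoted spiral rate
omega = rpm * 2 * math.pi / 60    # angular speed, rad/s
v = 50 * 0.44704                  # 50 mph in m/s

m = 0.41                          # kg, approximate ball mass
r = 0.085                         # m, approximate short-axis radius
I = 0.4 * m * r**2                # spin moment of inertia (spheroid model)
L_spin = I * omega

print(f"Spin rate: {omega:.0f} rad/s")
print(f"Pass speed: {v:.1f} m/s")
print(f"Spin angular momentum: {L_spin:.4f} kg*m^2/s")
```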

Knowing the physics behind a football spiral is only half of the battle. Both physicists emphasize the importance of practice. Practice can be as simple as watching videos of pro footballers, studying their technique using computer simulations, and playing a game of catch at the park with friends. 

Achieving a perfect spiral is challenging but doable. Even your favorite NFL quarterback might have started with a clumsy first toss. But with practice, they’ve become the ideal throwing machines we cheer for every year. 

The post Throwing the perfect football spiral is a feat in science appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Why shooting cosmic rays at nuclear reactors is actually a good idea https://www.popsci.com/science/nuclear-reactor-3d-imaging/ Fri, 03 Feb 2023 19:00:00 +0000 https://www.popsci.com/?p=509775
Marcoule Nuclear Power Plant in France. Workers in protective gear heating glowing nuclear reactor.
The Marcoule Nuclear Power Plant in France was decommissioned in the 1980s. The French government has been trying to take down the structures since, including the G2 reactor. Patrick Robert/Sygma/CORBIS/Sygma via Getty Images

Muons, common and mysterious particles that beam down from space, can go where humans can't. That can be useful for nuclear power plants.

The post Why shooting cosmic rays at nuclear reactors is actually a good idea appeared first on Popular Science.

]]>
Marcoule Nuclear Power Plant in France. Workers in protective gear heating glowing nuclear reactor.
The Marcoule Nuclear Power Plant in France was decommissioned in the 1980s. The French government has been trying to take down the structures since, including the G2 reactor. Patrick Robert/Sygma/CORBIS/Sygma via Getty Images

The electron is one of the most common bits of matter around us—every complete atom in the known universe has at least one. But the electron has far rarer and shadier counterparts, one of them being the muon. We may not think much about muons, but they’re constantly hailing down on Earth’s surface from the edge of the atmosphere. 

Muons can pass through vast spans of bedrock that electrons can’t cross. That’s good luck for scientists, who can collect the more elusive particles to paint images of objects as if they were X-rays. In the last several decades, they’ve used muons to pierce the veils of erupting volcanoes and peer into ancient tombs, but only in two dimensions. The few three-dimensional images have been limited to small objects.

That’s changing. In a paper published in the journal Science Advances today, researchers have created a fully 3D muon image of a nuclear reactor the size of a large building. The achievement could give experts new, safer ways of inspecting old reactors or checking in on nuclear waste.

“I think, for such large objects, it’s the first time that it’s purely muon imaging in 3D,” says Sébastien Procureur, a nuclear physicist at the Université Paris-Saclay in France and one of the study authors.

[Related: This camera can snap atoms better than a smartphone]

Muon imaging is only possible with the help of cosmic rays. Despite their sunny name, most cosmic rays are the nuclei of hydrogen or helium atoms, descended to Earth from distant galaxies. When they strike our atmosphere, they burst into an incessant rainstorm of radiation and subatomic particles.

Inside the rain is a muon shower. Muons are heavier—about 206 times more massive—than their electron siblings. They’re also highly unstable: On average, each muon lasts for about a millionth of a second. Thanks to relativistic time dilation, though (a muon moving near light speed ages far more slowly from our vantage point), that’s still long enough for around 10,000 of the particles to strike every square meter of Earth per minute.
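
A microsecond-scale lifetime shouldn’t, at face value, let a particle cross dozens of miles of atmosphere; special relativity supplies the missing ingredient. A rough calculation, assuming a typical few-GeV muon energy (an assumed value, not one from the article):

```python
# Why do muons reach the ground at all? A muon's ~2.2-microsecond
# lifetime is measured in its own frame; at cosmic-ray energies,
# time dilation stretches that lifetime enormously in ours.
# The 4 GeV energy below is a typical, assumed sea-level value.

c = 3.0e8            # speed of light, m/s
tau = 2.2e-6         # muon proper lifetime, s
m_mu_mev = 105.7     # muon rest energy, MeV
E_mev = 4000.0       # assumed typical muon energy, MeV

gamma = E_mev / m_mu_mev                 # Lorentz factor
mean_range_km = gamma * c * tau / 1000   # mean distance before decay (v ~ c)

print(f"Lorentz factor: {gamma:.0f}")
print(f"Mean distance before decay: ~{mean_range_km:.0f} km")
```

That works out to roughly 25 kilometers, comfortably more than the depth of atmosphere the muons must cross.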

Muons are also more energetic than electrons, and because of their much greater mass they shed energy far more slowly as they pass through matter. They can penetrate the seemingly impenetrable, such as rock more than half a mile deep. Scientists can catch those muons with specially designed detectors and count them. More muons striking from a certain direction might indicate a hollow space lying that way. 

In doing so, they can gather data on spaces where humans cannot tread. In 2017, for instance, researchers discovered a hidden hollow deep inside Khufu’s Great Pyramid in Giza, Egypt. After a tsunami ravaged the Fukushima Daiichi nuclear power station in 2011, muons allowed scientists to gauge the damage from a safe distance. Physicists have also used muons to check nuclear waste casks without risking leakage while opening them up.

However, taking a muon image comes with some downsides. For one, physicists have no control over how many muons drizzle down from the sky, and the millions that hit Earth each day aren’t actually very many in the grand scheme of things. “It can take several days to get a single image in muography,” says Procureur. “You have to wait until you have enough.”
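
That waiting game follows directly from the flux quoted above. A back-of-the-envelope sketch (the detector area, the fraction of muons surviving the shielding, and the counts needed per image are all assumed for illustration) lands in the same several-day territory:

```python
# How long does a muon "exposure" take? Start from the article's flux of
# roughly 10,000 muons per square meter per minute at the surface.
# Detector area, shielding survival fraction, and counts needed for a
# low-noise image are all illustrative assumptions.

flux = 10_000 / 60       # muons per m^2 per second at the surface
area = 1.0               # detector area, m^2 (assumed)
survival = 0.01          # fraction passing thick concrete (assumed)
counts_needed = 1e6      # muons for a statistically useful image (assumed)

seconds = counts_needed / (flux * area * survival)
print(f"Exposure time: ~{seconds / 86400:.0f} days")   # ~7 days
```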

Typically, muon imagers take their snapshots with a detector that counts how many muons are striking it from what directions. But with a single machine, you can only tell that a hollow space exists—not how far away it lies. This limitation leaves most muon images trapped in two dimensions. That means if you scan a building’s facade, you might see the individual rooms, but not the layout. If you want to explore a space in great detail, the lack of a third dimension is a major hurdle.

In theory, by taking muon images from different perspectives, you can stitch them together into a 3D reconstruction. This is what radiologists do with X-rays. But while it’s easy to take hundreds of X-ray images from different angles, it’s far more tedious and time-consuming to do so with muons. 

The 3D muon images of the G2 nuclear reactor. Procureur et al., Sci. Adv. 9, eabq8431 (2023)

Still, Procureur and his colleagues gave it a go. The site in question was an old reactor at Marcoule, a nuclear power plant and research facility in the south of France. G2, as it’s called, was built in the 1950s. In 1980, the reactor shut down for good; since then, French nuclear authorities have slowly removed components from the building. Now, preparing to terminally decommission G2, they wanted to conduct another safety check of the structures inside. “So they contacted us,” says Procureur.

Scientists had taken 3D muon images of small objects like tanks before, but G2—located inside a concrete cylinder the size of a small submarine and fitted inside a metal-walled building the size of an aircraft hangar—required penetrating a lot more layers and area.

Fortunately, this cylinder left enough space for Procureur and his colleagues to set up four gas-filled detectors at strategic points around and below the reactor. Moving the detectors around, they were able to essentially snap a total of 27 long-exposure muon images, each one taking days on end to capture.

[Related: Nuclear power’s biggest problem could have a small solution]

But the tricky part, Procureur says, wasn’t actually setting up the muon detectors or even letting them run: It was piecing together the image afterward. To get the process started, the team adapted an algorithm used for stitching together anatomical images in a medical clinic. Though the process was painstaking, they succeeded. In their final images, they could pluck out objects as small as cooling pipes about two-and-a-half feet in diameter.

“What’s significant is they did it,” says Alan Bross, a physicist at Fermilab in suburban Chicago, who wasn’t involved with this research. “They built the detectors, they went to the site, and they took the data … which is really involved.”

The effort, Procureur says, was only a proof of concept. Now that they know what can be accomplished, they’ve decided to move on to a new challenge: imaging nuclear containers at other locations. “The accuracy will be significantly better,” Procureur notes.

Even larger targets may soon be on the horizon. Back in Giza, Bross and some of his colleagues are working to scan the Great Pyramid in three dimensions. “We’re basically doing the same technique,” he explains, but on a far more spectacular scale.

The post Why shooting cosmic rays at nuclear reactors is actually a good idea appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Cosmic cartographers release a more accurate map of the universe’s matter https://www.popsci.com/science/universe-matter-map/ Wed, 01 Feb 2023 14:00:00 +0000 https://www.popsci.com/?p=509007
Two giant, circular ground telescopes with an overlay of a starry night sky.
Scientists have released a new survey of all the matter in the universe, using data taken by the Dark Energy Survey in Chile and the South Pole Telescope. Andreas Papadopoulos

It’s another step in understanding our 13 billion year-old universe.

The post Cosmic cartographers release a more accurate map of the universe’s matter appeared first on Popular Science.

]]>
Two giant, circular ground telescopes with an overlay of a starry night sky.
Scientists have released a new survey of all the matter in the universe, using data taken by the Dark Energy Survey in Chile and the South Pole Telescope. Andreas Papadopoulos

When the universe first began about 13 billion years ago, all of the matter that eventually formed the galaxies, stars, and planets of today was flung around like paint splattering from a paintbrush. 

Now, an international group of over 150 scientists and researchers have released some of the most precise measurements ever made of how all of this matter is distributed across the universe. With a map of that matter in the present, scientists can try to understand the forces that shaped the evolution of the universe.

[Related: A key part of the Big Bang remains troublingly elusive.]

The team combined data from the Dark Energy Survey (DES) and the South Pole Telescope, which conducted two major telescope surveys of the present universe. The analysis was published in the journal Physical Review D as three articles on January 31.

In the analysis, the team found that matter isn’t as “clumpy” as previously believed, adding to a body of evidence that something might be missing from the existing standard model of the universe.

By tracing the path of this matter to see where everything ended up, scientists can try to recreate what happened during the Big Bang and what forces were needed for such a massive explosion. 

To create this map, an enormous amount of data was analyzed from the DES and South Pole Telescope. The DES surveyed the night sky for six years from atop a mountain in Chile, while the South Pole Telescope scoured the universe for faint traces of traveling radiation that date back to the first moments of our universe.

By overlaying maps of the sky from the Dark Energy Survey telescope (at left) and the South Pole Telescope (at right), the team could assemble a map of how the matter is distributed—crucial to understand the forces that shape the universe. CREDIT: Yuuki Omori

Scientists were able to infer where all of the universe’s matter ended up and are offering a more accurate matter map by rigorously analyzing both data sets. “It is more precise than previous measurements—that is, it narrows down the possibilities for where this matter wound up—compared to previous analyses,” the authors said.

Combining two different skygazing methods reduced the chance of a measurement error throwing off the results. “It functions like a cross-check, so it becomes a much more robust measurement than if you just used one or the other,” said co-author Chihway Chang, an astrophysicist from the University of Chicago, in a statement

The analyses looked at gravitational lensing, which occurs when light traveling across the universe is slightly bent as it passes massive objects, like galaxies, whose gravity warps its path. 
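
The bending itself follows Einstein’s deflection formula, alpha = 4GM/(c²b), for light passing a mass M at a closest distance b. A quick illustration, with the galaxy’s mass and the ray’s closest approach chosen as round, assumed values:

```python
import math

# Einstein's light-bending formula: a ray passing a mass M at impact
# parameter b is deflected by alpha = 4*G*M / (c**2 * b) radians.
# The galaxy mass and impact parameter below are round, assumed values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

# Light grazing the sun (the classic 1919 eclipse measurement):
alpha_sun = 4 * G * M_sun / (c**2 * 6.96e8)
print(f"Sun: {math.degrees(alpha_sun) * 3600:.2f} arcseconds")

# Light passing ~50,000 light-years from a trillion-solar-mass galaxy:
M_gal = 1e12 * M_sun
b_gal = 50_000 * 9.461e15    # light-years to meters
alpha_gal = 4 * G * M_gal / (c**2 * b_gal)
print(f"Galaxy: {math.degrees(alpha_gal) * 3600:.0f} arcseconds")
```

The solar case reproduces the famous 1.75-arcsecond deflection; galaxy-scale lensing is of the same order, which is why it takes precise surveys to measure.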

Both regular matter and dark matter can be detected by this method. Dark matter is an invisible form of matter that makes up most of the universe’s mass, but it is so mysterious that scientists know more about what it isn’t than what it is. It doesn’t emit light, so it can’t be stars or planets, but it also isn’t a bunch of black holes. 

While most of the results fit perfectly with the currently accepted best theory of the universe, there are some signs of a crack in the theory.

“It seems like there are slightly less fluctuations in the current universe, than we would predict assuming our standard cosmological model anchored to the early universe,” said analysis coauthor and University of Hawaii astrophysicist Eric Baxter, in a statement.

Even if something is missing from today’s matter models, the team believes that using information from two different telescope surveys is a promising strategy for the future of astrophysics.

“I think this exercise showed both the challenges and benefits of doing these kinds of analyses,” Chang said. “There’s a lot of new things you can do when you combine these different angles of looking at the universe.”

The post Cosmic cartographers release a more accurate map of the universe’s matter appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Watch this metallic material move like the T-1000 from ‘Terminator 2’ https://www.popsci.com/technology/magnetoactive-liquid-metal-demo/ Wed, 25 Jan 2023 22:00:00 +0000 https://www.popsci.com/?p=507689
Lego man liquid-metal model standing in mock jail cell
Hmm. This scene looks very familiar. Wang and Pan, et al.

A tiny figure made from the magnetoactive substance can jailbreak by shifting phases.

The post Watch this metallic material move like the T-1000 from ‘Terminator 2’ appeared first on Popular Science.

]]>
Lego man liquid-metal model standing in mock jail cell
Hmm. This scene looks very familiar. Wang and Pan, et al.

Sci-fi film fans are likely very familiar with that scene in Terminator 2 when Robert Patrick’s slick, liquid-metal T-1000 robot easily congeals itself through the metal bars of a security door. It’s an iconic set piece that relied on then-cutting-edge computer visual effects—that’s sort of director James Cameron’s thing, after all. But researchers recently developed a novel substance capable of recreating a variation on that ability. With more experimentation and fine-tuning, this new “magnetoactive solid-liquid phase transitional machine” could provide a host of tools for everything from construction repair to medical procedures.

[Related: ‘Avatar 2’s high-speed frame rates are so fast that some movie theaters can’t keep up.]

So far, researchers have been able to make their substance “jump” over moats, climb walls, and even split into two cooperative halves to move around an object before reforming back into a single entity, as detailed in a new study published on Wednesday in Matter. In a cheeky video featuring some strong T2 callbacks, a Lego man-shaped mold of the magnetoactive solid-liquid can even be seen liquifying and moving through tiny jail cell bars before reforming into its original structure. If that last part seems a bit impossible, well, it is. For now.

“There is some context to the video. It [looks] like magic,” Carmel Majidi, a senior author and mechanical engineer at Carnegie Mellon, explains to PopSci with a laugh. According to Majidi, everything leading up to the model’s reformation is as it appears—the shape does liquify before being drawn through the mesh barrier via alternating electromagnetic currents. From there, however, someone pauses the camera to recast the mold into its original shape.

But even without the little cinema history gag, Majidi explains that he and his colleagues’ new material could have major benefits in a host of situations. The team, made up of experts from The Chinese University of Hong Kong and Carnegie Mellon University, has created a “phase-shifting” material by embedding magnetic particles within gallium, a metal with an extremely low melting point of just 29.8 C, or roughly 85 F. To melt it, the magnetically infused gallium is exposed to an alternating magnetic field, which generates heat through induction. Changing the magnetic field’s path can likewise steer the liquified form, which stays far less viscous than similar phase-changing materials.
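
The energy cost of that phase change is modest, which is part of what makes induction heating practical here. A rough estimate using approximate handbook values for gallium; the figure’s mass and starting temperature are assumptions:

```python
# Energy to liquify a small gallium figure: sensible heat plus latent
# heat, Q = m*c*dT + m*L_f. Material constants are approximate handbook
# values; mass and starting temperature are assumed.

m = 0.010          # kg, assumed ~10-gram figure
c_ga = 370.0       # J/(kg*K), specific heat of solid gallium (approx.)
L_f = 80_000.0     # J/kg, latent heat of fusion of gallium (approx.)
T_start = 20.0     # deg C, assumed room temperature
T_melt = 29.8      # deg C, gallium's melting point (from the article)

Q = m * c_ga * (T_melt - T_start) + m * L_f
print(f"Energy to melt: ~{Q:.0f} J")   # roughly 840 J for these assumptions
```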

[Related: Acrobatic beetle bots could inspire the latest ‘leap’ in agriculture.]

“There’s been a tremendous amount of work on these soft magnetic devices that could be used for biomedical applications,” says Majidi. “Increasingly, those materials [could be] used for diagnostics, drug delivery… [and] recovering or removing foreign objects.”

The latest variation from Majidi and his colleagues, however, stands apart from those amorphous blobs of similar substances. “What this endows those systems with is their ability to now change stiffness and change shape, so they can now have even greater mobility within that context.”

[Related: Boston Dynamics’s bipedal robots can throw heavy objects now.]

Majidi cautions, however, that any deployment in doctors’ offices is still far down the road. In the meantime, the material is much closer to being deployed in situations such as circuit assembly and repair, where it could ooze into hard-to-reach areas before congealing to act as both conductor and solder.

Further testing is needed to determine the substance’s biocompatibility in humans, but Majidi argues that it’s not hard to imagine patients one day entering an MRI-like machine that can guide ingested versions of the material during medical procedures. For now, however, it looks like modern technology is at least one step closer to catching up with Terminator 2’s visual effects wizardry from over 30 years ago.

The post Watch this metallic material move like the T-1000 from ‘Terminator 2’ appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The Earth’s inner core could be slowing its spin—but don’t panic https://www.popsci.com/science/earth-core-spin/ Tue, 24 Jan 2023 16:00:00 +0000 https://www.popsci.com/?p=507356
The planet's innermost core has a rhythm of its own.
The planet's innermost core has a rhythm of its own. NASA

We could be in the middle of a big shift in how the center of the Earth rotates.

The post The Earth’s inner core could be slowing its spin—but don’t panic appeared first on Popular Science.

]]>
The planet's innermost core has a rhythm of its own.
The planet's innermost core has a rhythm of its own. NASA

In elementary school science class, we learned that the Earth has three main layers: the crust, the mantle, and the core. In reality, the core—which is over 4,000 miles wide—has two layers of its own: a liquid outer core and a solid, dense inner core, made mostly of iron, that actually rotates.

A study published January 23 in the journal Nature Geoscience finds that this rotation may have paused recently—and could possibly be reversing. The team from Peking University in China believes these findings indicate that changes in the rotation occur on a decadal scale, and that they could help us understand how processes deep beneath the Earth affect its surface.

[Related: A rare gas is leaking from Earth’s core. Could it be a clue to the planet’s creation?]

The Earth’s inner core is separated from the rest of the solid Earth by its liquid outer core, so it rotates at a different pace and direction than the planet itself. A magnetic field created by the outer core generates the spin and the mantle’s gravitational effects balance it out. Understanding how the inner core rotates could shed light on how all of the Earth’s layers interact.

In this study, seismologists Yi Yang and Xiaodong Song looked at seismic waves. They analyzed differences in the waveforms and travel times of waves created by near-identical earthquakes that have passed along similar paths through the Earth’s inner core since the 1960s, focusing in particular on earthquakes that struck between 1995 and 2021.
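
In essence, the technique boils down to measuring tiny travel-time shifts between repeating waveforms. Here is a toy sketch of that idea using synthetic signals and cross-correlation; it is not the authors’ data or code, just an illustration of how a sub-second shift can be recovered:

```python
import numpy as np

# Toy "doublet" analysis: recover the relative time shift between two
# near-identical waveforms by cross-correlation. The signals below are
# synthetic stand-ins for real seismograms.

dt = 0.01                          # sample interval, s (100 Hz)
t = np.arange(0, 20, dt)
pulse = np.exp(-((t - 10) ** 2) / 0.5) * np.sin(2 * np.pi * 1.0 * t)

shift_true = 0.07                  # s, imposed travel-time change
wave_early = pulse
wave_late = np.interp(t - shift_true, t, pulse)   # same pulse, delayed

corr = np.correlate(wave_late, wave_early, mode="full")
lag = (np.argmax(corr) - (len(t) - 1)) * dt
print(f"Recovered shift: {lag:.2f} s")   # ~0.07 s
```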

Before 2009, the inner core appeared to be rotating slightly faster than the surface and mantle, but the rotation began slowing down and paused around 2009. Looking down at the core now wouldn’t reveal any spinning since the inner core and surface are spinning at roughly the same rate.

“That means it’s not a steady rotation as was originally reported some 20 years ago, but it’s actually more complicated,” Bruce Buffett, a professor of earth and planetary science at the University of California, Berkeley, told the New Scientist.

[Related: Scientists wielded giant lasers to simulate an exoplanet’s super-hot core.]

Additionally, the team believes that this could be associated with a reversal of the inner core rotation on a seven-decade schedule. They believe that a previous turning point occurred in the early 1970s and say that this variation does correlate with small changes in geophysical observations at the Earth’s surface, such as the length of a day or changes in magnetic fields.

The authors conclude that this fluctuation in the inner core’s rotation, coinciding as it does with periodic changes in the Earth’s surface system, demonstrates the interactions between Earth’s different layers.

However, scientists are debating the speed of the rotation and whether it varies. This new theory is just one of several models explaining the rotation. “It’s weird that there’s a solid iron ball kind of floating in the middle of the Earth,” John Vidale, a seismologist at the University of Southern California who was not involved with the study, told The New York Times. “No matter which model you like, there’s some data that disagrees with it.”

Since studying the inner core is very difficult and physically going there is almost impossible (unless you’re famed sci-fi author Jules Verne), what’s really going on in the Earth’s core could always remain a mystery.

The post The Earth’s inner core could be slowing its spin—but don’t panic appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The best—and worst—places to shelter after a nuclear blast https://www.popsci.com/science/how-to-survive-a-nuclear-bomb-shockwave/ Fri, 20 Jan 2023 16:53:24 +0000 https://www.popsci.com/?p=506575
Nuclear shelter basement sign on brick building to represent survival tips for a nuclear blast
Basements work well as nuclear shelters as long as they don't have many external openings. Deposit Photos

Avoid windows, doors, and long hallways at all costs.

The post The best—and worst—places to shelter after a nuclear blast appeared first on Popular Science.

]]>
Nuclear shelter basement sign on brick building to represent survival tips for a nuclear blast
Basements work well as nuclear shelters as long as they don't have many external openings. Deposit Photos

In the nightmare scenario of a nuclear bomb blast, you might picture a catastrophic fireball, a mushroom cloud rising into an alien sky overhead, and a pestilent rain of toxic fallout in the days to come. All of these are real, and all of them can kill.

But just as real, and every bit as deadly, is the air blast that comes just instants after. When a nuke goes off, it usually creates a shockwave. That front tears through the air at supersonic speed, shattering windows, demolishing buildings, and causing untold damage to human bodies—even miles from the point of impact.

[Related: How to protect yourself from nuclear radiation]

So, you’ve just seen the nuclear flash, and know that an air blast is soon to follow. You’ve only got seconds to hide. Where do you go?

To help you find the safest spot in your home, two engineers from Cyprus simulated which spaces made winds from a shockwave move more violently—and which spaces slowed them down. Their results were published on January 17 in the journal Physics of Fluids.

During the feverish nuclear paranoia of the Cold War, plenty of scientists studied what nuclear war would do to a city or the world. But most of their research focused on factors like the fireball or the radiation or simulating a nuclear winter, rather than an individual air blast. Moreover, 20th-century experts lacked the sophisticated computational capabilities that their modern counterparts can use. 

“Very little is known about what is happening when you are inside a concrete building that has not collapsed,” says Dimitris Drikakis, an engineer at the University of Nicosia and co-author of the new paper. 

[Related: A brief but terrifying history of nuclear weapons]

The advice that he and his colleague Ioannis W. Kokkinakis came up with doesn’t apply to the immediate vicinity of a nuclear blast. If you’re within a shout of ground zero, there’s no avoiding it—you’re dead. Even some distance away, the nuke will bombard you with a bright flash of thermal radiation: a torrent of light, infrared, and ultraviolet that could blind you or cause second- or third-degree burns.

But as you move farther away from ground zero, far enough that the thermal radiation might leave you with minor injuries at most, the airburst will leave most structures standing. The winds will only be equivalent to a very strong hurricane. That’s still deadly, but with preparation, you might just make it.

Drikakis and Kokkinakis constructed a one-story virtual house and simulated the winds striking it in two different shockwave scenarios—one with overpressure well above standard air pressure, and one even stronger. Based on their simulations, here are the best—and worst—places to go during a nuclear war.

Worst: by a window

If you catch a glimpse of a nuclear flash, your first instinct might be to run to the nearest window to see what’s just happened. That would be a mistake, as you’d be in the prime place to be hit by the ensuing air blast.

If you stand right in a window facing the blast, the authors found, you might face winds over 300 miles per hour—enough to pick the average human off the ground. Depending on the exact strength of the nuke, you might then strike the wall with enough force to kill you.
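
A standard drag estimate shows why such winds are unsurvivable in the open. The frontal area and drag coefficient for a standing adult below are rough assumptions:

```python
# How hard does a 300-mph blast wind push on a person? Standard drag
# estimate: F = 0.5 * rho * v**2 * C_d * A. Frontal area and drag
# coefficient for a standing adult are rough, assumed values.

rho = 1.225          # air density at sea level, kg/m^3
v = 300 * 0.44704    # 300 mph in m/s
C_d = 1.0            # drag coefficient, standing person (assumed)
A = 0.7              # frontal area, m^2 (assumed)

force = 0.5 * rho * v**2 * C_d * A
g_equiv = force / (75 * 9.81)    # compare to the weight of a 75-kg person

print(f"Force: ~{force / 1000:.1f} kN, about {g_equiv:.0f}x body weight")
```

For these assumptions the push is several kilonewtons, around ten times a person’s weight, which is why the winds can simply pick you up.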

Surprisingly, there are more dangerous places in the house when it comes to top wind speed (more on that later). But what really helps make a window deadly is the glass. As it shatters, you’ll be sprayed in the face by high-velocity shards.

Bad: a hallway

You might imagine that you can escape the airblast by retreating deeper into your building. But that’s not necessarily true. A window can act as a funnel for rushing air, turning a long hallway into something like a wind tunnel. Doors can do the same. 

The authors found that winds would throw an average-sized human standing in the corridor nearly as far as one standing by the front window. Intense winds can also pick up glass shards and loose objects from the floor or furniture and send them hurtling as fast as a shot from a musket, the simulations showed.

Better: a corner

Not everywhere in the house is equally deadly. The authors found that, as the nuclear shockwave passed through a room, the highest winds tended to miss the room’s edges and corners. 

Therefore, even if you’re in an otherwise dangerous room, you can protect yourself from the worst of the impact by finding a corner and bracing yourself in. The key, again, is to avoid doors and windows.

“Wherever there are no openings, you have better chances to survive,” says Drikakis. “Essentially, run away from the openings.”

Best: a corner of an interior room

The best place to hide out is in the corner of a small room as far inside the building as possible. A closet that lacks any openings, for example, is ideal.

The “good” news is that the peak of the blast lasts just a moment. The most furious winds will pass in less than a second. If you can survive that, you’ll probably stay alive—as long as you’re not in the path of the radioactive fallout.

These tips for sheltering can be useful in high-wind disasters across the board. (The US Centers for Disease Control currently advises those who cannot evacuate before a hurricane to avoid windows and find a closet.) But the authors stress that the risk of nuclear war, while low, has certainly not disappeared. “I think we have to raise awareness to the international community … to understand that this is not just a joke,” says Drikakis. “It’s not a Hollywood movie.”

The post The best—and worst—places to shelter after a nuclear blast appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Physicists figured out a recipe to make titanium stardust on Earth https://www.popsci.com/science/stardust-titanium-tools/ Fri, 13 Jan 2023 19:00:00 +0000 https://www.popsci.com/?p=505062
Cosmic dust on display in Messier 98 galaxy.
Spiral galaxy Messier 98 showcases its cosmic dust in this Hubble Space Telescope image. NASA / ESA / Hubble / V. Rubin et al

The essential ingredients are carbon atoms, titanium, and a good coating of graphite.

The post Physicists figured out a recipe to make titanium stardust on Earth appeared first on Popular Science.

]]>
Cosmic dust on display in Messier 98 galaxy.
Spiral galaxy Messier 98 showcases its cosmic dust in this Hubble Space Telescope image. NASA / ESA / Hubble / V. Rubin et al

Long ago—before humans, before Earth, before even the sun—there was stardust.

In time, the young worlds of the solar system would eat up much of that dust as those bodies ballooned into the sun, planets, and moons we know today. But some of the dust survived, pristine, in its original form, locked in places like ancient meteorites.

Scientists call this presolar dust, since it formed before the sun. Some grains of presolar dust contain tiny bits of carbon, like diamond or graphite; others contain a host of other elements such as silicon or titanium. One form contains a curious and particularly hardy material called titanium carbide, used in machine tools on Earth. 

Now, physicists and engineers think they have an idea of how those particular dust grains formed. In a study published today in the journal Science Advances, researchers believe they could use that knowledge to build better materials here on Earth.

These dust grains are extremely rare and extremely minuscule, often smaller than the width of a human hair. “They were present when the solar system formed, survived this process, and can now be found in primitive solar system materials,” such as meteorites, says Jens Barosch, an astrophysicist at the Carnegie Institution for Science in Washington, DC, who was not an author of the study.

[Related: See a spiral galaxy’s haunting ‘skeleton’ in a chilly new space telescope image]

The study authors peered into a unique kind of dust grain with a core of titanium carbide—titanium and carbon, combined into durable, ceramic-like material that’s nearly as hard as diamond—wrapped in a shell of graphite. Sometimes, tens or even hundreds of these carbon-coated cores clump together into larger grains.

But how did titanium carbide dust motes form in the first place? So far, scientists haven’t quite known for sure. Testing it on Earth is hard, because would-be dustbuilders have to deal with gravity—something that these grains didn’t have to contend with. But scientists can now go to a place where gravity is no object.

On June 24, 2019, a sounding rocket launched from Kiruna, a frigid Swedish town north of the Arctic Circle. This rocket didn’t reach orbit. Like many rockets before and since, it streaked in an arc across the sky, peaking at an altitude of about 150 miles, before coming back down.

Still, that brief flight was enough for the rocket’s components to gain more than a taste of the microgravity that astronauts experience in orbit. One of those components was a contraption inside which scientists could incubate dust grains and record the process. 

“Microgravity experiments are essential to understanding dust formation,” says Yuki Kimura, a physicist at Hokkaido University in Japan, and one of the paper’s authors.

Titanium carbide grains, seen here magnified at a scale of several hundred nanometers. Yuki Kimura

Just over three hours after launch, including six and a half minutes of microgravity, the rocket landed about 46 miles away from its launch site. Kimura and his colleagues had the recovered dust grains sent back to Japan for analysis. From this flight and follow-up tests in an Earthbound lab, the group pieced together a recipe for a titanium carbide dust grain.
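
Those six and a half minutes are roughly what simple ballistics predicts. A sketch that ignores drag above an assumed ~100-km “edge” of the dense atmosphere and uses the article’s ~150-mile apogee:

```python
import math

# Sanity check on the microgravity window: on a vertical ballistic arc,
# the payload is effectively in free fall while it coasts above the
# dense atmosphere. Drag is assumed negligible above ~100 km.

g = 9.81              # m/s^2 (treated as constant; slightly high up there)
apogee = 240e3        # m, the article's ~150-mile peak
h_drag_free = 100e3   # m, assumed edge of the dense atmosphere

dh = apogee - h_drag_free
t_microgravity = 2 * math.sqrt(2 * dh / g)   # up-and-down coast time

print(f"Microgravity window: ~{t_microgravity / 60:.1f} minutes")
# ~5.6 minutes, in the same ballpark as the quoted six and a half.
```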

[Related: Black holes have a reputation as devourers. But they can help spawn stars, too.]

That recipe might look something like this: first, start with a core of carbon atoms, in graphite form; second, sprinkle the carbon core with titanium until the two sorts of atoms start to mix and create titanium carbide; third, fuse many of these cores together and drape them with graphite until you get a good-sized grain.

It’s interesting to get a glimpse of how such ancient things formed, but astronomers aren’t the only people who care. Kimura and his colleagues also believe that understanding the process could help engineers and builders craft better materials on Earth—because we already build particles not entirely unlike dust grains.

They’re called nanoparticles, and they’ve been around for decades. Scientists can insert them into polymers like plastic to strengthen them. Road-builders can use them to reinforce the asphalt under their feet. Doctors can even insert them into the human body to deliver drugs or help image hard-to-see body parts.

Typically, engineers craft nanoparticles by growing them within a liquid solution. “The large environmental impact of this method, such as liquid waste, has become an issue,” says Kimura. Stardust, then, could help reduce that waste.

Machinists already use tools strengthened by a coat of titanium carbide nanoparticles. Just like diamond, the titanium carbide helps the tools, often used to forge things like spacecraft, cut harder. One day, stardust-inspired machine coatings might help build the very vessels humans send to space.

The post Physicists figured out a recipe to make titanium stardust on Earth appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Meet Golfi, the robot that plays putt-putt https://www.popsci.com/technology/robot-golf-neural-network-machine-learning/ Tue, 03 Jan 2023 21:00:00 +0000 https://www.popsci.com/?p=502766
Robot putting golf ball across indoor field into hole
But can Golfi sink a putt through one of those windmill obstacles, though? YouTube

The tiny bot can scoot on a green and hit a golf ball with impressive accuracy.

The post Meet Golfi, the robot that plays putt-putt appeared first on Popular Science.

]]>
Robot putting golf ball across indoor field into hole
But can Golfi sink a putt through one of those windmill obstacles, though? YouTube

The first robot to sink an impressive hole-in-one pulled off its fairway feat back in 2016. But the newest automated golfer looks like it’s coming for the short game.

First presented at the IEEE International Conference on Robotic Computing last month and subsequently highlighted by New Scientist on Tuesday, “Golfi” is the modest-sized creation of a research team at Germany’s Paderborn University, capable of autonomously locating a ball on a green, traveling to it, and successfully sinking a putt around 60 percent of the time.

To pull off its relatively accurate putts, Golfi utilizes an overhead 3D camera to scan an indoor, two-square-meter artificial putting green and find its golf ball target. It can then scoot over to the ball and use a neural network algorithm to quickly analyze approximately 3,000 potential golf swings from random points on the green, accounting for physics variables like mass, speed, and ground friction. From there, its arm offers a modest putt that sinks the ball roughly 6 or 7 times out of 10. Although not quite as good as a standard human player, it’s still a sizable feat for the machine.
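
One physics ingredient in those simulated swings is easy to sketch: on a flat green, rolling friction decelerates the ball roughly uniformly, so the minimum launch speed to cover a distance d is v = sqrt(2·mu·g·d). The friction coefficient and distance below are illustrative assumptions, not parameters from the Golfi paper:

```python
import math

# Minimum launch speed for a putt on a flat green, assuming uniform
# deceleration from rolling friction: v = sqrt(2 * mu * g * d).
# mu and d are assumed, illustrative values.

g = 9.81     # m/s^2
mu = 0.10    # effective rolling-friction coefficient (assumed)
d = 1.5      # m, ball-to-hole distance (assumed)

v_min = math.sqrt(2 * mu * g * d)   # just reaches the lip of the hole
v_aim = 1.05 * v_min                # small margin so the ball drops in

print(f"Launch speed: {v_aim:.2f} m/s (minimum {v_min:.2f} m/s)")
```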

[Related: Reverse-engineered hummingbird wings could inspire new drone designs.]

Golfi isn’t going to show up at minigolf parks anytime soon, however. The robot’s creators at Paderborn University designed their prototype to work solely in a small indoor area while connected to a wired power source. Golfi’s necessary overhead 3D camera mount also ensures it won’t make an outdoor tee time, either. That’s because, despite its name, Golfi isn’t actually designed to revolutionize the golf game. Instead, the little robot was built to showcase the benefits of combining physics-based models with machine learning programs.

It’s interesting to see Golfi’s talent in comparison to other recent robotic advancements, which have often drawn inspiration from the animal kingdom—from hummingbirds, to spiders, to dogs that just so happen to also climb up walls and across ceilings.

The post Meet Golfi, the robot that plays putt-putt appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
ISS astronauts are building objects that couldn’t exist on Earth https://www.popsci.com/science/iss-resin-manufacture-new-shapes/ Tue, 03 Jan 2023 17:00:00 +0000 https://www.popsci.com/?p=502628
A test device aboard the ISS is making new shapes beyond gravity's reach.
A test device aboard the ISS is making new shapes beyond gravity's reach. NASA

Gravity-defying spare parts are created by filling silicone skins with resin.

The post ISS astronauts are building objects that couldn’t exist on Earth appeared first on Popular Science.

]]>
A test device aboard the ISS is making new shapes beyond gravity's reach.
A test device aboard the ISS is making new shapes beyond gravity's reach. NASA

Until now, virtually everything the human race has ever built—from rudimentary tools to one-story houses to the tallest skyscrapers—has had one key restriction: Earth’s gravity. Yet, if some scientists have their way, that could soon change.

Aboard the International Space Station (ISS) right now is a metal box, the size of a desktop PC tower. Inside, a nozzle is helping build little test parts that aren’t possible to make on Earth: under the planet’s gravity, the same structures would sag and fail. 

“These are going to be our first results for a really novel process in microgravity,” says Ariel Ekblaw, a space architect who founded MIT’s Space Exploration Initiative and one of the researchers (on Earth) behind the project.

The MIT group’s process involves taking a flexible silicone skin, shaped like the part it will eventually create, and filling it with a liquid resin. “You can think of them as balloons,” says Martin Nisser, an engineer at MIT, and another of the researchers behind the project. “Instead of injecting them with air, inject them with resin.” Both the skin and the resin are commercially available, off-the-shelf products.

The resin is sensitive to ultraviolet light. When the balloons experience an ultraviolet flash, the light percolates through the skin and washes over the resin. It cures and stiffens, hardening into a solid structure. Once it’s cured, astronauts can cut away the skin and reveal the part inside.

All of this happens inside the box that launched on November 23 and is scheduled to spend 45 days aboard the ISS. If everything is successful, the ISS will ship some experimental parts back to Earth for the MIT researchers to test. The MIT researchers have to ensure that the parts they’ve made are structurally sound. After that, more tests. “The second step would be, probably, to repeat the experiment inside the International Space Station,” says Ekblaw, “and maybe to try slightly more complicated shapes, or a tuning of a resin formulation.” After that, they’d want to try making parts outside, in the vacuum of space itself. 

The benefit of building parts like this in orbit is that Earth’s single most fundamental stressor—the planet’s gravity—is no longer a limiting factor. Say you tried to make particularly long beams with this method. “Gravity would make them sag,” says Ekblaw.

[Related: The ISS gets an extension to 2030 to wrap up unfinished business]

In the microgravity of the ISS? Not so much. If the experiment is successful, their box would be able to produce test parts that are too long to make on Earth.
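
How quickly sagging gets out of hand is visible in the textbook beam formula: a uniform beam supported at both ends deflects at midspan by delta = 5wL⁴/(384EI), so the sag grows with the fourth power of length. A sketch with assumed resin properties and geometry:

```python
import math

# Why length is the enemy on Earth: midspan sag of a uniform,
# simply supported beam is delta = 5*w*L**4 / (384*E*I). Doubling the
# length multiplies the sag by 16; in microgravity, the self-weight w
# is effectively zero. Resin properties and geometry are assumed.

E = 2.0e9        # Pa, Young's modulus of a cured resin (assumed)
rho = 1100.0     # kg/m^3, resin density (assumed)
r = 0.01         # m, radius of a 2-cm-diameter rod (assumed)
g = 9.81         # m/s^2

A = math.pi * r**2          # cross-sectional area
I = math.pi * r**4 / 4      # second moment of area, circular section
w = rho * A * g             # self-weight per unit length, N/m

for L in (0.5, 1.0, 2.0):   # beam lengths, m
    sag = 5 * w * L**4 / (384 * E * I)
    print(f"L = {L:.1f} m: midspan sag = {sag * 1000:.2f} mm")
```

For these assumed values, a half-meter rod sags by a fraction of a millimeter while a two-meter rod sags by several centimeters, which is the difference between a usable part and a ruined one.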

The researchers imagine a near future where, if an astronaut needed to replace a mass-produced part—say, a nut or a bolt—they wouldn’t need to order one up from Earth. Instead, they could just fit a nut- or bolt-shaped skin into a box like this and fill it up with resin.

But the researchers are also thinking long-term. If they can make very long parts in space, they think, those pieces could speed up large construction projects, such as the structures of space habitats. They might also be used to form the structural frames for solar panels that power a habitat or radiators that keep the habitat from getting too warm.

A silicone skin that will be filled to make a truss. Rapid Liquid Printing

Building stuff in space has a few key advantages, too. If you’ve ever seen a rocket in person, you’ll know that—as impressive as they are—they aren’t particularly wide. It’s one reason that large structures such as the ISS or China’s Tiangong go up piecemeal, assembled one module at a time over years.

Mission planners today often have to spend a great deal of effort trying to squeeze telescopes and other craft into that small cargo space. The James Webb Space Telescope, for instance, has a sprawling tennis-court-sized sunshield. To fit it into its rocket, engineers had to delicately fold it up and plan an elaborate unfurling process once JWST reached its destination. Every solar panel you can assemble in Earth orbit is one less solar panel you have to stuff into a rocket. 

[Related: Have we been measuring gravity wrong this whole time?]

Another key advantage is cost. The cost of space launches, adjusted for inflation, has fallen more than 20-fold since the first Space Shuttle went up in 1981, but every pound of cargo can still cost over $1,000 to put into space. Space is now within reach of small companies and modest academic research groups, but every last ounce makes a significant price difference.

When it comes to other worlds like the moon and Mars, thinkers and planners have long thought about using the material that’s already there: lunar regolith or Martian soil, not to mention the water that’s found frozen on both worlds. In Earth’s orbit, that’s not quite as straightforward. (Architects can’t exactly turn the Van Allen radiation belts into building material.)

That’s where Ekblaw, Nisser, and their colleagues hope their resin-squirting approach might excel. It won’t create intricate components or complex circuitry in space, but every little part is one less that astronauts have to take up themselves.

“Ultimately, the purpose of this is to make this manufacturing process available and accessible to other researchers,” says Nisser.

The post ISS astronauts are building objects that couldn’t exist on Earth appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Time doesn’t have to be exact—here’s why https://www.popsci.com/science/leap-second-day-length/ Sat, 31 Dec 2022 16:00:00 +0000 https://www.popsci.com/?p=501341
Gold clock with blue arms for minutes and seconds
Starting in 2035, we'll be shaving a second off our New Year's countdowns. Hector Achautla

The recent decision to axe the leap second shouldn't affect your countdowns or timekeeping too much.

The post Time doesn’t have to be exact—here’s why appeared first on Popular Science.

]]>
Gold clock with blue arms for minutes and seconds
Starting in 2035, we'll be shaving a second off our New Year's countdowns. Hector Achautla

It’s official: The leap second’s time is numbered. 

By 2035, computers around the world will have one less cause for glitching based on human time. Schoolchildren will have one less confusing calculation to learn when memorizing the calendar.

Our days are continually changing: Tiny differences in the Earth’s rotation build up over months or years. To compensate, every so often, authorities of world time insert an extra second to bring the day back in line. Since 1972, when the system was introduced, we’ve experienced 27 such leap seconds.

But the leap second has always represented a deeper discrepancy. Our idea of a day is based on how fast the Earth spins; yet we define the second—the actual base unit of time as far as scientists, computers, and the like are concerned—with the help of atoms. It’s a definitional gap that puts astronomy and atomic physics at odds with each other.

[Related: Refining the clock’s second takes time—and lasers]

Last month, the guardians of global standard time chose atomic physics over astronomy—and according to experts, that’s fine.

“We will never abandon the idea that timekeeping is regulated by the Earth’s rotation. [But] the fact is we don’t want it to be strictly regulated by the Earth’s rotation,” says Patrizia Tavella, a timekeeper at the International Bureau of Weights and Measures (BIPM) in Paris, a multigovernmental agency that, amongst other things, binds together nations’ official clocks.

The day is a rather odd unit of time. We usually think about it as the duration the Earth takes to complete one rotation about its axis: a number from astronomy. The problem is that the world’s most basic unit of time is not the day, but the second, which is measured by something far more minuscule: the cesium-133 atom, an isotope of the 55th element. 

As cesium-133 atoms flip between two closely spaced energy states, they absorb and release radiation with very predictable timing. Since 1967, atomic clocks have counted precisely 9,192,631,770 of these cycles in every second. So, as far as metrologists (people who study measurement itself) are concerned, a single day is 86,400 of those seconds.

Except a day isn’t always exactly 86,400 seconds, because the world’s revolutions aren’t constant.

Subtle motions, such as the moon’s tidal pull or the planet’s mass distribution shifting as its melty innards churn about, affect Earth’s spin. Some scientists even believe that a warming climate could shuffle heated air and melted water closer to the poles, which might speed up the rotation. Whatever the cause, it leads to millisecond differences in day length over the year that are unacceptable for today’s ultra-punctual timekeepers. Which is why they try to adjust for it.

The International Earth Rotation and Space Systems Service (IERS), a scientific nonprofit responsible for setting global time standards, publishes regular counts of just how large the difference is for the benefit of the world’s timekeepers. For most of December, Earth’s rotation has been between 15 and 20 milliseconds off the atomic-clock day.
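
The bookkeeping behind those adjustments can be sketched from a number mentioned earlier: 27 leap seconds since 1972 imply an average drift rate. The 0.9-second tolerance used below is the historical trigger for inserting a leap second, which is background knowledge rather than a figure from this article:

```python
# Estimate the average drift of Earth time versus atomic time from the
# leap-second record: 27 inserted seconds between 1972 and 2022. The
# 0.9-second tolerance is the historical trigger for a leap second
# (background knowledge, not a figure from this article).

leap_seconds = 27
years = 2022 - 1972

drift_per_day_ms = leap_seconds / (years * 365.25) * 1000
days_to_trigger = 0.9 / (drift_per_day_ms / 1000)

print(f"Average drift: ~{drift_per_day_ms:.1f} ms per day")
print(f"Typical gap between leap seconds: ~{days_to_trigger / 365.25:.1f} years")
```

At roughly a millisecond and a half per day, the tolerance fills up in well under two years on average, which matches how frequently leap seconds have historically been added.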

Whenever that gap has gotten too large, IERS invokes the commandment of the leap second. Every January and July, the organization publishes a judgement on whether a leap second is in order. If one is necessary, the world’s timekeepers tack a 61st second onto the last minute of June 30 or December 31, depending on whichever comes next. But this November, the BIPM ruled that by 2035, the masters of the world’s clocks will shelve the leap second in favor of a still-undecided approach.

That means the Royal Observatory in Greenwich, London—the baseline for Greenwich Mean Time (GMT) and its modern successor, Coordinated Universal Time (UTC)—will drift out of sync with the days it once defined. Amateur astronomers might complain, too, as without the leap second, star sightings could become less predictable in the night sky.

But for most people, the leap second is an insignificant curiosity—especially compared to the maze of time zones that long-distance travelers face, or the clock shifts that people must observe twice a year if they live in countries with daylight saving or summer time.

On the other hand, adding a subtle second to shove the day into perfect alignment comes at a cost: technical glitches and nightmares for programmers who must already deal with different countries’ hodgepodge of timekeeping. “The absence of leap seconds will make things a little easier by removing the need for the occasional adjustment, but the difference will not be noticed by everyday users,” says Judah Levine, a timekeeper at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, the US government agency that sets the country’s official clocks.

[Related: It’s never too late to learn to be on time]

The new plan stipulates that in 2026, BIPM and related groups will meet again to determine how much they can let the discrepancy grow before the guardians of time need to take action. “We will have to propose the new tolerance, which could be one minute, one hour, or infinite,” says Tavella. They’ll also propose how often they (or their successors) will revise the number.

It’s not a decision that needs to be made right away. “It’s probably not necessary” to reconcile atomic time with astronomical time, says Elizabeth Donley, a timekeeper at NIST. “User groups that need to know time for astronomy and navigation can already look up the difference.”

We can’t currently predict the vagaries of Earth’s rotation, but scientists think it will take about a century for the difference to build up to a minute. “Hardly anyone will notice,” says Donley. It will take about five millennia for it to build up to an hour. 

In other words, we could just kick the conundrum of counting time down the road for our grandchildren or great-grandchildren to solve. “Maybe in the future, there will be better knowledge of the Earth’s movement,” says Tavella, “And maybe, another better solution will be proposed.”

The post Time doesn’t have to be exact—here’s why appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
What the Energy Department’s laser breakthrough means for nuclear fusion https://www.popsci.com/science/nuclear-fusion-laser-net-gain/ Tue, 13 Dec 2022 18:33:00 +0000 https://www.popsci.com/?p=498247
Target fusion chamber of the National Ignition Facility
The National Ignition Facility's target chamber at Lawrence Livermore National Laboratory, where fusion experiments take place, with technicians inside. LLNL/Flickr

Nearly 200 lasers fired at a tiny bit of fuel to create a gain in energy, mimicking the power of the stars.

The post What the Energy Department’s laser breakthrough means for nuclear fusion appeared first on Popular Science.

]]>
Target fusion chamber of the National Ignition Facility
The National Ignition Facility's target chamber at Lawrence Livermore National Laboratory, where fusion experiments take place, with technicians inside. LLNL/Flickr

Since the 1950s, scientists have quested to bring nuclear fusion—the sort of reaction that powers the sun—down to Earth.

Just after 1 a.m. on December 5, scientists at the National Ignition Facility (NIF) in Lawrence Livermore National Laboratory (LLNL) in California finally reached a major milestone in the history of nuclear fusion: achieving a reaction that creates more energy than scientists put in.

This moment won’t bring a fusion power plant to your city just yet—but it is an important step to that goal, one which scientists have sought from the start of their quest.

“This lays the groundwork,” says Tammy Ma, a scientist at LLNL, in a US Department of Energy press conference today. “It demonstrates the basic scientific feasibility.”

On the outside, NIF is a nondescript industrial building in a semi-arid valley east of San Francisco. On the inside, scientists have quite literally been tinkering with the energy of the stars (alternating with NIF’s other major task, nuclear weapons research).

[Related: Physicists want to create energy like stars do. These two ways are their best shot.]

Nuclear fusion is how the sun generates the heat and light that warm and illuminate the Earth to sustain life. It involves crushing hydrogen atoms together. The resulting reaction creates helium and energy—quite a bit of energy. You’re alive today because of it, and the sun doesn’t produce a wisp of greenhouse gas in the process.

But to turn fusion into anything resembling an Earthling’s energy source, you need conditions that match the heart of the sun: millions of degrees in temperature. Creating a facsimile of that environment on Earth takes an immense amount of power—far eclipsing the amount researchers usually end up producing.

Lasers aimed at a tiny target

For decades, scientists have struggled to answer one fundamental question: How do you fine-tune a fusion experiment to create the right conditions to actually gain energy?

NIF’s answer involves an arsenal of high-powered laser beams. First, experts stuff a peanut-sized, gold-plated, open-ended cylinder (known as a hohlraum) with a peppercorn-sized pellet containing deuterium and tritium, forms of hydrogen atoms that come with extra neutrons. 

Then, they fire a laser—which splits into 192 finely tuned beams that, in turn, enter the hohlraum from both ends and strike its inside wall. 

“We don’t just smack the target with all of the laser energy all at once,” says Annie Kritcher, a scientist at NIF, at the press conference. “We divide very specific powers at very specific times to achieve the desired conditions.”

As the chamber heats up to millions of degrees under the laser barrage, it starts producing a cascade of X-rays that violently wash over the fuel pellet. They blast away the pellet’s carbon outer shell and begin to compress the hydrogen inside—heating it to hundreds of millions of degrees—squeezing and crushing the atoms to pressures and densities higher than those at the center of the sun.

If all goes well, that kick-starts fusion.

Nuclear fusion energy experiment fuel source in a tiny metal capsule
This metal case, called a hohlraum, holds a tiny bit of fusion fuel. Eduard Dewald/LLNL

A new world record

When NIF launched in 2009, the fusion world record belonged to the Joint European Torus (JET) in the United Kingdom. In 1997, using a magnet-based method known as a tokamak, scientists at JET produced 67 percent of the energy they put in. 

That record stood for over two decades until late 2021, when NIF scientists bested it, reaching 70 percent. In its wake, many laser-watchers whispered the obvious question: Could NIF reach 100 percent? 

[Related: In 5 seconds, this fusion reactor made enough energy to power a home for a day]

But fusion is a notoriously delicate science, and the results of a given fusion experiment are difficult to predict. Any object that’s this hot will want to cool off against scientists’ wishes. Tiny, accidental differences in the setup—from the angles of the laser beams to slight flaws in the pellet shape—can make immense differences in a reaction’s outcome.

It’s for that reason that each NIF test, which takes about a billionth of a second, involves months of meticulous planning.

“All that work led up to a moment just after 1 a.m. last Monday, when we took a shot … and as the data started to come in, we saw the first indications that we’d produced more fusion energy than the laser input,” says Alex Zylstra, a scientist at NIF, at the press conference.

This time, the NIF’s laser pumped 2.05 megajoules into the pellet—and the pellet burst out 3.15 megajoules (enough to power the average American home for about 43 minutes). Not only have NIF scientists achieved that 100-percent ignition milestone, they’ve gone farther, reaching more than 150 percent.
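
That gain figure is just the ratio of those two numbers. Here’s the arithmetic, with an assumed average-home draw of about 1,200 watts (an illustrative figure of ours, not one from NIF):

```python
# Gain arithmetic for the December 5 shot.
laser_in_mj = 2.05    # megajoules delivered by the 192 beams
fusion_out_mj = 3.15  # megajoules released by the fuel pellet

gain = fusion_out_mj / laser_in_mj
print(f"Target gain: {gain:.2f} ({gain:.0%} of the laser energy)")  # 1.54, i.e. 154%

avg_home_watts = 1_200  # assumed average US household power draw
minutes = fusion_out_mj * 1e6 / avg_home_watts / 60
print(f"Roughly {minutes:.0f} minutes of average-home power")
# ~44 minutes with our assumed draw; the article says about 43
```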

“To be honest…we’re not surprised,” says Mike Donaldson, a systems engineer at General Fusion, a Vancouver, Canada-based private firm that aims to build a commercially viable fusion plant by the 2030s, who was not involved with the NIF experiment. “I’d say this is right on track. It’s really a culmination of lots of years of incremental progress, and I think it’s fantastic.”

But there’s a catch

These numbers only account for the energy delivered by the laser—omitting the fact that this laser, one of the largest and most intricate on the planet, needed about 300 megajoules from California’s electric grid to power on in the first place.

“The laser wasn’t designed to be efficient,” says Mark Herrmann, a scientist at LLNL, at the press conference. “The laser was designed to give as much juice as possible.” Balancing that energy-hungry laser may seem daunting, but researchers are optimistic. The laser was built on late-20th-century technology, and NIF leaders say they do see a pathway to making it more efficient and even more powerful. 
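
For the bigger picture, compare the fusion output to the grid draw instead of the laser’s delivered energy. A sketch of that wall-plug bookkeeping, using the rough 300-megajoule figure above:

```python
# Facility-wide ("wall-plug") gain: fusion output versus grid draw.
grid_in_mj = 300.0    # rough electricity needed to fire the laser once
fusion_out_mj = 3.15

wall_plug_gain = fusion_out_mj / grid_in_mj
print(f"Wall-plug gain: {wall_plug_gain:.4f} (~{wall_plug_gain:.1%})")
# -> Wall-plug gain: 0.0105 (~1.1%): only about a hundredth of the
#    electricity used comes back out as fusion energy
```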

Even if they do that, experts need to figure out how to fire repeated shots that gain energy. That’s another massive challenge, but it’s a key step toward making this a viable base for a power plant.

[Related: Inside France’s super-cooled, laser-powered nuclear test lab]

“Scientific results like today’s are fantastic,” says Donaldson. “We also need to focus on all the other challenges that are required to make fusion commercializable.”

A fusion power plant may very well involve a different technique. Many experimental reactors, like JET and the under-construction ITER in southern France, try to recreate the sun not with lasers but with powerful magnets that shape and sculpt super-hot plasma within a specially designed chamber. Most of the private-sector fusion efforts that have mushroomed of late are keying their efforts toward magnetic methods, too.

In any event, it will be a long time before you read an article like this on a device powered by cheap fusion energy—but that day has likely come an important milestone closer.

“It’s been 60 years since ignition was first dreamed of using lasers,” Ma said at the press conference. “It’s really a testament to the perseverance and dedication of the folks that made this happen. It also means we have the perseverance to get to fusion energy on the grid.”

The post What the Energy Department’s laser breakthrough means for nuclear fusion appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
What would happen if the Earth started to spin faster? https://www.popsci.com/earth-spin-faster/ Tue, 01 Jun 2021 22:59:42 +0000 https://www.popsci.com/uncategorized/earth-spin-faster/
Earth against moon and spun to show fast the planet spins. Realistic illustration.
Since the formation of the moon, the rate that Earth spins has been slowing down by about 3.8 mph every 10 million years. Deposit Photos

Even a mile-per-hour speed boost would make things pretty weird.

The post What would happen if the Earth started to spin faster? appeared first on Popular Science.

]]>
Earth against moon and spun to show fast the planet spins. Realistic illustration.
Since the formation of the moon, the rate that Earth spins has been slowing down by about 3.8 mph every 10 million years. Deposit Photos

There are enough things in this life to worry about. Like nuclear war, climate change, and whether or not you’re brushing your teeth correctly. The Earth spinning too fast should not be high up on your list, simply because it’s not very likely to happen anytime soon—and if it does, you’ll probably be too dead to worry about it. Nevertheless, we talked to some experts to see how it would all go down.

Let’s start with the basics, like: How fast does the Earth spin now? That depends on where you are, because the planet moves fastest around its waistline. As Earth twirls around its axis, its circumference is widest at the equator. So a spot on the equator has to travel a lot farther in 24 hours to loop around to its starting position than, say, Chicago, which sits on a narrower cross-section of Earth. To make up for the extra distance, the equator spins at 1,037 mph, whereas Chicago takes a more leisurely 750 mph pace. (This calculator will tell you the exact speed based on your latitude.)
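
If you’d rather compute it yourself, here’s a minimal sketch of that latitude calculation, assuming a perfectly spherical Earth (circumference about 24,901 miles) and a flat 24-hour day—the same rough convention the article’s figures use:

```python
import math

# Ground speed from Earth's rotation, on a simplified spherical Earth.
EQUATOR_CIRCUMFERENCE_MILES = 24_901

def spin_speed_mph(latitude_deg: float) -> float:
    """Eastward speed at a given latitude: the circle shrinks with cos(latitude)."""
    distance = EQUATOR_CIRCUMFERENCE_MILES * math.cos(math.radians(latitude_deg))
    return distance / 24  # miles traveled per hour of rotation

print(f"Equator: {spin_speed_mph(0.0):,.0f} mph")   # ~1,038 (the article rounds to 1,037)
print(f"Chicago: {spin_speed_mph(41.9):,.0f} mph")  # ~772 on a perfect sphere
```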

The Earth does change pace every now and then, but only incrementally. This summer, for instance, it skimmed 1.59 milliseconds off its typical rotation time, making June 29 the shortest day on record. One hypothesis is that changes in pressure actually shift the planet’s axis of rotation, though not to the extent where regular human beings could feel the difference.

Little bumps aside, if the Earth were to suddenly spin much faster, there would be some drastic changes in store. Speeding up its rotation by one mile per hour, let’s say, would cause water to migrate from the poles and raise levels around the equator by a few inches. “It might take a few years to notice it,” says Witold Fraczek, an analyst at ESRI, a company that makes geographic information system (GIS) software.

[Related: If the Earth is spinning, why can’t I feel it?]

What might be much more noticeable is that some of our satellites would be off-track. Satellites set to geosynchronous orbit fly around our planet at a speed that matches the Earth’s rotation, so that they can stay positioned over the same spot all the time. If the planet sped up by 1 mph, the satellites would no longer be in their proper positions, meaning satellite communications, television broadcasting, and military and intelligence operations could be interrupted, at least temporarily. Some satellites carry fuel and may be able to adjust their positions and speeds accordingly, but others might have to be replaced, and that’s expensive. “These could disturb the life and comfort of some people,” says Fraczek, “but should not be catastrophic to anybody.”

But things would get more catastrophic the faster we spin.

You would lose weight, but not mass

Centrifugal force from the Earth’s spin is constantly trying to fling you off the planet, sort of like a kid on the edge of a fast merry-go-round. For now, gravity is stronger and it keeps you grounded. But if Earth were to spin faster, the centrifugal force would get a boost, says NASA astronomer Sten Odenwald.

Currently, if you weigh about 150 pounds in the Arctic Circle, you might weigh 149 pounds at the equator. That’s because the extra centrifugal force generated by the equator’s faster spin counteracts gravity. Press fast-forward on that, and your weight would drop even further.

Odenwald calculates that eventually, if the equator revved up to 17,641 mph, the centrifugal force would be great enough that you would be essentially weightless. (That is, if you’re still alive. More on that later.)
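
That break-even speed follows from setting centrifugal acceleration equal to gravity, v²/r = g. A quick sketch, on the usual spherical-Earth assumption:

```python
import math

# Equatorial speed at which centrifugal acceleration cancels gravity: v**2 / r = g.
g = 9.81         # m/s^2, surface gravity
r = 6.378e6      # m, Earth's equatorial radius
MPH_PER_MS = 2.23694

v = math.sqrt(g * r)  # ~7,910 m/s
print(f"Weightless at ~{v * MPH_PER_MS:,.0f} mph")
# -> about 17,700 mph, close to Odenwald's more precise 17,641 figure
```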

Everyone would be constantly jet-lagged

The faster the Earth spins, the shorter our days would become. With a 1 mph speed increase, the day would only get about a minute and a half shorter and our internal body clocks, which stick to a pretty strict 24-hour schedule, probably wouldn’t notice.

But if we were rotating 100 mph faster than usual, a day would be about 22 hours long. For our bodies, that would be like daylight savings time on boosters. Instead of setting the clocks back by an hour, you’d be setting them back by two hours every single day, without a chance for your body to adjust. And the changing day length would probably mess up plants and animals too.
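
Those day-length figures come from a one-line ratio—the same ground covered at a higher speed. A sketch using the article’s 1,037 mph equatorial speed:

```python
# Day length after a uniform equatorial speed boost.
BASE_SPEED_MPH = 1_037   # current speed at the equator
BASE_DAY_HOURS = 24

def day_hours(boost_mph: float) -> float:
    """New day length: same circumference, covered faster."""
    return BASE_DAY_HOURS * BASE_SPEED_MPH / (BASE_SPEED_MPH + boost_mph)

for boost in (1, 100):
    h = day_hours(boost)
    print(f"+{boost} mph: {h:.2f}-hour day, {(BASE_DAY_HOURS - h) * 60:.1f} minutes shorter")
# +1 mph: 23.98-hour day, 1.4 minutes shorter
# +100 mph: 21.89-hour day, 126.6 minutes shorter
```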

But all this is only if Earth speeds up all of a sudden. “If it gradually speeds up over millions of years, we would adapt to deal with that,” says Odenwald.

Hurricanes would get stronger

If Earth’s rotation picked up slowly, it would carry the atmosphere with it—and we wouldn’t necessarily notice a big difference in the day-to-day winds and weather patterns. “Temperature difference is still going to be the main driver of winds,” says Odenwald. However, extreme weather could become more destructive. “Hurricanes will spin faster,” he says, “and there will be more energy in them.”

The reason why goes back to that weird phenomenon we mentioned earlier: the Earth spins faster around the equator.

If the Earth weren’t spinning at all, winds from the north pole would blow in a straight line to the equator, and vice versa. But because we are spinning, the pathway of the winds gets deflected sideways—to the right in the Northern Hemisphere and to the left in the Southern. This curvature of the winds is called the Coriolis effect, and it’s what gives a hurricane its spin. And if the Earth spun faster, the winds would be deflected even further. “That effectively makes the rotation more severe,” says Odenwald.

Water would cover the world

Extra speed at the equator means the water in the oceans would start to amass there. At 1 mph faster, the water around the equator would get a few inches deeper within just a few days.

At 100 mph faster, the equator would start to drown. “I think the Amazon Basin, Northern Australia, and not to mention the islands in the equatorial region, they would all go under water,” says Fraczek. “How deep underwater, I’m not sure, but I’d estimate about 30 to 65 feet.”

If we doubled the speed at the equator, so that Earth spun 1,000 mph faster, “it would clearly be a disaster,” says Fraczek. The centrifugal force would pull hundreds of feet of water toward the Earth’s waistline. “Except for the highest mountains, such as Kilimanjaro or the highest summits of the Andes, I think everything in the equatorial region would be covered with water.” That extra water would be pulled out of the polar regions, where centrifugal force is lower, so the Arctic Ocean would be a lot shallower.

At 100 mph faster, the equator would start to drown.

Meanwhile, the added centrifugal force from spinning 1,000 mph faster means water at the equator would have an easier time combating gravity. The air would be heavy with moisture in these regions, Fraczek predicts. Shrouded in a dense fog and heavy clouds, these regions might experience constant rain—as if they’d need any more water.

Finally, at about 17,000 miles per hour, the centrifugal force at the equator would match the force of gravity. After that, “we might experience reverse rain,” Fraczek speculates. “Droplets of water could start moving up in the atmosphere.” At that point, the Earth would be spinning more than 17 times faster than it is now, and there probably wouldn’t be many humans left in the equatorial region to marvel at the phenomenon.

“If those few miserable humans would still be alive after most of Earth’s water had been transferred to the atmosphere and beyond, they would clearly want to run out of the equator area as soon as possible,” says Fraczek, “meaning that they should already be at the Polar regions, or at least middle latitudes.”

Seismic activity would rock the planet

At very fast speeds—like, about 24,000 mph—and over thousands of years, eventually the Earth’s crust would shift too, flattening out at the poles and bulging around the equator.

“We would have enormous earthquakes,” says Fraczek. “The tectonic plates would move quickly and that would be disastrous to life on the globe.”

How fast would the Earth spin in the future?

Believe it or not, Earth’s speed is constantly fluctuating, says Odenwald. Earthquakes, tsunamis, large air masses, and melting ice sheets can all change the spin rate at the millisecond level. If an earthquake swallows a bit of the ground, reducing the planet’s circumference ever so slightly, it effectively speeds up how quickly Earth completes its rotation. A large air mass can have the opposite effect, slowing our spins a smidgen like an ice skater who leaves her arms out instead of drawing them in.

The Earth’s rotation speed changes over time, too. About 4.4 billion years ago, the moon formed after something huge crashed into Earth. At that time, Odenwald calculates our planet was probably shaped like a flattened football and spinning so rapidly that each day might have been only about four hours long.

“This event dramatically distorted Earth’s shape and almost fragmented Earth completely,” says Odenwald. “Will this ever happen again? We had better hope not!”

[Related: 10 easy ways you can tell for yourself that the Earth is not flat]

Since the formation of the moon, Earth’s spin has been slowing down by about 3.8 mph every 10 million years, mostly due to the moon’s gravitational pull on our planet. So it’s a lot more likely that Earth’s spin will continue to slow down in the future, not speed up.

“There’s no conceivable way that the Earth could spin up so dramatically,” says Odenwald. “To spin faster it would have to be hit just right by the right object, and that would liquify the crust so we’d be dead anyway.”

This post has been updated. It was originally published in 2017.

The post What would happen if the Earth started to spin faster? appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The small, mighty, world-changing transistor turns 75 https://www.popsci.com/science/transistors-changed-the-world/ Sun, 04 Dec 2022 18:00:00 +0000 https://www.popsci.com/?p=493705
Japanese woman holding gold Sharp calculator with transistor
Sharp employee Ema Tanaka displays a gold-colored electronic calculator “EL-BN691,” which is the Japanese electronics company’s commemoration model, next to the world’s first all-transistor/diode desktop calculator CS-10A, introduced in 1964. YOSHIKAZU TSUNO/AFP via Getty Images

Without this universal technology, our computers would probably be bulky vacuum-tube machines.

The post The small, mighty, world-changing transistor turns 75 appeared first on Popular Science.

]]>
Japanese woman holding gold Sharp calculator with transistor
Sharp employee Ema Tanaka displays a gold-colored electronic calculator “EL-BN691,” which is the Japanese electronics company’s commemoration model, next to the world’s first all-transistor/diode desktop calculator CS-10A, introduced in 1964. YOSHIKAZU TSUNO/AFP via Getty Images

It’s not an exaggeration to say that the modern world began 75 years ago in a nondescript office park in New Jersey.

This was the heyday of Bell Labs. Established as the research arm of a telephone company, it had become a playground for scientists and engineers by the 1940s. This office complex was the forge of innovation after innovation: radio telescopes, lasers, solar cells, and multiple programming languages. But none were as consequential as the transistor.

Some historians of technology have argued that the transistor, first crafted at Bell Labs in late 1947, is the most important invention in human history. Whether that’s true or not, what is without question is that the transistor helped trigger a revolution that digitized the world. Without the transistor, electronics as we know them could not exist. Almost everyone on Earth would be experiencing a vastly different day-to-day.

“Transistors have had a considerable impact in countries at all income levels,” says Manoj Saxena, senior member of the New Jersey-based Institute of Electrical and Electronics Engineers. “It is hard to overestimate the impact they have had on the lives of nearly every person on the planet,” Tod Sizer, a vice president at modern Nokia Bell Labs, writes in an email.

What is a transistor, anyway?

A transistor is, to put it simply, a device that can switch an electric current on or off. Think of it as an electric gate that can open and shut thousands upon thousands of times every second. Additionally, a transistor can boost current passing through it. Those abilities are fundamental for building all sorts of electronics, computers included.

Within the first decade of the transistor era, these powers were recognized when the three Bell Labs scientists who built that first transistor—William Shockley, John Bardeen, and Walter Brattain—won the 1956 Nobel Prize in Physics. (In later decades, much of the scientific community would condemn Shockley for his support of eugenics and racist ideas about IQ.)

Transistors are typically made from certain elements called semiconductors, which are useful for manipulating current. The first transistor, the size of a human palm, was fashioned from a metalloid, germanium. By the mid-1960s, most transistors were being made from silicon—the element just above germanium in the periodic table—and engineers were packing transistors together into complex integrated circuits: the foundation of computer chips.

[Related: Here’s the simple law behind your shrinking gadgets]

For decades, the development of transistors has stuck to a rule of thumb known as Moore’s law: The number of transistors you can pack into a state-of-the-art circuit doubles roughly every two years. Moore’s law, a buzzword in the computer chip world, has long been a cliché among engineers, though it still abides today.

Modern transistors are just a few nanometers in size. The typical processor in the device you’re using to read this probably packs billions of transistors onto a chip smaller than a human fingernail. 
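
To get a feel for that compounding, here’s a quick sketch that projects forward from the Intel 4004, a 1971 chip with roughly 2,300 transistors (an illustrative baseline of ours, not one the article cites):

```python
# Moore's law as compound doubling: counts double roughly every two years.
BASE_YEAR, BASE_COUNT = 1971, 2_300  # Intel 4004's approximate transistor count

def projected_transistors(year: int) -> float:
    """Idealized Moore's-law projection from the 1971 baseline."""
    return BASE_COUNT * 2 ** ((year - BASE_YEAR) / 2)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f}")
# 1971: ~2,300
# 1991: ~2,355,200
# 2011: ~2,411,724,800
# 2021: ~77,175,193,600  (real flagship chips hold tens of billions)
```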

What would a world without the transistor be like?

To answer that question, we have to look at what the transistor replaced—it wasn’t the only device that could amplify current. 

Before its dominance, electronics relied on vacuum tubes: bulbs, typically made of glass, that held charged plates inside an airless interior. Vacuum tubes have a few advantages over transistors. They can generate more power. And decades after the technology became obsolete, some audiophiles still swore that vacuum tube music players sounded better than their transistor counterparts. 

But vacuum tubes are very bulky and delicate (they tend to burn out quickly, just like incandescent light bulbs). Moreover, they often need time to “warm up,” making vacuum tube gadgets a bit like creaky old radiators. 

The transistor seemed to be a convenient replacement. “The inventors of the transistors themselves believed that the transistor might be used in some special instruments and possibly in military radio equipment,” says Ravi Todi, current president of the IEEE Electron Devices Society.

The earliest transistor gadget to hit the market was a hearing aid released in 1953. Soon after came the transistor radio, which became emblematic of the 1960s. Portable vacuum tube radios did exist, but without the transistor, handheld radios likely wouldn’t have become the ubiquitous device that kick-started the ability to listen to music out and about.

Martin Luther King Jr listens to a transistor radio.
Civil rights activist Martin Luther King Jr listens to a transistor radio during the third march from Selma to Montgomery, Alabama, in 1965. William Lovelace/Daily Express/Hulton Archive/Getty Images

But even in the early years of the transistor era, these devices started to skyrocket in number—and in some cases, literally. The Apollo program’s onboard computer, which helped astronauts orient their ship through maneuvers in space, was built with transistors. Without it, engineers would either have had to fit a bulky vacuum tube device onto a cramped spacecraft, or astronauts would have had to rely on tedious commands from the ground.

Transistors had already begun revolutionizing computers themselves. A computer built just before the start of the transistor era—ENIAC, designed to conduct research for the US military—used 18,000 vacuum tubes and filled up a space the size of a ballroom.

Vacuum tube computers squeezed into smaller rooms over time. Even then, 1951’s UNIVAC I cost over a million dollars (not accounting for inflation), and its customers were large businesses or data-heavy government agencies like the Census Bureau. It wouldn’t be until the 1970s and 1980s when personal computers, powered by transistors, started to enter middle-class homes.

Without transistors, we might live in a world where a computer is something you’d use at work—not at home. Forget smartphones, handheld navigation, flatscreen displays, electronic timing screens in train stations, or even humble digital watches. All of those need transistors to work.

“The transistor is fundamental for all modern technology, including telecommunications, data communications, aviation, and audio and video recording equipment,” says Todi.

What do the next 75 years of transistor technologies look like?

It’s hard to deny that the world of 2022 looks vastly different from the world of 1947, largely thanks to transistors. So what should we expect from transistors 75 years in the future, in the world of 2097?

It’s hard to say with any amount of certainty. Almost all transistors today are made with silicon—that’s how Silicon Valley got its name. But how long will that last? 

[Related: The trick to a more powerful computer chip? Going vertical.]

Silicon transistors are now small enough that engineers aren’t sure how much smaller they can get, indicating Moore’s law may have a finite end. And energy-conscious researchers want to make computer chips that use less power, partly in hopes of reducing the carbon footprint of data centers and other large facilities.

A growing number of researchers are thinking up alternatives to silicon. They’re thinking of computer chips that harness weird quantum effects and tiny bits of magnets. They’re looking at alternative materials, from germanium to exotic forms of carbon. Which of these, if any, may one day replace the silicon transistor? That isn’t certain yet.

“No one technology can meet all needs,” says Saxena. And it’s very possible that the defining technology of the 2090s hasn’t been invented yet.

The post The small, mighty, world-changing transistor turns 75 appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Scientists modeled a tiny wormhole on a quantum computer https://www.popsci.com/technology/wormhole-quantum-computer/ Thu, 01 Dec 2022 22:30:00 +0000 https://www.popsci.com/?p=493923
traversable wormhole illustration
inqnet/A. Mueller (Caltech)

It’s not a real rip in spacetime, but it’s still cool.

The post Scientists modeled a tiny wormhole on a quantum computer appeared first on Popular Science.

]]>
traversable wormhole illustration
inqnet/A. Mueller (Caltech)

Physicists, mathematicians, astronomers, and even filmmakers have long been fascinated by the concept of a wormhole: a hypothetical and oftentimes volatile phenomenon that would create tunnels—and shortcuts between two distant locations—across spacetime. One theory holds that if you link up two black holes in the right way, you can create a wormhole. 

Studying wormholes is like piecing together an incomplete puzzle without knowing what the final picture is supposed to look like. You can roughly deduce what’s supposed to go in the gaps based on the completed images around it, but you can’t know for sure. That’s because there has not yet been definitive proof that wormholes are in fact out there. However, some of the solutions to fundamental equations and theories in physics suggest that such an entity exists. 

In order to understand the properties of this cosmic phantom based on what has been deduced so far, researchers from Caltech, Harvard, MIT, Fermilab, and Google created a small “wormhole” effect between two quantum systems sitting on the same processor. What’s more, the team was able to send a signal through it. 

According to Quanta, this edges the Caltech-Google team ahead of an IBM-Quantinuum team that also sought to establish wormhole teleportation. 

While what they created is unfortunately not a real crack through the fabric of spacetime, the system does mimic the known dynamics of wormholes. In terms of the properties that physicists usually consider, like positive or negative energy, gravity, and particle behavior, the computer simulation effectively looks and works like a tiny wormhole. This model, the team said in a press conference, is a way to study the fundamental problems of the universe in a laboratory setting. A paper describing this system was published this week in the journal Nature.

“We found a quantum system that exhibits key properties of a gravitational wormhole yet is sufficiently small to implement on today’s quantum hardware,” Maria Spiropulu, a professor of physics at Caltech, said in a press release. “This work constitutes a step toward a larger program of testing quantum gravity physics using a quantum computer.” 

[Related: Chicago now has a 124-mile quantum network. This is what it’s for.]

Quantum gravity is a set of theories that posits how the rules governing gravity (which describes how matter and energy behave) and quantum mechanics (which describes how atoms and particles behave) fit together. Researchers don’t have the exact equation to describe quantum gravity in our universe yet. 

Although scientists have been mulling over the relationship between gravity and wormholes for around 100 years, it wasn’t until 2013 that entanglement (a quantum physics phenomenon) was thought to factor into the link. And in 2017, another group of scientists suggested that traversable wormholes worked kind of like quantum teleportation (in which information is transported across space using principles of entanglement). 

In the latest experiment, run on just 9 qubits (the quantum equivalent of binary bits in classical computing) in Google’s Sycamore quantum processor, the team used machine learning to set up a simplified version of the wormhole system “that could be encoded in the current quantum architectures and that would preserve the gravitational properties,” Spiropulu explained. During the experiment, they showed that information (in the form of qubits), could be sent through one system and reappear on the other system in the right order—a behavior that is wormhole-like. 

[Related: In photos: Journey to the center of a quantum computer]

So how do researchers go about setting up a little universe in a box with its own special rules and geometry in place? According to Google, a special type of correspondence (technically known as AdS/CFT) between different physical theories allowed the scientists to construct a hologram-like universe where they can “connect objects in space with specific ensembles of interacting qubits on the surface,” researchers wrote in a blog post. “This allows quantum processors to work directly with qubits while providing insights into spacetime physics. By carefully defining the parameters of the quantum computer to emulate a given model, we can look at black holes, or even go further and look at two black holes connected to each other — a configuration known as a wormhole.”

The researchers used machine learning to find the perfect quantum system that would preserve some key gravitational properties and maintain the energy dynamics that they wanted the model to portray. Plus, they had to simulate particles called fermions.

The team noted in the press conference that there is strong evidence that our universe operates by similar rules as the hologram universe observed on the quantum chip. The researchers wrote in the Google blog item: “Gravity is only one example of the unique ability of quantum computers to probe complex physical theories: quantum processors can provide insight into time crystals, quantum chaos, and chemistry.” 

The post Scientists modeled a tiny wormhole on a quantum computer appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Gravity could be bringing you down with IBS https://www.popsci.com/health/ibs-causes-gravity/ Thu, 01 Dec 2022 19:23:53 +0000 https://www.popsci.com/?p=493752
Toilet paper rolls on a yellow background to represent IBS causes
There are two prevailing ideas behind what causes IBS, and now, there might be a third. Deposit Photos

It's your GI tract versus the forces of nature.

The post Gravity could be bringing you down with IBS appeared first on Popular Science.

]]>
Toilet paper rolls on a yellow background to represent IBS causes
There are two prevailing ideas behind what causes IBS, and now, there might be a third. Deposit Photos

There is no established cause of irritable bowel syndrome (IBS), but for one gastroenterologist, the answer has been right under our noses. A review published today in The American Journal of Gastroenterology posits that the all-too-common ailment is caused by the forces of gravity. The unconventional hypothesis suggests the human body has evolved to live with this universal force, and when our ability to manage gravity falters, it can have dire ramifications for our health.

“Our relationship to gravity is a little bit like a fish’s relationship to water,” says Brennan Spiegel, a gastroenterologist at Cedars-Sinai Medical Center in Los Angeles and author of the new paper. “The fish evolved to have a body that survives and thrives in water, even if it may not know it’s in water to begin with.” Similarly, Spiegel explains that while we aren’t always conscious of gravity, it’s a constant influence on our lives. For example, our early human ancestors evolved to become bipedal organisms, spending two-thirds of their lives in an upright position. But standing erect would cause gravity to constantly pull our body system down toward the ground, meaning organs and other bodily systems must have a plan in place to manage and resist gravitational forces. (For example, mammalian brains have evolved ways to sense altered gravity conditions.)

[Related: What to do when you’re trying not to poop]

Spiegel says he thought about the gravity hypothesis in relation to IBS when visiting a sick family member at an assisted living center. She was lying in bed for most of the day, and he noticed an increase in GI problems during her stay—constipation, bloating, and abdominal pain—which made him wonder whether lying down all day changes a person’s relationship to the force of gravity. “Why is it that she’s not able to move her intestines as well as she could before?” he questioned. 

Think of the GI tract as a sack of potatoes. Humans internally lug around this sack their whole lives, though Spiegel argues that some people’s body compositions are better suited to carrying that sack around than others. But according to Newton’s Third Law (for every force in nature there is an equal and opposite reaction), because gravity is pulling our body down, our bodies must have “antigravity” mechanisms in place to stabilize organs. This support comes from musculoskeletal structures like the spine and mesentery that work as an internal suspension system to hold the intestines in place. What’s more, the rib cage along with the spine helps to secure the position of the diaphragm, which acts as a ceiling mount to suspend organs in the upright abdominal cavity. Together, these structures work like a crane to stabilize and keep the organs in place.

The gravity hypothesis is not meant to disprove other ideas on what causes IBS, but rather, a way to tie them all into one concise explanation. 

But what happens when the antigravity mechanisms in our body fail? You’ll see symptoms very similar to those of people who have IBS, according to the research paper. When the musculoskeletal system is not aligned with gravity, it’s not capable of completely resisting this force of attraction. The mismatched strain between attractive and repulsive forces would theoretically cause tension in the body, resulting in muscle cramping and pain from being unable to properly support the contents of the abdomen. Additionally, excess pressure on the spine from trying to stabilize sagging structures would cause intense back pain. Finally, if the abdominal crane starts to sag and loosens its hold, the pull of gravity would cause the organs to move out of place, pushing the GI tract forward and leaving little space for food to move in and out of the tract. All of these changes may combine to produce IBS symptoms.

One point Spiegel emphasizes is that the gravity hypothesis is not meant to disprove other ideas—two popular ones being that IBS is caused by changes in the gut microbiome or by elevated serotonin levels—but rather, a way to tie them all into one concise explanation. 

“Intestines fall under the force of gravity, and they can develop a problem where they kink up almost like a twisted garden hose that makes it hard for water to get through,” Spiegel says. “As a result, they get bacterial overgrowth, and they get abdominal pain and gassiness.”

[Related: What happens if you get diarrhea in space?]

Julie Khlevner, a gastroenterologist at the NewYork-Presbyterian Morgan Stanley Children’s Hospital who was not affiliated with the research, says that while the gravity hypothesis is less conventional than other prevailing theories for IBS, it has been previously used to explain other diseases like amyotrophic lateral sclerosis. “Although [it’s] thought provoking and theoretically compatible with the clinical manifestations of IBS, it remains in its hypothetical stage and requires further research,” she cautions. “For now, the currently accepted concepts in pathophysiology of IBS [alterations in the bidirectional brain-gut-microbiome interaction] will remain the pillars for development of targeted therapies.”

If Spiegel is right about his rationale, he could be onto something bigger. Understanding how gravity alters our bodily functions could help explain why certain exercises, such as yoga and tai chi, can relieve GI symptoms by strengthening musculoskeletal muscles and the anterior abdominal wall. Or why people experience more stomach problems at high altitudes, like when climbing mountains, or, more generally, why women are disproportionately affected by IBS. Spiegel already has an explanation for the last issue (he says women have more elastic internal structures than men, including floppier and longer colons that are more susceptible to the pull of gravity), but he’s hoping others will pursue the same line of work and help bring relief to the millions of people living with IBS every day.

The post Gravity could be bringing you down with IBS appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
What this jellyfish-like sea creature can teach us about underwater vehicles of the future https://www.popsci.com/technology/marine-animal-siphonophore-design/ Tue, 29 Nov 2022 20:00:00 +0000 https://www.popsci.com/?p=492881
A jellyfish-like sea creature that's classified as a siphonophore
NOAA Photo Library / Flickr

Nanomia bijuga is built like bubble wrap, and it's a master of multi-jet propulsion.

The post What this jellyfish-like sea creature can teach us about underwater vehicles of the future appeared first on Popular Science.

]]>
A jellyfish-like sea creature that's classified as a siphonophore
NOAA Photo Library / Flickr

Sea creatures have developed many creative ways of getting around their watery worlds. Some have tails for swimming, some have flippers for gliding, and others propel themselves using jets. That last transportation mode is commonly associated with squids, octopuses, and jellyfish. For years, researchers have been interested in trying to transfer this type of movement to soft robots, although it’s been challenging. (Here are a few more examples.) 

A team led by researchers from the University of Oregon has sought a closer understanding of how these gelatinous organisms steer themselves about their underwater domains, in order to brainstorm better ways of designing underwater vehicles of the future. Their findings were published this week in the journal PNAS. The creature they focused on was Nanomia bijuga, a close relative of jellyfish, which looks largely like two rows of bubble wrap with some ribbons attached on one end of it. 

This bubble wrap body is known as the nectosome, and each individual bubble is called a nectophore. All of the nectophores can produce jets of water independently by expanding and contracting to direct currents of seawater through a flexible opening. Technically speaking, each nectophore is an organism in and of itself, and they’re bundled together into a colony. The Monterey Bay Aquarium Research Institute describes these animals as “living commuter trains.” 

The bubble units can coordinate to swim together as one, produce jets in sequence, or do their own thing if they want. Importantly, a few patterns of firing the jets produce the major movements. Firing pairs of nectophores in sequence from the tip to the ribbon tail enables Nanomia to swim forward or in reverse. Firing all the nectophores on one side, or firing some individual nectophores, turns and rotates its body. Using these commands for the multiple jets, Nanomia can migrate hundreds of yards twice a day, down to depths of 2,300 feet (which includes the twilight zone). 

For Nanomia, the number of nectophores can vary from animal to animal. So, to take this examination further, the team wanted to see whether this variation impacted swimming speed or efficiency. Both efficiency and speed appear to increase with more nectophores, but seem to hit a plateau at around 12.  

This system of propulsion lets Nanomia move about the ocean at rates similar to many fish (judged by speed in the context of body length), but without the high metabolic cost of operating a neuromuscular system. 

[Related: This tiny AI-powered robot is learning to explore the ocean on its own]

So, how could this sea creature help inform the design of vehicles that travel beneath the waves? Caltech’s John Dabiri, one of the authors on the paper, has long been a proponent of taking inspiration from the fluid dynamics of critters like jellyfish to fashion aquatic vessels. And while the researchers in this paper do not suggest a specific design for a propulsion system for underwater vehicles, they do note that the behavior of these animals may offer helpful guidelines for engines that operate through multiple jets. “Analogously to [Nanomia] bijuga, a single underwater vehicle with multiple propulsors could use different modes to adapt to context,” the researchers wrote in the paper. 

Simple changes in the timing of how the jets fire, or which jets fire together, can have a big impact on the energy efficiency and speed of a vehicle. For example, if engineers wanted to make a system that doesn’t need a lot of power, then it might be helpful to have jets that can be controlled independently. If the vehicle needs to be fast, then it needs a mode that can fire all the jets in sync.
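
To see why timing matters, here’s a toy model of 12 pulsed jets under two schedules—synchronous for a burst of speed, staggered for smooth cruising. The numbers are invented for illustration and are not taken from the PNAS paper:

```python
# Toy model: same total impulse per cycle, two different firing schedules.
N_JETS = 12
PULSE_THRUST = 1.0   # thrust units per jet while it fires
DUTY = 1 / N_JETS    # each jet fires for 1/12 of the cycle

# Synchronous: all jets fire at once -> a big kick, then a long coast.
sync_peak = N_JETS * PULSE_THRUST
# Staggered: jets fire one after another -> smooth, continuous thrust.
staggered_thrust = PULSE_THRUST

print(f"Synchronous: peak {sync_peak:.0f}, idle {1 - DUTY:.0%} of each cycle")
print(f"Staggered:   steady {staggered_thrust:.0f}, no idle time")
# Both schedules deliver the same average thrust; only the profile differs.
```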

“For underwater vehicles with few propulsors, adding propulsors may provide large performance benefits,” the researchers noted, “but when the number of propulsors is high, the increase in complexity from adding propulsors may outweigh the incremental performance gains.”


The post What this jellyfish-like sea creature can teach us about underwater vehicles of the future appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
The leap second’s time will be up in 2035—and tech companies are thrilled https://www.popsci.com/technology/bipm-abandon-leap-second/ Sat, 26 Nov 2022 15:00:00 +0000 https://www.popsci.com/?p=490660
people walking in front of clock
Stijn te Strake / Unsplash

Y2Yay?

The post The leap second’s time will be up in 2035—and tech companies are thrilled appeared first on Popular Science.

]]>
people walking in front of clock
Stijn te Strake / Unsplash

It’s the final countdown for the leap second, a janky way of aligning the atomic clock with the natural variation in the Earth’s rotation—but we’ll get to that. At a meeting last week in Versailles, the International Bureau of Weights and Measures (BIPM) voted nearly unanimously to abandon the controversial convention in 2035 for at least 100 years. Basically, the world’s metrologists (people who study measurement) are crossing their fingers and hoping that someone will come up with a better solution for syncing human timekeeping with nature. Here’s why it matters. 

Unfortunately for us humans, the universe is a messy place. Approximate values work well for day-to-day life but aren’t sufficient for scientific measurements or advanced technology. Take years: Each one is 365 days long, right? Well, not quite. It actually takes the Earth something like 365.25 days to orbit the sun. That’s why approximately every fourth year (except for years evenly divisible by 100 but not by 400) is 366 days long. The extra leap day keeps our calendar roughly aligned with the Earth’s actual orbit. 
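
That divisible-by-100-but-not-400 exception is easier to read as code than as prose. The standard check:

```python
# The Gregorian leap-year rule from the paragraph above.
def is_leap_year(year: int) -> bool:
    """Divisible by 4, except centuries, unless divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 2000, 2020, 2023, 2100) if is_leap_year(y)])
# -> [2000, 2020]: 1900 and 2100 are skipped, but 2000 makes the cut
```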

Things get more frustrating the more accurately you try to measure things. A day is 86,400 seconds long—give or take a few milliseconds. The Earth’s rotation is actually slowing down due to lots of complicated factors, including the ocean tides and shifts in how the Earth’s mass is distributed. All this means that days are getting ever so slightly longer, a few milliseconds at a time. If we continued to assume that all days are exactly 86,400 seconds long, our clocks would drift out of alignment with the sun. Wait long enough, and the sun would start rising at midnight. 

In 1972, the BIPM (the initialism comes from the French name, Bureau International des Poids et Mesures) agreed to a simple fix: leap seconds. Like leap days, leap seconds would be inserted into the year so as to align Coordinated Universal Time (UTC) with the Earth-tracking Universal Time (UT1). Leap seconds aren’t needed predictably or very often. So, instead of following a regular pattern for adding them, the BIPM would tally up all the extra milliseconds and, whenever necessary, tell everyone to add one whole second to the clock. Between 1972 and now, 27 leap seconds have been inserted into UTC. 

While probably not the best idea even back in the ’70s, the leap second has become a progressively worse one as computers have made precision timekeeping more widespread. When the leap second was created, accurate clocks were the preserve of research laboratories and military installations. Now, every smartphone can get the exact UTC time, accurate to 100 billionths of a second, from GPS and other navigation satellites in orbit. 

The problem is that all the interlinked computers on the internet use UTC to function, not just let you know that it’s time for lunch. When files are saved to a database, they’re time stamped with UTC; when you play an online video game, it relies on UTC to work out who shot first; if you post a Tweet, UTC is in the mix. Keeping everything on track is a major headache for large tech companies like Meta—which recently published a blog post calling for the abolition of the leap second—that rely on UTC to keep their servers in sync and operational.

That’s because the process of adding leap seconds—or possibly removing one, as the Earth appears to be speeding up again for some reason—breaks key assumptions that computers have about how time works. These are simple rules: Minutes have 60 seconds, time always goes forward, doesn’t repeat, doesn’t stop, and so on. Inserting and removing leap seconds makes it very easy for two computers that are meant to be in sync to get out of sync—and when that happens, things break. 

When a leap second was added in 2012, Reddit went down for 40 minutes. DNS provider Cloudflare had an outage on New Year’s Day in 2017 when the most recent leap second was added. And these happened despite the best efforts of the companies involved to account for the leap second and mitigate any adverse effects.

Large companies have developed techniques like “smearing,” where the leap second is added gradually over a number of hours rather than all at once. Still, it would make things a lot easier if they didn’t have to bother at all. 
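
Here’s a minimal sketch of how a linear smear works, assuming a 24-hour window centered on the leap second (real deployments vary in window length and shape):

```python
from datetime import datetime, timedelta, timezone

# Linear leap-second smearing: absorb the extra second gradually
# instead of inserting a 61st second all at once.
LEAP = datetime(2017, 1, 1, tzinfo=timezone.utc)  # the most recent leap second
WINDOW = timedelta(hours=24)                      # assumed smear window

def smeared_offset(now: datetime) -> float:
    """Fraction of the leap second already applied to smeared clocks at `now`."""
    elapsed = (now - (LEAP - WINDOW / 2)) / WINDOW
    return min(max(elapsed, 0.0), 1.0)  # ramps smoothly from 0 to 1 second

print(smeared_offset(LEAP - timedelta(hours=12)))  # 0.0: smear hasn't started
print(smeared_offset(LEAP))                        # 0.5: halfway through
print(smeared_offset(LEAP + timedelta(hours=12)))  # 1.0: fully absorbed
```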

Of course, that brings us back to last Friday’s important decision. From 2035, leap seconds are no longer going to matter. The BIPM is going to allow UTC and UT1 to drift apart until at least 2135, hoping that scientists can come up with a better system of accounting for lost time—or computers can get smarter about handling clock changes. It’s not a perfect fix, but like many modern problems, it might be easier to kick it down the line.

The post The leap second’s time will be up in 2035—and tech companies are thrilled appeared first on Popular Science.

Articles may contain affiliate links which enable us to share in the revenue of any purchases made.

]]>
Astronomers now know how supermassive black holes blast us with energy https://www.popsci.com/science/black-hole-light-energy-x-ray/ Wed, 23 Nov 2022 18:54:41 +0000 https://www.popsci.com/?p=490856
Black hole shooting beam of energy out speed of light and being caught by a space telescope in an illustration. There's an inset showing blue and purple electromagnetic waves,
This illustration shows the IXPE spacecraft, at right, observing blazar Markarian 501, at left. A blazar is a black hole surrounded by a disk of gas and dust with a bright jet of high-energy particles pointed toward Earth. The inset illustration shows high-energy particles in the jet (blue). Pablo Garcia (NASA/MSFC)

An extreme particle accelerator hundreds of millions of light-years away is directing high-energy radiation at Earth.

The post Astronomers now know how supermassive black holes blast us with energy appeared first on Popular Science.

]]>
Black hole shooting beam of energy out speed of light and being caught by a space telescope in an illustration. There's an inset showing blue and purple electromagnetic waves,
This illustration shows the IXPE spacecraft, at right, observing blazar Markarian 501, at left. A blazar is a black hole surrounded by a disk of gas and dust with a bright jet of high-energy particles pointed toward Earth. The inset illustration shows high-energy particles in the jet (blue). Pablo Garcia (NASA/MSFC)


Some 450 million light-years away from Earth in the constellation Hercules lies a galaxy named Markarian 501. In the visible-light images we have of it, Markarian 501 looks like a simple, uninteresting blob.

But looks can be deceiving, especially in space. Markarian 501 is a launchpad for charged particles traveling near the speed of light. From the galaxy’s heart erupts a bright jet of high-energy particles and radiation, rushing right in Earth’s direction. That makes it a perfect natural laboratory to study those accelerating particles—if only scientists could understand what causes them.

In a paper published in the journal Nature today, astronomers have been able to take a never-before-seen look deep into the heart of one of those jets and see what drives those particles out in the first place. “This is the first time we are able to directly test models of particle acceleration,” says Yannis Liodakis, an astronomer at the University of Turku in Finland and the paper’s lead author.

Markarian 501 is a literally shining example of a special class of galaxy called a blazar. What makes this galaxy so bright is the supermassive black hole at its center. The gravity-dense region spews a colossal wellspring of high-energy particles, forming a jet that travels very near the speed of light and stretches over hundreds of millions of light-years.

Many galaxies have supermassive black holes that spew out jets like this—they’re what astronomers call active galactic nuclei. But blazars like Markarian 501 are defined by the fact that their jets are pointed right in Earth’s general direction. Astronomers can train telescopes on one to look upstream and get a clear view of a constant torrent of particles and radiation spanning every part of the electromagnetic spectrum, from bright radio waves to visible light to blazing gamma rays.

[Related: You’ve probably never heard of terahertz waves, but they could change your life]

A blazar can spread its influence far beyond its own corner of the universe. For instance, a detector buried under the Antarctic ice caught a neutrino—a ghostly, low-mass particle that does its best to elude physicists—coming from a blazar called TXS 0506+056. It was the first time researchers had ever picked up a neutrino alighting on Earth from a point of origin outside the solar system (and from 5 billion light-years away, at that).

But how does a supermassive black hole’s jet actually generate light and other electromagnetic waves? What happens inside that jet? If you were surfing inside of it, what exactly would you feel and see?

Scientists want the answers, too, and not just because the questions make for a fun, extreme thought experiment. Blazars are natural particle accelerators, far larger and more powerful than any accelerator we can currently hope to build on Earth. By analyzing the dynamics of a blazar jet, researchers can learn what natural processes accelerate matter to near the speed of light. What’s more, Markarian 501 is one of the more desirable blazars to study: it’s relatively close to Earth, at least compared to other blazars that lie billions of light-years farther away.

[Related: What would happen if you fell into a black hole?]

So, Liodakis and dozens of colleagues from around the world took to observing it. They used the Imaging X-ray Polarization Explorer (IXPE), a jellyfish-like telescope launched by NASA in December 2021, to look down the length of that jet. In particular, IXPE measured whether the jet’s X-rays were polarized: that is, how their electromagnetic waves are oriented in space. The waves from a light bulb, for instance, aren’t polarized—they wiggle every which way. The waves from an LCD screen, on the other hand, are polarized and only wiggle in one direction, which is why you can pull tricks like making your screen invisible to everyone else.
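
That one-direction wiggle follows a simple rule known as Malus’s law: the intensity of polarized light that makes it through a polarizing filter scales with the square of the cosine of the angle between the light’s polarization and the filter’s axis. Here is a minimal sketch of that textbook relationship in Python (illustrative physics only, not the IXPE analysis pipeline):

```python
import math

def transmitted_intensity(i0, theta_deg):
    """Malus's law: intensity of polarized light after passing through a
    polarizing filter rotated theta_deg from the light's polarization axis."""
    theta = math.radians(theta_deg)
    return i0 * math.cos(theta) ** 2

# An aligned filter passes everything; a crossed filter blocks it all.
print(transmitted_intensity(1.0, 0))    # 1.0
print(transmitted_intensity(1.0, 45))   # ~0.5
print(transmitted_intensity(1.0, 90))   # ~0
```

Crossed polarizers are exactly the trick behind making an LCD screen invisible: the screen’s light is polarized, so a filter turned 90 degrees to it blocks nearly all of it.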

Back in the sky, if astronomers know the polarization of light from a source, they can work backward to reconstruct what happened at its origin. Liodakis and his colleagues had some idea of what to expect, because experts in their field had previously spent years modeling and simulating jets on their computers. “This was the first time we were able to directly test the predictions from those models,” he explains.

They found that the culprits were shockwaves: fronts of fast-moving particles crashing into slower-moving particles, speeding them along like flotsam pushed by rushing water. The violent crashes created the X-rays that the astronomers saw in IXPE’s readings.

It’s the first time that astronomers have used X-ray polarization measurements to identify what accelerates a blazar jet’s particles. “This is really a breakthrough in our understanding of these sources,” says Liodakis.

In an accompanying perspective in Nature, Lea Marcotulli, an astrophysicist at Yale University who wasn’t an author on the paper, called the result “dazzling.” “This huge leap forward brings us yet another step closer to understanding these extreme particle accelerators,” she wrote.

Of course, there are still many unanswered questions surrounding the jets. Do these shockwaves account for all the particles accelerating from Markarian 501’s black hole? And do other blazars and galaxies have shockwaves like them?

Liodakis says his group will continue to study the X-rays from Markarian 501, at least into 2023. With an object this dazzling, it’s hard to look away.


]]>
Magnets might be the future of nuclear fusion https://www.popsci.com/science/nuclear-fusion-magnet-nif/ Fri, 11 Nov 2022 11:00:00 +0000 https://www.popsci.com/?p=486077
A hohlraum at the Lawrence Livermore National Laboratory.
A target at the NIF, pictured here in 2008, includes the cylindrical fuel container called a hohlraum. Lawrence Livermore National Laboratory

When shooting lasers at a nuclear fusion target, magnets give you a major energy increase.

The post Magnets might be the future of nuclear fusion appeared first on Popular Science.

]]>

For scientists and dreamers alike, one of the greatest hopes for a future of bountiful energy is nestled in a winery-dotted vale east of San Francisco.

Here lies the National Ignition Facility (NIF) at California’s Lawrence Livermore National Laboratory. Inside NIF’s boxy walls, scientists are working to create nuclear fusion, the same physics that powers the sun. About a year ago, NIF scientists came closer than anyone to a key checkpoint in the quest for fusion: creating more energy than was put in.

Unfortunately, in an outcome familiar to anyone who follows fusion, that milestone would have to wait. In the months after the achievement, NIF scientists weren’t able to replicate their feat.

But they haven’t given up. And a recent paper, published in the journal Physical Review Letters on November 4, might bring them one step closer to cracking a problem that has confounded energy-seekers for decades. Their latest trick: lighting up fusion within the flux of a strong magnetic field.

Fusion power, to put it simply, aims to ape the sun’s interior. By smashing certain hydrogen atoms together and making them stick, you get helium and a lot of energy. The catch is that actually making atoms stick together requires very high temperatures—which, in turn, requires fusion operators to spend incredible amounts of energy in the first place.
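
The bookkeeping behind that payoff is Einstein’s E = mc²: the fused products weigh slightly less than the ingredients, and the missing mass leaves as energy. A quick back-of-the-envelope check in Python, using standard textbook masses for the deuterium-tritium reaction NIF burns (illustrative arithmetic, not NIF’s own code):

```python
# Energy released by D + T -> He-4 + n, via the mass defect.
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

m_deuterium = 2.014102  # all masses in atomic mass units (u)
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
print(f"{mass_defect * U_TO_MEV:.1f} MeV per fusion")  # ~17.6 MeV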

[Related: In 5 seconds, this fusion reactor made enough energy to power a home for a day]

Before you can even think about making a feasible fusion power plant, you need to somehow create more energy than you put in. That tipping point—a point that plasma physicists call ignition—has been fusion’s longest-sought goal.

The NIF’s container of choice is a gold-plated cylinder, smaller than a human fingernail. Scientists call that cylinder a hohlraum; it houses a peppercorn-sized pellet of hydrogen fuel.

At fusion time, scientists fire finely tuned laser beams at the hohlraum—in NIF’s case, 192 beams in all—energizing the cylinder enough to unleash violent X-rays within. In turn, those X-rays wash over the pellet, squeezing and battering it into an implosion that fuses hydrogen atoms together. That, at least, is the hope.

NIF used this method to achieve its smashing result in late 2021: creating some 70 percent of the energy put in, far and away the record at the time. For plasma physicists, it was a clarion call. “It has breathed new enthusiasm into the community,” says Matt Zepf, a physicist at the Helmholtz Institute Jena in Germany. Fusion-folk wondered: Could NIF do it again?

As it happens, they would have to wait. Subsequent laser shots didn’t come close to matching the original. Part of the problem is that, even with all the knowledge and capabilities they have, scientists have a very hard time predicting what exactly a shot will do.

[Related: Nuclear power’s biggest problem could have a small solution]

“NIF implosions are currently showing significant fluctuations in their performance, which is caused by slight variations in the target quality and laser quality,” says John Moody, a physicist at NIF. “The targets are very, very good, but slight imperfections can have a big effect.”

Physicists could continue fine-tuning their laser or tinkering with their fuel pellet. But there might be a third way to improve that performance: bathing the hohlraum and its fuel pellet in a magnetic field.

Tests with other lasers, like OMEGA in Rochester, New York, and the Z machine at Sandia National Laboratories in New Mexico, had shown that this method could prove fruitful. Moreover, computer simulations of NIF’s own laser suggested that a magnetic field could double the energy of NIF’s best-performing shots.

“Pre-magnetized fuel will allow us to get good performance even with targets or laser delivery that is a little off of what we want,” says Moody, one of the paper’s authors.

So NIF scientists decided to try it out themselves.

They had to swap out the hohlraum first. Pure gold wouldn’t do: ramping up a magnetic field like theirs would induce electric currents in the cylinder’s walls, tearing it apart. So the scientists crafted a new cylinder, forged from an alloy of gold and tantalum, a rare metal found in some electronics.

Then, the scientists stuffed their new hohlraum with a hydrogen pellet, switched on the magnetic field, and lined up a shot.

As it happened, the magnetic field indeed made a difference. Compared to similar magnetless shots, the energy increased threefold. It was a low-powered test shot, to be sure, but the results give scientists a new glimmer of hope. “The paper marks a major achievement,” says Zepf, who was not an author of the report.

Still, these are early days, “essentially learning to walk before we run,” Moody cautions. Next, the NIF scientists will try to replicate the experiment with other laser setups. If they can do that, they’ll know they can add a magnetic field to a wide range of shots.

As with anything in this misty realm of physics, this alone won’t be enough to solve all of fusion’s problems. Even if NIF does achieve ignition, afterward comes phase two: being able to create significantly more energy than you put in, something that physicists call “gain.” Especially for a laser of NIF’s limited size, says Zepf, that is an even more forbidding quest.

Nonetheless, the eyes of the fusion world will be watching. Zepf says that NIF’s results can teach similar facilities around the world how to get the most from their laser shots.

Achieving a high enough gain is a prerequisite for a phase that’s even further into the future: actually turning the heat of fusion power into a feasible power plant design. That’s still another step for plasma physicists—and it’s a project that the fusion community is already working on.


]]>
Bees can sense a flower’s electric field—unless fertilizer messes with the buzz https://www.popsci.com/science/bumblebees-flowers-cues-electric-fields/ Wed, 09 Nov 2022 22:00:00 +0000 https://www.popsci.com/?p=485757
a fuzzy bumblebee settles on a pink flower
Pollinators, like this bumblebee (Bombus terrestris), can detect all kinds of sensory cues from flowers. Deposit Photos

Bumblebees are really good at picking up on cues from flowers, even electrical signals.

The post Bees can sense a flower’s electric field—unless fertilizer messes with the buzz appeared first on Popular Science.

]]>

Bees are well-versed in the unspoken language of flowers. These buzzing pollinators are in tune with many features of flowering plants—the shape of the blooms, the diversity of colors, and their alluring scents—which bees rely on to tell whether a reward of nectar and pollen is near. But bees can also detect signals that go beyond sight and smell. The tiny hairs covering their bodies, for instance, are ultra-sensitive to electric fields that help bees identify flowers. These electric fields can influence how bees forage—or, if those fields are artificially changed, even disrupt that behavior.

Today in the journal PNAS Nexus, biologists found that synthetic spray fertilizers can temporarily alter the electric cues of flowers, a shift that causes bumblebees to land less frequently on plants. The team also tested imidacloprid, a type of neonicotinoid pesticide known to be toxic and detrimental to honeybee health, and detected similar changes to the electric field around flowers. Interestingly, the chemicals did not seem to impact vision and smell cues, hinting that this lesser-known signal plays a greater role in communication than previously appreciated.

“Everything has an electric field,” says Ellard Hunting, lead study author and sensory biophysicist at the University of Bristol. “If you are really small, small weak electric fields become very profound, especially if you have lots of hairs, like bees and insects.” 

[Related: A swarm of honeybees can have the same electrical charge as a storm cloud]

Biologists are just beginning to understand how important electric signals are in the world of floral cues. To distinguish between more and less resource-rich flowers within a species, bees, for instance, can recognize specific visual patterns on petals, like spots on the surface, and remember them for future visits. The shape of the bloom also matters—larger, more open flowers might be an easier landing pad for less agile beetles, while narrow, tube-shaped blossoms are hotspots for butterflies with long mouthparts that can reach nectar. Changes in humidity around a flower have also been found to influence hawkmoths, as newly opened flowers typically have higher humidity levels.

An electrical cue, though, is “a pretty recent thing that we found out about,” says Carla Essenberg, a biologist studying pollination ecology at Bates College in Maine who was not involved in the study. A 2016 study found that foraging bumblebees change a flower’s electric field for about 1 to 2 minutes. The study authors suggested that even this short change might be detectable by other passerby bees, informing them the flower was recently visited—and has less nectar and pollen to offer. 

A flower’s natural electric field is largely created by its bioelectric potential—the flow of charge produced by or occurring within living organisms.  But electric fields are a dynamic phenomenon, explains Hunting. “Flowers typically have a negative potential and bees have a positive potential,” Hunting says. “Once bees approach, they can sense a field.” The wind, a bee’s landing, or other interactions will trigger immediate changes in a flower’s bioelectric potential and its surrounding field. Knowing this, Hunting had the idea to investigate any electric field changes caused by chemical applications, and if they deterred bee visits. 
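
Those field strengths are easy to ballpark with Coulomb’s law. The charge below is a hypothetical round number chosen for illustration (the study doesn’t report this figure), but it shows why a small, charged flier feels a flower’s field so strongly at close range:

```python
# Rough point-charge estimate of electric field strength at bee scale.
# The 30-picocoulomb charge is an illustrative value, not from the paper.
K = 8.99e9  # Coulomb constant, N*m^2 / C^2

def field_strength(charge_coulombs, distance_m):
    """Electric field magnitude of a point charge, in volts per meter."""
    return K * charge_coulombs / distance_m ** 2

print(f"{field_strength(30e-12, 0.01):.0f} V/m")  # ~2,700 V/m at 1 cm
print(f"{field_strength(30e-12, 0.10):.0f} V/m")  # ~27 V/m at 10 cm
```

The field falls off with the square of distance, which is why these cues only come into play once a bee is practically on top of the flower.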

He started out with pesticides because of the well-studied impacts they can have on insects. “But then I figured, fertilizer also has a charge, and they are also applied and it is way more relevant on a larger-scale,” he says. These chemical mixtures used in agriculture and gardens often contain various levels of nitrogen, phosphorus, and potassium. “Everyone uses [fertilizers], and they’re claimed to be non-toxic.”

First, to assess bumblebee foraging behavior, Hunting and his colleagues set up an experiment at a rural field site on the University of Bristol campus using two potted lavender plants. They sprayed a commercially available fertilizer mixture on one of the potted plants while spraying the other with demineralized water. Then, the team watched as bumblebees bypassed the fertilizer-covered lavender. Sprays that contained the pesticide or fertilizer changed the bioelectric potential of the flower for up to 25 minutes—much longer than shifts caused by wind or a bee landing.

[Related: Arachnids may sense electrical fields to gain a true spidey sense]

But to confirm that the bees were avoiding the fertilizer because of a change in electric field—and not because of the chemical compounds or other factors—the researchers needed to recreate the electric shift in the flower without actually spraying. In his soccer-pitch-sized backyard, a natural area free of other sources of electricity, Hunting manipulated the bioelectric potential of lavender plants to mimic the change. He placed the stems in water, wired them with electrodes, and ran a current through them with a DC power-bank battery. This created an electric field around the plant in the same way the fertilizer had.

He observed that while the bees approached the electrically manipulated flowers, they did not land on them. They also approached the flowers significantly less than the control flowers, Hunting says. “This shows that the electrics alone already elicit avoidance behavior.”

Hunting suggests that the plant’s defense mechanism might be at the root of the electrical change. “What actually happens if you apply chemicals to plant cells, it triggers a chemical stress response in the plant, similar to a wounding response,” he explains. The plant sends metabolites—which have ionic charge—to start to fix the tissue. This flux of ions generates an electric current, which the bees detect. 

The researchers also noted that the chemicals didn’t seem to impact vision or smell, and that, interestingly, the plants sprayed with pesticide and fertilizers seemed to experience a shift in electric field again after it rained. This could indicate that the effect persists beyond just one spray. The new findings could have implications for casual gardeners and major agricultural industries, the researchers note. 

“Ideally, you would apply fertilizer to the soil [instead of spraying directly on the plant],” Hunting says. But that would require more labor than the approach used by many in US agriculture, in which airplanes spray massive fields. 

[Related: Build a garden that’ll have pollinators buzzin’]

Essenberg says that, luckily, the electric field changes are relatively short-lived, making it a bit easier for farmers to find workarounds. For instance, they could spray agricultural chemicals during the middle of the day, when pollinators forage less frequently: many flowers open in the morning and typically run out of pollen by midday.

The toxicity of chemical sprays is probably a bigger influence “at the population level” on bee decline, Essenberg says. But this study offers a new idea: that change in electric potential might need to be taken into account for effectively spraying plants. “It raises questions about what other kinds of things might influence that potential,” she adds, such as contaminants in the air or pollution that falls with the rain.

Essenberg says it would be helpful to look at the impacts of electric field changes in more realistic foraging settings over longer periods of time. Hunting agrees. “Whether the phenomenon is really relevant in the long run, it might be, but we need to uncover more about this new mechanism.” 


]]>
This far-off galaxy is probably shooting us with oodles of ghostly particles https://www.popsci.com/science/icecube-neutrino-spiral-galaxy/ Thu, 03 Nov 2022 18:00:00 +0000 https://www.popsci.com/?p=483938
The center of Messier 77's spiral galaxy.
The center of the galaxy NGC 1068 (also known as Messier 77) where neutrinos may originate, as captured by the Hubble Space Telescope. NASA, ESA & A. van der Hoeven

A sophisticated experiment buried under Antarctica is tracing neutrinos to their extraterrestrial origin.

The post This far-off galaxy is probably shooting us with oodles of ghostly particles appeared first on Popular Science.

]]>

Deep under the South Pole sits an icebound forest of wiring called IceCube. It’s no cube: IceCube is a hexagonal formation of kilometer-deep holes in the ice, drilled with hot water and filled with electronics. Its purpose is to pick up flickers of neutrinos—ghostly little particles that often come from space and phase through Earth with hardly a trace. 

Four years ago, IceCube helped scientists find their first hint of a neutrino source outside our solar system. Now, for the second time, IceCube scientists have pinpointed a fountain of far-traveler neutrinos, originating from NGC 1068, a galaxy about 47 million light-years away.

Their results, published in the journal Science on November 3, need further confirmation. But if these observations are correct, they’re a key step in helping astronomers understand where in the universe those neutrinos originate. And, curiously, NGC 1068 is very different from IceCube’s first suspect.

Neutrinos are little phantoms. By some estimates, 100 trillion pass through your body every single second—and virtually none of them interact with your body’s atoms. Unlike charged particles such as protons and electrons, neutrinos are immune to the pulling and pushing of electromagnetism. Neutrinos have so little mass that, for many years, physicists thought they had no mass at all.

Most neutrinos that we see from space spew out from the sun. But scientists are especially interested in the even more elusive breed of neutrinos that come from outside the solar system. For astronomers, IceCube represents a wish: If researchers can observe far-flung neutrinos, they can use them to see through gas and dust clouds, which light typically doesn’t pass through.

[Related: We may finally know where the ‘ghost particles’ that surround us come from]

IceCube’s mission is to find those neutrinos, which reach Earth with far more energy than any solar neutrino. Although it’s at the South Pole, IceCube actually focuses on neutrinos striking the northern hemisphere. IceCube’s detectors try to discern the direction a neutrino is traveling. If IceCube detects particles traveling downward, scientists struggle to discern them from the raging static of cosmic radiation that constantly batters Earth’s atmosphere. If IceCube detects particles traveling upward, on the other hand, scientists know that they’ve come from the north, having passed through the bulk of our planet before striking the icebound detectors.
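
That selection boils down to a single geometric cut. The toy filter below, with made-up event records (not IceCube’s actual analysis software), shows the idea: tracks arriving from below the horizon had to traverse the planet, and only neutrinos can make that trip:

```python
# Toy event selection: trust tracks that came up through the Earth.
# Zenith angle is measured from straight overhead, so anything above
# 90 degrees entered the detector from below the horizon.
events = [
    {"id": 1, "zenith_deg": 12.0},   # down-going: likely atmospheric noise
    {"id": 2, "zenith_deg": 143.5},  # up-going: filtered by the planet itself
    {"id": 3, "zenith_deg": 95.2},   # just below the horizon: also up-going
]

candidates = [e for e in events if e["zenith_deg"] > 90]
print([e["id"] for e in candidates])  # [2, 3]
```

The real analysis layers energy estimates, track quality, and years of statistics on top of this, but the Earth-as-filter logic is the starting point.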

“We discovered neutrinos reaching us from the cosmos in 2013,” says Francis Halzen, a physicist at the University of Wisconsin-Madison, a member of the IceCube collaboration, and an author of the paper, “which raised the question of where they originate.”

Finding neutrinos is already hard; finding where they come from is orders of magnitude harder. Identifying a neutrino origin involves painstaking data analysis that can take years to complete.

Crucially, this isn’t IceCube’s first identification. In 2018, scientists comparing IceCube data to observations from traditional telescopes pinpointed one possible neutrino source more than 5 billion light-years away: TXS 0506+056. That object is an example of what astronomers call a blazar: a distant, high-energy galaxy with a central black hole that spews out a jet directly in Earth’s direction. It’s loud, bright, and the exact sort of object that astronomers thought created neutrinos.

But not everybody was convinced they had the whole picture.

“The interpretation has been under debate,” says Kohta Murase, a physicist at Pennsylvania State University, who wasn’t an author of the new paper. “Many researchers think that other source classes are necessary to explain the origin of high-energy neutrinos coming from different directions over the sky.”

So IceCube scientists got to work. They combed through nine years’ worth of IceCube observations, from 2011 to 2020. Since blazars such as TXS 0506+056 tend to spew out torrents of gamma rays, the researchers tried to match the neutrinos with known gamma-ray sources.

As it happened, the source they found wasn’t the gamma-ray source they expected.

[Related: This ghostly particle may be why dark matter keeps eluding us]

NGC 1068 (also known as M77), located some 47 million light-years from us, is not unlike our own galaxy. Like the Milky Way, it’s shaped like a spiral. Like the Milky Way, it has a supermassive black hole at its heart. Some astronomers had suspected it as a neutrino source, but any proof remained elusive.

That black hole produces a torrent of what astrophysicists call cosmic rays. Despite their name (the scientists who first discovered them thought they were rays), cosmic rays are actually ultra-energized protons and atomic nuclei hurtling through the universe at nearly the speed of light. 

But, unlike its counterpart at the center of the Milky Way, NGC 1068’s black hole is shrouded behind a thick veil of gas and dust, which blocks many of the gamma rays that would otherwise emerge. That, astronomers say, complicates the old picture of where neutrinos came from. “This is the key issue,” says Halzen. “The sources we detect are not seen in high energy gamma rays.”

As cosmic rays crash into that veil, they cause a cascade of nuclear reactions that spew out neutrinos. (In fact, cosmic rays do the same when they strike Earth’s atmosphere). One reason why the NGC 1068 discovery is so exciting, then, is that the ensuing neutrinos might give astronomers clues about those cosmic rays.

It’s not final proof; there’s not enough data quite yet to be certain. That will take more observations, more years of painstaking data analysis. Even so, Murase says, other astronomers might search the sky for galaxies like NGC 1068, galaxies whose central black holes are occluded.

Meanwhile, other astronomers believe that there are even more places high-energy neutrinos could flow from. If a star passes too close to a supermassive black hole, for instance, the black hole’s gravity might rip the star apart and unleash neutrinos in the process. As astronomers prepare to look for neutrinos, they’ll want to look for new, more diverse points in the sky, too.

They’ll soon have more than just IceCube to work with. Astronomers are laying the groundwork—or seawork—for additional high-sensitivity neutrino detectors: one at the bottom of Siberia’s Lake Baikal and another on the Mediterranean Sea floor. Soon, those may join the hunt for distant, far-traveler neutrinos.


]]>
Here’s what the Earth’s magnetic field would sound like if we could hear it https://www.popsci.com/science/earth-magnetic-field-sounds/ Fri, 28 Oct 2022 15:44:47 +0000 https://www.popsci.com/?p=481875
Earth's magnetic field in a black, red, and white illustration
Our magnetic field protects us from cosmic radiation and charged particles that bombard Earth in solar winds. ESA/ATG medialab

Listen to magnetic signals battle with a solar storm.

The post Here’s what the Earth’s magnetic field would sound like if we could hear it appeared first on Popular Science.

]]>

As the end of spooky season approaches, the icy cold and darkness of space have some extra scary vibes to offer anyone willing to listen very closely. Researchers at the Technical University of Denmark in Lyngby have converted signals from Earth’s magnetic field into sounds. The signals were measured by the European Space Agency’s (ESA) Swarm satellite mission. Take a listen.


The moody sound bite comes from the magnetic field generated by Earth’s core and its interaction with a solar storm. Earth’s magnetic field is essential to life on the planet, but it isn’t something that we can typically see or hear. It’s basically a shield that protects the planet from the cosmic radiation and charged particles coming from solar winds.

When the colorful aurora borealis (or northern lights) dances across the night sky in the upper latitudes, it’s a visual example of charged particles from the sun interacting with Earth’s magnetic field. Hearing the sounds of those interactions, however, is a bit more tricky.

[Related: Will Earth’s shifting magnetic poles push the Northern Lights too?]

According to the ESA, “Our magnetic field is largely generated by an ocean of superheated, swirling liquid iron that makes up the outer core around 1,864 miles beneath our feet. Acting as a spinning conductor in a bicycle dynamo, it creates electrical currents, which in turn, generate our continuously changing electromagnetic field.”

Strength of the magnetic field at Earth’s surface. CREDIT: DTU/ESA

In 2013, the ESA launched three Swarm satellites on a mission to help decode how Earth’s magnetic field is generated by precisely measuring the magnetic signals coming from the planet’s core, mantle, crust, and oceans, as well as from the ionosphere and magnetosphere. Swarm is also helping scientists better understand space weather.

[Related: Astronomers used telescopic ‘sunglasses’ to photograph a black hole’s magnetic field.]

“The team used data from ESA’s Swarm satellites, as well as other sources, and used these magnetic signals to manipulate and control a sonic representation of the core field,” explained musician and project supporter Klaus Nielsen, from the Technical University of Denmark, in a statement. “The project has certainly been a rewarding exercise in bringing art and science together.”
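
Sonification of this kind boils down to mapping a measured quantity onto something audible, such as pitch. Here is a minimal sketch in Python that writes a WAV file from a synthetic stand-in signal (invented data, not actual Swarm measurements):

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # audio samples per second
DURATION = 5.0        # seconds of audio

def field_sample(t):
    """Synthetic stand-in for a magnetic-field time series: a slow
    oscillation plus a brief 'storm' spike around t = 3 seconds."""
    storm = 0.8 * math.exp(-((t - 3.0) ** 2) / 0.1)
    return 0.5 + 0.3 * math.sin(2 * math.pi * 0.4 * t) + storm

frames = bytearray()
phase = 0.0
for i in range(int(SAMPLE_RATE * DURATION)):
    t = i / SAMPLE_RATE
    freq = 110 + 440 * field_sample(t)  # stronger field, higher pitch
    phase += 2 * math.pi * freq / SAMPLE_RATE
    frames += struct.pack("<h", int(12000 * math.sin(phase)))

with wave.open("field_sonification.wav", "w") as wav:
    wav.setnchannels(1)        # mono
    wav.setsampwidth(2)        # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```

The team’s installation works on the same principle, with real field measurements driving the audio instead of a made-up function.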

The team set up 30 loudspeakers to play the sounds in Solbjerg Square in Copenhagen until October 30, with each speaker representing a different spot on Earth to demonstrate how the planet’s magnetic field has fluctuated over the past 100,000 years.

“The rumbling of Earth’s magnetic field is accompanied by a representation of a geomagnetic storm that resulted from a solar flare on November 3, 2011, and indeed it sounds pretty scary,” added Nielsen.

According to the scientists, the intention isn’t to spook people, but to use the sounds as a clever way to remind us that Earth’s magnetic field exists and has a pull on our lives.


]]>
To set the record straight: Nothing can break the speed of light https://www.popsci.com/science/whats-faster-than-the-speed-of-light/ Mon, 24 Oct 2022 12:35:47 +0000 https://www.popsci.com/?p=480200
Gamma-ray burst from exploding galaxy in NASA Hubble telescope rendition
Gamma-ray bursts (like the one in this illustration) from distant exploding galaxies transmit more powerful light than the visible wavelengths we see. But that doesn't mean they're faster. NASA, ESA and M. Kornmesser

Objects may not be as fast as they appear with this universal illusion.

The post To set the record straight: Nothing can break the speed of light appeared first on Popular Science.

]]>

Back in 2018, astronomers examining the wreckage of two collided neutron stars in Hubble Space Telescope images noticed something peculiar: a stream of bright, high-energy ions jetting away from the merger in Earth’s direction, seemingly at seven times the speed of light.

That didn’t seem right, so the team recalculated with observations from a different radio telescope. In those observations, the stream was flying past at only four times the speed of light.

That still didn’t seem right. Nothing in the universe can go faster than the speed of light. As it happens, it was an illusion, a study published in the journal Nature explained earlier this month.

[Related: Have we been measuring gravity wrong this whole time?]

The phenomenon that makes particles in space appear to travel faster than light is called superluminal motion. The phrase fits the illusion: It means “more than light,” but actually describes a trick where an object moving toward you appears much faster than its actual speed. There are high-energy streams out in space that can appear to outrace light, and today astronomers are seeing a growing number of them.

“They look like they’re moving across the sky, crazy fast, but it’s just that they’re moving toward you and across the sky at the same time,” says Jay Anderson, an astronomer at the Space Telescope Science Institute in Maryland who has worked extensively with Hubble and helped author the Nature paper.

To get their jet’s true speed, Anderson and his collaborators compared Hubble and radio telescope observations. Ultimately, they estimated that the jet was zooming toward Earth at around 99.95 percent of the speed of light. That’s very close to the speed of light, but still below it.

Indeed, to our knowledge so far, nothing on or off our planet can travel faster than the speed of light. This limit has been confirmed time and time again, and it follows from the laws of special relativity, put on paper by Albert Einstein over a century ago. Light, which moves at about 670 million miles per hour, is the ultimate cosmic speed limit. Not only that, special relativity holds that the speed of light is a constant no matter who or what is observing it.

But special relativity doesn’t stop things from traveling super close to the speed of light (cosmic rays and the particles from solar flares are some examples). That’s where superluminal motion kicks in. As something moves toward you, the distance its light needs to cover to reach you keeps shrinking. In everyday life, that’s not really a factor: Even seemingly speedy things, like a plane moving through the sky above you, don’t move anywhere near the speed of light.

[Related: Check out the latest version of Boom’s supersonic plane]

But when something is moving at hundreds of millions of miles per hour, largely in our direction, the distance between the object and the perceiver (whether it be a person or a camera lens) drops very quickly. Light emitted later in the journey has a shorter trip to make, so successive images arrive bunched closely together, creating the illusion that the object is moving more rapidly than it actually is. Neither our eyes nor our telescopes can tell the difference, which means astronomers have to calculate an object’s actual speed from data collected in images.
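
The geometry reduces to one textbook formula: a blob moving at a true fraction β of light speed, at an angle θ from our line of sight, shows an apparent sky-crossing speed of β sin θ / (1 − β cos θ). A quick sanity check of the illusion in Python (standard relativistic kinematics, not the paper’s analysis code):

```python
import math

def apparent_speed(beta, theta_deg):
    """Apparent transverse speed (in units of c) of a blob moving at true
    speed beta * c, at angle theta_deg from our line of sight."""
    theta = math.radians(theta_deg)
    return beta * math.sin(theta) / (1 - beta * math.cos(theta))

# A jet at 99.95 percent of light speed, viewed nearly head-on:
for angle in (1, 2, 5, 15):
    print(f"{angle:2d} degrees off-axis -> "
          f"appears to move at {apparent_speed(0.9995, angle):.1f}c")
```

At a couple of degrees off-axis the apparent speed tops 30 times light speed, and even at 15 degrees it still looks like roughly 7c, all while the true speed stays just under c.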

The researchers behind the new Nature paper weren’t the first to grapple with superluminal motion. In fact, they’re more than a century late. In 1901, astronomers scanning the night sky caught a glimpse of a nova in the direction of the constellation Perseus: the flare-up of a white dwarf that had siphoned gas from a nearby companion star, briefly lighting up bright enough to see with the naked eye. Astronomers caught a bubble inflating from the nova at breakneck speed. But because there was no theory of relativity at the time, the event quickly faded from memory.

The phenomenon gained buzz again by the 1970s and 1980s. By then, astronomers were finding all sorts of odd high-energy objects in distant corners of the universe: quasars and active galaxies, all of which could shoot out jets of material. Most of the time, these objects were powered by black holes that spewed out high-energy jets moving at almost the speed of light. Depending on the mass and strength of the black hole they came from, the jets could stretch for thousands, hundreds of thousands, or even millions of light-years.


Around the same time, scientists studying radio waves began seeing enough faux-speeders to take notice. They even found a jet from one distant galaxy that appeared to be racing at nearly 10 times the speed of light. The observations caused a stir among astronomers, though by then the mechanism was well understood.

In the decades since, observations of superluminal motion have added up. Astronomers are seeing an ever-increasing number of jets through telescopes, particularly ones that float through space, like Hubble or the James Webb Space Telescope. When light doesn’t have to pass through Earth’s atmosphere, the resulting images can be much higher in resolution. This helps teams find more jets that are farther away (such as from ancient, distant galaxies), and it helps them view closer jets in more detail. “Things stand out much better in Hubble images than they do in ground-based images,” says Anderson.

[Related: This image wiggles when you scroll—or does it?]

Take, for instance, the distant galaxy M87, whose gargantuan central black hole launched a jet that apparently clocked in at between 4 and 6 times the speed of light. By the 1990s, Hubble could actually peer into the stream of energy and reveal that parts of it were traveling at different speeds. “You could actually see features in the jet moving, and you could measure the locations of those features,” Anderson explains.

There are good reasons for astronomers to be interested in such breakneck jets, especially now. In the case of the smashing neutron stars from the Nature study, the crash caused a gamma-ray burst, a type of high-energy explosion that remains poorly understood. The event also stirred up a storm of gravitational waves, ripples in space-time that researchers can now pick up and observe. But until they uncover some strange new physics in the matter flying through space, the speed of light remains the hard limit.


]]>
Could quantum physics unlock teleportation? https://www.popsci.com/science/quantum-teleportation-history/ Thu, 20 Oct 2022 15:30:00 +0000 https://www.popsci.com/?p=479596
illustrations of a person being teleported in a 1960s style
The article 'Teleportation: Beam Me Up, Bob' appeared in the November 1993 issue of Popular Science. Popular Science

Physicists are making leaps in quantum teleportation, but it's still a long way from 'Star Trek.'

The post Could quantum physics unlock teleportation? appeared first on Popular Science.

]]>

From cities in the sky to robot butlers, futuristic visions fill the history of PopSci. In the Are we there yet? column we check in on progress towards our most ambitious promises. Read the series and explore all our 150th anniversary coverage here.

Jetpacks, flying cars, hoverboards, bullet trains—inventors have dreamt up all kinds of creative ways, from science fiction to science fact, to get from point A to point B. But when it comes to transportation nirvana, nothing beats teleportation—vehicle-free, instantaneous travel. If beam-me-up-Scotty technology has gotten less attention than other transportation tropes—Popular Science ran short explainers in November 1993 and September 2004—it’s not because the idea isn’t appealing. Regrettably, over the decades there just hasn’t been much progress in teleportation science to report. However, since the 2010s, new discoveries on the subatomic level have been shaking up the playing field: specifically, quantum teleportation.

Just this month, the 2022 Nobel Prize in Physics was awarded to three scientists “for experiments with entangled photons,” according to the Royal Swedish Academy of Sciences, which selects the winners. The recipients’ work demonstrated that teleportation is possible—well, at least between photons (and with some serious caveats on what could be teleported). The physicists—Alain Aspect, John Clauser, and Anton Zeilinger—had independent breakthroughs over the last several decades. The result of their work not only demonstrated quantum entanglement in action but also showed how the arcane property could be a channel to teleport quantum information from one photon to another. While their findings are not anywhere close to transforming airports and train stations into Star Trek-style transporters, they have been making their way into promising applications, including quantum computing, quantum networks, and quantum encryption

“Teleportation is a very inspiring word,” says Maria Spiropulu, the Shang-Yi Ch’en professor of physics at the California Institute of Technology, and director of the INQNET quantum network program. “It evokes our senses and suggests that a weird phenomenon is taking place. But nothing weird is taking place in quantum teleportation.”

When quantum mechanics was being hashed out in the early 20th century between physicists like Max Planck, Albert Einstein, Niels Bohr, and Erwin Schrödinger, it was becoming clear that at the subatomic particle level, nature appeared to have its own hidden communication channel, called quantum entanglement. Einstein described the phenomenon scientifically in a paper published in 1935, but famously called it “spooky action at a distance” because it appeared to defy the normal rules of physics. At the time, it seemed as fantastical as teleportation, a phrase first coined by writer Charles Fort just four years earlier to describe unexplainable spectacles like UFOs and poltergeists.

“Fifty years ago, when scientists started doing [quantum] experiments,” says Spiropulu, “it was still considered quite esoteric.” As if in tribute to those scientists, Spiropulu has a print honoring physicist Richard Feynman in her office. Feynman shared the Nobel Prize in 1965 for his work on quantum electrodynamics, which he distilled into the graphical shorthand now known as Feynman diagrams.

Spiropulu equates quantum entanglement with shared memories. “Once you marry, it doesn’t matter how many divorces you may have,” she explains. Because you’ve made memories together, “you are connected forever.” At a subatomic level, the “shared memories” between particles link quantum states—like atomic spin and photon polarization—between distant particles, a connection that quantum protocols exploit to move information. These bits of quantum information are called quantum bits, or qubits. Classical digital bits are binary, meaning that they can only hold the value of 1 or 0, but qubits can exist in a superposition, carrying a certain probability of being 0 and a certain probability of being 1 at the same time. Qubits’ ability to encode this continuum of potential values allows them to process information much faster—and that’s just what physicists are looking for in a system that leverages quantum teleportation.
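
In code, a qubit is nothing more exotic than a pair of complex amplitudes whose squared magnitudes give the odds of each measurement outcome. A minimal illustration (generic quantum mechanics, not any lab’s software):

```python
import numpy as np

# A qubit holds two complex amplitudes, one for |0> and one for |1>.
# The squared magnitudes are measurement probabilities and must sum to 1.
qubit = np.array([0.6, 0.8j])
probs = np.abs(qubit) ** 2
print(probs)                          # [0.36 0.64]
print(np.isclose(probs.sum(), 1.0))   # True

# Measurement collapses the superposition: sample one outcome at those odds.
outcome = np.random.choice([0, 1], p=probs)
print(f"measured |{outcome}>")
```

The amplitudes themselves, not just the probabilities, are what teleportation has to deliver intact, phases and all.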

[Related: Quantum teleportation is real, but it’s not what you think]

But for qubits to work as information processors, they need to share information the way classical computer chips share information. Enter entanglement and teleportation. Physicists can entangle subatomic particles, like photons or electrons—the qubits—and then separate them; an operation performed on one shows up instantly in the correlations with its entangled twin.

The farthest distance to date that qubits have been separated was set by Chinese scientists, who used quantum entanglement to send information from Tibet to a satellite in orbit 870 miles away. On terra firma, the record is just tens of miles, traveling through fiber optic connections and through air (line of sight lasers).

Qubits’ strange behavior—acting like they’re still together no matter how far apart they’ve been separated—continues to puzzle and amaze physicists. “It does appear magical,” Spiropulu admits. “The effect appears very, ‘wow!’ But once you break it down, then it’s engineering.” And in just the past five years, great strides have been made in quantum engineering to apply the mysterious but predictable characteristics of qubits. Besides quantum computing advances made by tech giants like Google, IBM, and Microsoft, Spiropulu has been spearheading a government- and privately funded program to build out a quantum internet that leverages quantum teleportation.
With some guidance from Spiropulu’s postdoctoral researchers at Caltech, Venkata R. (Raju) Valivarthi and Neil Sinclair, this is how state-of-the-art quantum teleportation would work (you might want to strap yourself in):

Step 1: Entangle

a diagram of an orange unlabeled circle representing a photon pointing towards a pyramid representing a crystal and getting split into two photons labeled one and two

A laser shoots a stream of photons through a special optical crystal that can split photons into pairs. Each pair of photons is now entangled, meaning they share information. When one changes, the other will, too.

Step 2: Open a quantum teleportation channel

a diagram of photon 1 and 2 connected by a dotted line representing the quantum channel. the photons are in two locations

Then, one of the two photons is sent over a fiber optic cable (or another medium capable of transmitting light, such as air or space) to a distant location. This opens a quantum channel for teleportation. The distant photon (labeled photon one above) becomes the receiver, while the photon that remains behind (labeled photon two) is the transmitter. This channel does not necessarily indicate the direction of information flow, as the photons could be distributed in roundabout ways.

Step 3: Prepare a message for teleportation

a diagram of a message icon with an arrow pointing at a photon labeled three. above the arrow are some dots and lines representing that the message is encoded

A third photon is added to the mix and is encoded with the information to be teleported. This third photon is the message carrier. The information is encoded in the photon’s properties, or state, such as its position, polarization, and momentum. (This is where qubits come in, if you think of the encoded message in terms of 0s, 1s, and their superpositions.)

Step 4: Teleport the encoded message

a diagram of step four with the photons changing states

One of the curious properties of quantum physics is that a particle’s state, or properties, such as its spin or position, cannot be known until it is measured. You can think of it like dice. A single die can hold up to six values, but its value isn’t known until it’s rolled. Measuring a particle is like rolling dice: it locks in a specific value. In teleportation, once the third photon is encoded, a joint measurement is taken of the second and third photons’ properties, which means their states are measured at the same time and their values are locked in (like viewing the value of a pair of dice). The act of measuring changes the state of the second photon to match the state of the third photon. As soon as the second photon changes, the first photon, on the receiving end of the quantum channel, snaps into a matching state.

Now the information lies with photon one—the receiver. However, even though the information has been teleported to the distant location, it’s still encoded, which means that like an unrolled die it’s indeterminate until it can be decoded, or measured. The measurement of photon one needs to match the joint measurement taken on photons two and three. So the outcome of the joint measurement taken on photons two and three is recorded and sent to photon one’s location so it can be repeated to unlock the information. At this point, photons two and three are gone because the act of measuring photons destroys them. Photons are absorbed by whatever is used to measure them, like our eyes. 

Step 5: Complete the teleportation

step five diagram shows photons three and two whited out (meaning they are gone) and photon one with the message decoded

To decode the state of photon one and complete the teleportation, photon one must be manipulated based on the outcome of the joint measurement (an operation also called rotating it), which is like rolling the dice the same way they were rolled before for photons two and three. This decodes the message—similar to how binary 1s and 0s are translated into text or numeric values. The teleportation may seem instantaneous on the surface, but because the decoding instructions from the joint measurement can only be sent using light (in this scenario over a fiber optic cable), the photons only transfer the information at the speed of light. That’s important because teleportation would otherwise violate Einstein’s relativity principle, which states that nothing travels faster than the speed of light—if it did, this would lead to all sorts of bizarre implications and possibly upend physics. Now, the encoded information in photon three (the messenger) has been teleported from photon two’s position (transmitter) to photon one’s position (receiver) and decoded.

Whew! Quantum teleportation complete. 
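
For readers who want to see the bookkeeping, the entire five-step protocol fits in a short state-vector simulation. The following is a generic textbook teleportation circuit written in plain numpy: a sketch of the standard math, not the Caltech team’s software.

```python
import numpy as np

rng = np.random.default_rng(7)

# Single-qubit gates. Qubit 0 is the message, 1 the transmitter, 2 the receiver.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def on(gate, qubit):
    """Lift a single-qubit gate onto one qubit of the 3-qubit register."""
    ops = [I, I, I]
    ops[qubit] = gate
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def cnot(control, target):
    """Controlled-NOT on the register (qubit 0 is the leftmost bit)."""
    op = np.zeros((8, 8), dtype=complex)
    for i in range(8):
        bits = [(i >> (2 - q)) & 1 for q in range(3)]
        if bits[control]:
            bits[target] ^= 1
        op[bits[0] << 2 | bits[1] << 1 | bits[2], i] = 1
    return op

# The message: an arbitrary superposition to teleport.
alpha, beta = 0.6, 0.8j
zero = np.array([1, 0], dtype=complex)
state = np.kron(np.array([alpha, beta]), np.kron(zero, zero))

state = cnot(1, 2) @ (on(H, 1) @ state)   # steps 1-2: Bell pair on qubits 1 and 2
state = on(H, 0) @ (cnot(0, 1) @ state)   # steps 3-4: joint-measurement prep

# Measure qubits 0 and 1, collapsing the state to one of four branches.
idx = rng.choice(8, p=np.abs(state) ** 2)
m0, m1 = (idx >> 2) & 1, (idx >> 1) & 1
for i in range(8):
    if ((i >> 2) & 1) != m0 or ((i >> 1) & 1) != m1:
        state[i] = 0
state /= np.linalg.norm(state)

# Step 5: the two classical bits tell the receiver which correction to apply.
if m1:
    state = on(X, 2) @ state
if m0:
    state = on(Z, 2) @ state

base = m0 << 2 | m1 << 1
print("sent:    ", np.array([alpha, beta]))
print("received:", state[base:base + 2])   # matches the message
```

Run it with any normalized pair of amplitudes and the receiver’s qubit ends up in the message state, while the message qubit itself is destroyed by the measurement, just as described above.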

Since we transmit digital bits today using light, it might seem like quantum teleportation and quantum networks offer no inherent advantage. But the difference is significant. Qubits can convey much more information than bits. Plus, quantum networks are more secure, since attempts to interfere with quantum entanglement would destroy the open quantum channel.

Researchers have discovered many different ways to entangle, transmit, and measure subatomic information. Plus, they’re upgrading from teleporting information about photons to teleporting information about larger particles like electrons, and even atoms.

[Related: Warp speed space travel just got a tiny bit more realistic]

But it’s still just information being transmitted, not matter—the stuff that humans are made of. While the ultimate dream may be human teleportation, it actually might be a good thing we’re not there yet. 

The Star Trek television and film franchise not only helped popularize teleportation but also glamorized it with a glittery dissolve effect and catchy transporter tone. The Fly, on the other hand, a movie about teleportation gone wrong, painted a much darker, but possibly scientifically truer, picture of teleportation. That’s because teleportation is really an act of reincarnation. Teleportation of living matter is risky business: It would require scanning the traveler’s information at the point of departure, transmitting that information to the desired coordinates, and deconstructing the traveler at the point of departure while simultaneously reconstructing them at the point of arrival—we wouldn’t want errant copies of ourselves on the loose. Nor would we want to arrive as a lifeless copy of ourselves. We would have to arrive with all our beating, breathing, blinking systems intact in order for the process to be a success. Teleporting living beings, at its core, is a matter of life and death.

Or not.

Formidable minds, such as Stephen Hawking, have proposed that the information, or vector state, that is teleported over quantum entanglement channels does not have to be confined to subatomic particle properties. In fact, entire black holes’ worth of trapped information could be teleported, according to this theory. It gets weird, but by entangling two black holes and connecting them with a wormhole (a space-time shortcut), information that disappears into one black hole might emerge from the other as a hologram. Under this reasoning, the vector states of molecules, humans, and even entire planets could theoretically be teleported as holograms.

Kip Thorne, a Caltech physicist who shared the 2017 Nobel Prize in Physics for gravitational wave detection, may have best explained the possibilities of teleportation and time travel as far back as 1988: “One can imagine an advanced civilization pulling a wormhole out of the quantum foam and enlarging it to classical size. This might be analyzed by techniques now being developed for computation of spontaneous wormhole production by quantum tunneling.”

For now, Spiropulu remains focused on the immediate promise of quantum teleportation. But it won’t look anything like Star Trek. “‘Beam me up, Scotty?’ No such things,” she says. “But yes, a lot of progress. And it’s transformative.”


]]>
Refining the clock’s second takes time—and lasers https://www.popsci.com/technology/measure-second-optical-clocks-laser/ Wed, 19 Oct 2022 19:30:00 +0000 https://www.popsci.com/?p=479419
Gold pocket watch and hourglass
Metrologists hope optical clocks will provide a redefined 'second' by 2030. Deposit Photos

A Chinese research team set a new distance record for syncing two timepieces thanks to some very precise lasers.

The post Refining the clock’s second takes time—and lasers appeared first on Popular Science.

]]>

As technology progresses, so does our ability to more precisely delineate the passage of time. Since the second was redefined in terms of them in 1967, the standard bearers of accurate timekeeping have been atomic clocks, whose caesium-133 atoms’ oscillations serve as the reference point for a single “second.” But as a new paper published in Nature earlier this month reminds us, atomic clocks are so literally and metaphorically yesterday’s news.

According to the publication’s writeup yesterday, a group of researchers at the University of Science and Technology of China in Hefei recently synced optical clocks located 113 kilometers (about 70.2 miles) apart using precise pulses of laser light—roughly seven times farther than the previous record. The milestone represents a significant step forward for metrologists, the scientists who study measurement, in their push to redefine the second by the end of the decade. If they succeed, the new standard could be an estimated 100 times more accurate than the existing atomic-clock definition.

[Related: What the heck is a time crystal?]

Unlike atomic clocks, which count caesium’s microwave-frequency oscillations, optical clocks rely on the much higher-frequency oscillations of atoms such as strontium and ytterbium to measure time. To compare such clocks, metrologists need to transmit readings between different continents, and because satellites are required to accomplish this, our atmosphere’s occluding effects need to be addressed to ensure as accurate a measurement as possible. These latest advances using lasers offer a major step toward bypassing those hurdles.

There are a bunch of potential optical clock benefits apart from simply getting an even more nitty-gritty second. According to Nature, researchers will be able to more accurately test the general theory of relativity, which states that time passes slower in regions with higher gravitational pull, i.e. lower altitudes. Optical clocks’ ticking “could even reveal subtle changes in gravitational fields caused by the movement of masses — for example by shifting tectonic plates,” explains the writeup.
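
That sensitivity is easy to put into numbers. To first order, a clock raised by height h in Earth’s gravity ticks faster by a fraction g·h/c². A quick illustration in Python (standard relativity arithmetic, not the researchers’ code):

```python
G_SURFACE = 9.81   # Earth's surface gravity, m/s^2
C = 2.998e8        # speed of light, m/s

def fractional_rate_shift(height_m):
    """First-order gravitational time dilation between two clock heights."""
    return G_SURFACE * height_m / C**2

print(f"{fractional_rate_shift(0.01):.2e}")  # 1 cm lift: ~1.1e-18
print(f"{fractional_rate_shift(1000):.2e}")  # 1 km of altitude: ~1.1e-13
```

A one-centimeter lift shifts a clock’s rate by about one part in 10^18, which is right around the resolution the best optical clocks are approaching, hence the prospect of tracking shifting masses by their gravitational signature.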

[Related: Best sunrise clocks of 2022.]

Researchers still have a lot of work ahead of them before they can confidently reboot the second. In particular, sending signals to orbiting satellites—though they sit at roughly the same distance the Chinese team just bridged—must take other factors into consideration, particularly the satellites’ orbital speeds. For this, metrologists will turn to recent advances in a whole other field: quantum-communications satellites.


]]>
Geologists are searching for when the Earth took its first breath https://www.popsci.com/science/earths-first-breath/ Fri, 14 Oct 2022 20:21:42 +0000 https://www.popsci.com/?p=478159
Volcano belching lava and gas above ocean to represent Great Oxygenation Event
At first the Earth's atmosphere was filled with helium and volcanic emissions. Then it slowly got doses of oxygen from the oceans and tectonic activity. Deposit Photos

The planet's early oxygenation events were more like rollercoaster rides than spikes.

The post Geologists are searching for when the Earth took its first breath appeared first on Popular Science.

]]>

Many eons ago, the Earth was a vastly different place from the home we know. A great supercontinent called Rodinia was fragmenting into shards with faintly familiar names like Laurentia, Baltica, and Gondwanaland. For a time, Earth was covered, in its entirety, by sheets of ice. Life was barely clinging to this drastically changing world.

All this came from the chapter of our planet’s history that scientists today have titled the Neoproterozoic Era, which lasted from roughly 1 billion to 540 million years ago. The long stretches of time within its stony pages were a very distant prelude to our world today: a time when the first animals stirred to life, evolving from protists in ancient seas.

Just as humans and their fellow animals do today, these ancient precursors would have needed oxygen to live. But where did it come from, and when? We still don’t have firm answers. But experts have developed a blurry snapshot of how oxygen built up in the Neoproterozoic, published today in the journal Science Advances. And that picture is a bumpy ride, filled with periods of oxygen entering the atmosphere before disappearing again, on repeat, in cycles that lasted tens of millions of years.

To look that far back, you have to throw much of what we take for granted about the modern world right out the window. “As you go further back in time, the more alien of a world Earth becomes,” says Alexander Krause, a geologist at University College London in the United Kingdom, and one of the paper’s authors.

[Related: Here’s how life on Earth might have formed out of thin air and water]

Indeed, after the Earth formed, its early atmosphere was a medley of gases burped out by volcanic eruptions. Over several billion years, they coated our planet with a stew of noxious methane, hydrogen sulfide, carbon dioxide, and water vapor.

That would change in time. We know that some 2.3 billion years ago, microorganisms called cyanobacteria created a windfall of oxygen through photosynthesis. Scientists call these first drops of the gas, creatively, the Great Oxygenation Event. But despite its grandiose name, the juncture only brought our atmosphere’s oxygen to at most a small fraction of today’s levels. 

What happened between then and now is still a murky question. Many experts think that there was another oxygenation event about 400 million years ago in the Paleozoic Era, just as animals were starting to crawl out of the ocean and onto land. Another camp, including the authors of this new research, think there was a third event, sometime around 700 million years ago in the Neoproterozoic. But no one knows for sure if oxygen gradually increased over time, or if it fluctuated wildly. 

That’s important for geologists to know, because atmospheric oxygen is involved in virtually every process on Earth’s surface. Even if early life mostly lived in the sea, the upper levels of the ocean and the atmosphere constantly exchange gases.

To learn more, Krause and his collaborators simulated the atmosphere from 1.5 billion years ago until today—and how oxygen levels in the air fluctuated over that span. Though they didn’t have the technology to take a whiff of billion-year-old air, there are a few fingerprints geologists can use to reconstruct what the ancient atmosphere might have looked like. By probing sedimentary rocks from that era, they’re able to measure the carbon and sulfur isotopes within, which rely on oxygen in the atmosphere to form.

Additionally, as the planet’s tectonic plates move, oxygen buried deep within the mantle can emerge and bubble up into the air through a process known as tectonic degassing. Using information on tectonic activity from the relevant eras, Krause and his colleagues previously estimated the history of degassing over time.

By putting those scraps of evidence together, the team came up with a projection of how oxygen levels wavered in the air until the present day. It’s not the first time scientists have tried to make such a model, but according to Krause, it’s the first time anyone has tried it over a billion-year timescale. “Others have only reconstructed it for a few tens of millions of years,” Krause says.
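
To picture what such a reconstruction does, here is a deliberately toy box model in Python, not the authors’ actual simulation; every coefficient below is invented for illustration. An oxygen source from burial and tectonic degassing waxes and wanes on a 200-million-year cycle against a sink that strengthens as oxygen accumulates, producing the kinds of rises and plunges the study describes:

import numpy as np

dt = 1.0                          # timestep, millions of years
t = np.arange(0, 1000, dt)        # a billion-year run
o2 = np.zeros_like(t)
o2[0] = 0.01                      # start near post-Great Oxygenation levels

for i in range(1, len(t)):
    source = 0.01 * (1 + np.sin(2 * np.pi * t[i] / 200))  # tectonic forcing
    sink = 0.04 * o2[i - 1]                               # O2-dependent oxidation
    o2[i] = max(o2[i - 1] + (source - sink) * dt, 0.0)

print(f"peak O2: {o2.max():.2f} of modern; trough: {o2.min():.2f}")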

He and his colleagues found that atmospheric oxygen levels didn’t follow a straight line over the Earth’s history. Instead, imagine it like an oxygen roller coaster. Across 100-million-year stretches or so, oxygen levels rose to around 50 percent of modern levels, and then plummeted again. The Neoproterozoic alone saw five such peaks.

Only after 540 million years ago, in the Paleozoic Era, did the atmosphere really start to fill up. Finally, close to 350 million years ago, oxygen reached something close to current-day levels. That increase coincided with the great burst of life’s diversity known as the Cambrian Explosion. Since then, while oxygen levels have continued to fluctuate, they’ve never dropped below around 60 percent of the present.

“It’s an interesting paper,” says Maxwell Lechte, a geologist at McGill University in Montréal, who wasn’t involved in the research. “It’s probably one of the big contentious discussion points of the last 10 years or so” in the study of Earth’s distant past.

[Related: Enjoy breathing oxygen? Thank the moon.]

It’s important to note, however, that the data set used for the simulation was incomplete. “There’s still a lot of rock out there that hasn’t been looked at,” says Lechte. “As more studies come out, they can probably update the model, and it would potentially change the outputs significantly.”

The obvious question, then, is how oxygen trends left ripple effects on the evolution of life. After all, it’s during that third possible oxygenation event that protists began to diversify and fan out into the very first animals—multicellular creatures that required oxygen to live. Paleontologists have found an abundance of fossils that date to the very end of the era, including a contested 890-million-year-old sponge.

Those animals might have developed and thrived in periods when oxygen levels were sufficiently high, like the flourishing Cambrian Explosion. Meanwhile, drops in oxygen levels might have coincided with great die-offs. 

Astronomers might take note of this work, too. Any oxygenation answers have serious implications for what we might find on distant Earth-like exoplanets. If these geologists are correct, then it’s evidence that Earth’s history is not linear, but rather bumpy, twisted, and sometimes violent. “These questions that this paper deals with represent a fundamental gap in our understanding of how our planet actually operates,” says Lechte.

These islanders live and thrive alongside lava https://www.popsci.com/science/living-alongside-volcanoes-cris-toala-olivares/ Thu, 06 Oct 2022 11:00:00 +0000 https://www.popsci.com/?p=475478
People taking photos of glowing lava on Cape Verde
The 2014 eruption of the Fogo volcano ate up 75 percent of the surrounding villages and 25 percent of the farmland. Still residents returned. Lannoo Publishers/Cris Toala Olivares

Photographer Cris Toala Olivares visits communities who've built a relationship with one of nature's most terrifying forces.

Excerpt and photographs from Living with Volcanoes by Cris Toala Olivares. Copyright © 2022 by Cris Toala Olivares. Reprinted by permission of Lannoo Publishers.

Visiting the island of Fogo in Cape Verde in 2014 was a rare chance to meet people who actually live in the crater of a volcano, side by side with lava. I wanted to see this with my own eyes, despite the difficulties I had reaching the island, including a six-hour ride in a cargo boat that left many fellow travellers seasick.

While I was there, the Cha das Caldeiras community in the crater was being evacuated by authorities as the lava flow from an eruption engulfed their land and houses. Despite the dangers, the people were passionately trying to return to their homes due to the connection they feel with the place they are from. They were attempting to force their way back in, saying: “I was born with lava, and I will die with lava.”

Living With Volcanoes book cover
Courtesy of Lannoo Publishers

I met and travelled with volcano guide Manuel during my visit. Like many residents of the crater, he was a confident character, strongly attached to his way of life. Most people would move away from the lava, but he was desperate to remain and help his friends and neighbors. While I was with him, I felt safe; he understood the lava tracks and knew where to walk and what to avoid.

When I was in the crater, I experienced how it is to live in this environment: it is like being in the volcano’s womb. You feel heat all around you, as if you are in an oven, and there is a comforting circulation of warm and dry winds. This was also the first time I saw flowing rivers of bright red lava. The people here have everything they need, and they know how to work with the nature surrounding them. They also produce food for all of Cape Verde on the volcanic soils of their farms, including beans, fruit, and wine. Everyone lives close by each other, many in the same house. Due to the risks of the lava, they know the importance of cooperation and solidarity.

I was struck by the loyalty and the bond these people have to their traditions and the life they know. In a world where many others like to move around and lifestyles change so fast, it was inspiring to see people wanting to hold onto their way of doing things no matter what.

Lava flowing from Fogo volcano through villages at night
River of lava flowing between the houses, from the upper village of Portela toward the lower village of Bangaeira in Chã das Caldeiras, Ilha do Fogo, Cape Verde. The main cone last erupted in 1675, causing mass emigration from the island. In 1847 an eruption followed by earthquakes killed several people. The third eruption was dated to 1951. Forty-four years later, on the night of April 2, 1995, another vent erupted, and residents of Chã das Caldeiras were evacuated. Picture taken near the vents at Pico do Fogo on December 8, 2014. Lannoo Publishers/Cris Toala Olivares
Black lava and ash covering an island village at sunrise
Sunrise overlooking the massive destruction the volcano caused 16 days after the first eruption, burying the second village of Bangaeira in lava. Lannoo Publishers/Cris Toala Olivares
Two Fogo family members moving their belongings as a volcanic cone erupts in the distance
Overall, the island has a population of about 48,000 people. Its name, Fogo, means “fire” in Portuguese. Lannoo Publishers/Cris Toala Olivares
Tree with red leaves growing in ash of Fogo volcano
Like the people, many of the plants on Fogo find benefits in the volcanic ash. Trees, shrubs, and grape vines all grow beautifully in the aftermath of eruptions. Lannoo Publishers/Cris Toala Olivares

Buy Living With Volcanoes by Cris Toala Olivares here.

Quantum entanglement theorists win Nobel Prize for loophole-busting experiments https://www.popsci.com/science/nobel-prize-physics-2022-winners/ Tue, 04 Oct 2022 18:00:00 +0000 https://www.popsci.com/?p=474805
Nobel Prize Physics 2022 winners Alain Aspect, John F. Clauser and Anton Zeilinger in gold and black illustration
(From left) Alain Aspect, John F. Clauser, and Anton Zeilinger. Ill. Niklas Elmehed © Nobel Prize Outreach

A concept Einstein once called 'spooky action at a distance' earns a major scientific distinction.

After awarding last year’s physics prize to climate modelers, the Nobel Committee recognized a trio of quantum physicists this year. Earlier today, it announced John F. Clauser, Alain Aspect, and Anton Zeilinger as the winners of the 2022 Nobel Prize in Physics for their independent contributions to understanding quantum entanglement.

Quantum mechanics is a relatively young arena of physics focused on the mysterious atomic and subatomic properties of particles. Much of the research dwells on individual particles and their reactions; however, quantum theory holds that two or more particles (photons, say) can share a single state even while keeping their distance from each other. If so, measuring the first particle tells an experimenter what measurements of the second, third, or fourth will show.

[Related: Nobel Prize in Medicine awarded to scientist who sequenced the Neanderthal genome.]

The phenomenon, called quantum entanglement, could hold answers to how energy flows through the universe and how information can travel over isolated networks. But some detractors wondered whether the correlated states were simply coincidental, or born of hidden variables baked into the particles from the start. Albert Einstein himself was skeptical of the explanation, calling it “spooky action at a distance” and a paradox in a letter to a colleague.

That’s where Clauser, Aspect, and Zeilinger come in. All three designed experiments around the mathematical tests known as Bell inequalities, closing potential loopholes in how entanglement is measured. Clauser, an independent research physicist based in California, tested the polarization of photons emitted by lit-up calcium atoms with the help of a graduate student in 1972. His measurements matched quantum theory’s predictions, but he worried that the way he produced the particles still left room for other correlations.
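
The arithmetic behind such a test is surprisingly compact. For polarization-entangled photons, quantum mechanics predicts a correlation of E = cos(2(a − b)) between analyzers set at angles a and b, and a combination of four settings (the CHSH version of Bell’s inequality) cannot exceed 2 in any local hidden-variable theory. A minimal sketch using standard textbook angles, not the experimenters’ exact settings:

import math

# Quantum prediction for the polarization correlation of an entangled pair
# measured at analyzer angles a and b (standard Bell-state result).
def E(a, b):
    return math.cos(2 * (a - b))

# Canonical CHSH angles: hidden variables cap S at 2, while entanglement
# reaches 2*sqrt(2) ~ 2.83, violating the inequality.
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, 3 * math.pi / 8
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"S = {S:.3f} (classical limit: 2)")   # -> 2.828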

In response, French physicist Alain Aspect recreated the experiment in a way that detected the photons and their shared states much better. His results, the Nobel Committee stated, “closed an important loophole and provided a very clear result: quantum mechanics is correct and there are no hidden variables.”

[Related: NASA is launching a new quantum entanglement experiment in space.]

While Clauser and Aspect looked at entanglement in pure particle physics, Zeilinger expanded on it with the emerging fields of computation and encryption. The professor emeritus at the University of Vienna fired lasers at crystals to create mirrored pairs of photons, and measured them in various configurations to compare their properties. He also tied in data from cosmic radiation to ensure that signals from outer space weren’t influencing the particles. His work set the stage for technology’s adoption of quantum mechanics, and has since been applied to transistors, satellites, optical fibers, and IBM’s quantum computers.

The Institute of Science and Technology Austria issued a statement this morning congratulating Zeilinger, a former vice president in the group, and his fellow Nobel Prize recipients for their advancements. “It was the extraordinary work of Aspect, Clauser, and Zeilinger that translated the revolutionary theory of quantum physics into experiments,” they wrote. “Their demonstrations uncovered profound and mind-boggling properties of our natural world. Violations of the so-called Bell inequality continue to challenge our most profound intuitions about reality and causality. By exploring quantum states experimentally, driven only by curiosity, a range of new phenomena was discovered: quantum teleportation, many-particle and higher-order entanglements, and the technological prospects for quantum cryptography and quantum computation.”

Here’s how life on Earth might have formed out of thin air and water https://www.popsci.com/science/water-peptide-life-earth/ Tue, 04 Oct 2022 15:30:00 +0000 https://www.popsci.com/?p=474634
Water droplets rising from Iceland's Skogafoss waterfall.
This is the first display of simple amino acids forming peptides in droplets of water. Deposit Photos

When droplets of water react with the air, life-starting things may happen.

How life on Earth arose remains a deep existential and scientific mystery. It’s long been theorized that our planet’s plentiful oceans could hold the key to the secret. A new study from scientists at Purdue University could advance that idea one step further.

The paper, published on October 3 in the Proceedings of the National Academy of Sciences (PNAS), looks at peptides: strings of amino acids that are tiny but essential building blocks of protein and of life itself. The authors found that peptides can spontaneously form in droplets of water during the quick reactions that happen where water meets the atmosphere, such as when a waterfall crashes down onto rock and the spray is lifted into the air. It’s possible that this same action was underway about four billion years ago, when the Earth was a lifeless, volcanic, watery, molten rock-filled planet and life first began.

“This is essentially the chemistry behind the origin of life,” Graham Cooks, an author of the study and professor of analytical chemistry at Purdue, said in a press release. “This is the first demonstration that primordial molecules, simple amino acids, spontaneously form peptides, the building blocks of life, in droplets of pure water. This is a dramatic discovery.”

[Related: A primer on the primal origins of humans on Earth.]

In the study, the authors write that this discovery provides “a plausible route for the formation of the first biopolymers,” or the complex structures produced by living things. Scientists have been chipping away at the goal of understanding how this works for decades, since decoding the secret of how (and even why) life arose on Earth can help scientists better search for life on other planets, or even moons in our galaxy and beyond.

Understanding this water-based chemistry circles back to the proteins that created life on Earth itself. Billions of years ago, the raw amino acids that built life are believed to have been delivered to Earth by meteorites. These amino acids reacted and clung together to form peptides, the building blocks of proteins and, eventually, life itself. However, a water molecule must be lost when amino acids link up for peptides to form. That’s not easy to do on a planet that is mostly covered in water. Basically, for life to form, it needs water, but also the loss of some water.

Cooks explained this “water paradox” to VICE: “The water paradox is the contradiction between (i) the very considerable evidence that the chemical reactions leading to life occurred in the prebiotic ocean and (ii) the thermodynamic constraint against exactly these (water loss) reactions occurring in water. Proteins are formed from amino acids by loss of water,” and “loss of water in water will not occur because the process will be reversed by the water (thermodynamically forbidden).”

The new study offers a rare glimpse into the Earth’s early years, when nonliving compounds combined to form living things. This process of nonliving matter giving rise to life is called abiogenesis, and it is still not completely clear how it works. Since peptides form the basis of proteins (and other biomolecules that can self-replicate), the creation of peptides is a crucial step in abiogenesis.

[Related: Comets Could Have Kickstarted Life On Earth And Other Worlds.]

Cooks and his team demonstrated that peptides can readily form in the kinds of chemical environments that were present on Earth billions of years ago. A key aspect, however, is the size of the tiny droplets flying through the air or sliding down rocks, interacting with the air and forming quick chemical reactions. “The rates of reactions in droplets are anywhere from a hundred to a million times faster than the same chemicals reacting in bulk solution,” said Cooks.
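
One commonly cited reason droplet chemistry runs so fast is simple geometry: a sphere’s surface-to-volume ratio scales as 3/r, so the smaller the droplet, the larger the share of its molecules sitting at the reactive air-water interface. A quick illustration in Python; this is background intuition, not a calculation from the paper:

# Surface area (4*pi*r^2) over volume (4/3*pi*r^3) simplifies to 3/r.
for radius_um in (1, 10, 100, 10_000):        # from microdroplet to raindrop
    r_m = radius_um * 1e-6
    print(f"r = {radius_um:>6} um -> surface/volume = {3 / r_m:.1e} per meter")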

These speedy chemical reactions do not require a catalyst to get started, which is what would have made the chemistry feasible on the early Earth. The team used “droplet fusion” experiments, which simulate how water droplets collide in the air, to reconstruct how peptides could plausibly have formed. Understanding the chemical synthesis at play when amino acids build themselves into proteins could also help synthetic chemists speed up the reactions critical to creating new drugs and therapeutic treatments for diseases.

“If you walk through an academic campus at night, the buildings with the lights on are where synthetic chemists are working,” Cooks said. “Their experiments are so slow that they run for days or weeks at a time. This isn’t necessary, and using droplet chemistry, we have built an apparatus, which is being used at Purdue now, to speed up the synthesis of novel chemicals and potential new drugs.” 

Europe’s energy crisis could shut down the Large Hadron Collider https://www.popsci.com/science/europe-gas-crisis-cern/ Mon, 26 Sep 2022 21:00:00 +0000 https://www.popsci.com/?p=472868
Large Hadron Collider experiment view with CERN staff in a hard hat standing near during the Europe gas crisis
Large Hadron Collider experiments like beauty might be put on ice for a few months, or even a year. CERN

In light of Russia's war in Ukraine, CERN officials are considering the energy costs of particle physics experiments.

Europe is suffering an energy crisis. The fallout from the invasion of Ukraine, with the Russian government choking off gas supplies, has sent the continent’s heating and electricity prices soaring.

In the heart of Europe, along the French-Swiss border, the particle physics laboratory at CERN is facing the same plight. This month, it’s been reported that CERN officials are drawing up plans to limit or even shut down the recently rebooted Large Hadron Collider (LHC).

If the LHC, the largest and most expensive collider in the world, does shut down for a short stint, it wouldn’t be out of the ordinary for particle accelerator research. But if it has to go into hibernation for a longer period, complications might arise.

[Related: The green revolution is coming for power-hungry particle accelerators]

Some say that CERN uses as much electricity as a small city, and there’s some truth in that. By the group’s own admission, its facility consumes about one-third as much electricity in a year as nearby Geneva, Switzerland. The exact numbers vary from month to month and year to year, but the lab’s particle accelerators account for around 90 percent of CERN’s electric bill.
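
For a rough sense of scale, CERN’s annual draw in peak operating years is widely reported at about 1.3 terawatt-hours, a figure assumed here since the article itself only gives proportions:

# Back-of-envelope split of CERN's reported annual consumption.
cern_annual_twh = 1.3                  # assumed, widely reported figure
accelerator_share = 0.90               # per the article
print(f"accelerators: ~{cern_annual_twh * accelerator_share:.2f} TWh/yr")
print(f"implied Geneva usage: ~{cern_annual_twh * 3:.1f} TWh/yr")  # CERN ~ 1/3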

For an observer on the ground, it’s very easy to wonder why so much energy is going into arcane physics experiments involving subatomic particles, plasma, and dark matter. “Given the current context and as part of its social responsibility, CERN is drawing up a plan to reduce its energy consumption this winter,” Maïlys Nicolet, a spokesperson for the group, wrote in a press statement.

That said, CERN doesn’t have the same utility concerns as the everyday European: its energy strategy is already somewhat sustainable. The facility draws its power from the French grid, which sources more than two-thirds of its juice from nuclear fission—the highest share of any country in the world. Not only does that drastically reduce the LHC’s carbon footprint, it also makes it far less reliant on imported fossil fuels.

But the French grid has another quirk: Unlike much of Europe, which relies on gas to heat its homes, homes in France often use electric heaters. As a result, local power bills can double during the cold months. Right now, 32 of the country’s 56 nuclear reactors are down for maintenance or repairs. The French government plans to bolster its grid against the energy crisis by switching most of them back on by winter. 

[Related: Can Europe swap Russian energy with nuclear power?]

But if that doesn’t happen, CERN might be facing a power supply shortage. Even if the research giant stretched its budget to pay for power, there just might not be enough of it, depending on how France’s reactors fare. “For this autumn, it is not a price issue, it’s an availability issue,” Serge Claudet, chair of CERN’s energy management panel, told Science.

Hibernation isn’t exactly out of the ordinary for LHC, though. In the past, CERN has shut down the particle accelerator for maintenance during the winter. This year is no exception: The collider’s stewards plan to mothball it from November until March. If Europe’s energy crisis continues into 2023, the LHC pause could last well into the warmer months, if not longer.

CERN managers are exploring their options, according to the facility’s spokesperson. The French government might order the LHC not to run at times of peak electric demand, such as mornings or evenings. Alternatively, to keep its flagship running, CERN might try to shut off some of the smaller accelerators that share the site.

But not all particle physicists are on board with prioritizing energy for a single machine. “I don’t think you could justify running it but switching off everything else,” says Kristin Lohwasser, a particle physicist at the University of Sheffield in the United Kingdom and a collaborator on ATLAS, one of the LHC’s experiments.

On the other hand, the LHC has more to lose by going dark for an indefinite amount of time. If it has to power down for a year or more, the collider’s equipment, such as the detectors used to watch collisions at very small scales, might start to degrade. “This is why no one would blankly advertise to switch off and just wait five years,” says Lohwasser. It also takes a fair amount of energy to keep the LHC in a dormant state.

Even if CERN’s accelerators aren’t running, the particle physicists around the world sifting through the data will still have plenty to work on. Experiments in the field produce tons of results: positions, velocities, and countless mysterious bits of matter from thousands of collisions. Experts can still find subatomic artifacts hidden in the measurements as much as a decade after they’re logged. The flow of physics studies almost certainly won’t cease on account of an energy crisis.

For now, the decision on powering the LHC’s third run of experiments remains up in the air. This week CERN officials will present a plan on how to proceed to the agency’s governing authority. That proposal will, in turn, be presented to the French and Swiss governments for consultation. Only then will the final decision be made public.

“So far, I do not necessarily see a big concern from [physicists] about these plans,” says Lohwasser. If CERN must take a back seat to larger concerns, then many in the scientific community will accept that.

After the big bang, light and electricity shaped the early universe https://www.popsci.com/science/big-bang-galaxy-formation-james-webb-space-telescope/ Tue, 20 Sep 2022 16:18:00 +0000 https://www.popsci.com/?p=471170
Deepest image of space with twinkling stars captured by James Webb Space Telescope
As the James Webb Space Telescope peers far into space, it could dredge up clues to how early universes were shaped by atomic interactions. NASA, ESA, CSA, STScI

Free-roaming atoms charged across newly formed galaxies, bringing us from cosmic dark to dawn.

When the first stars and galaxies formed, they didn’t just illuminate the cosmos. These bright structures also fundamentally changed the chemistry of the universe. 

During that time, the hydrogen gas that makes up most of the material in the space between galaxies today became electrically charged. That epoch of reionization, as it’s called, was “one of the last major changes in the universe,” says Brant Robertson, who leads the Computational Astrophysics Research Group at the University of California, Santa Cruz. It was the dawn of the universe as we know it.

But scientists haven’t been able to observe in detail what occurred during the epoch of reionization—until now. NASA’s newly active James Webb Space Telescope offers eyes that can pierce the veil on this formative time. Astrophysicists like Robertson are already poring over JWST data looking for answers to fundamental questions about that electric cosmic dawn, and what it can tell us about the dynamics that shape the universe today.

What happened after the big bang?

The epoch of reionization wasn’t the first time that the universe was filled with electrically charged particles. Right after the big bang, the cosmos was dark and hot; there were no stars, galaxies, or planets. Instead, electrons and protons roamed free, as it was simply too hot for them to pair up.

But as the universe cooled down, the protons began to capture the electrons to form the first atoms—hydrogen, specifically—in a period called “recombination,” explains Anne Hutter, a postdoctoral researcher at the Cosmic Dawn Center, a research collaboration between the University of Copenhagen and the National Space Institute at the Technical University of Denmark. That process neutralized the charged material.

The universe’s material was spread out relatively evenly at that time, with very little structure. But there were small fluctuations in density, and over millions of years those variations drew early atoms together to eventually form stars. The gravity of the early stars drew in more gases, particles, and other components, which coalesced into more stars and then galaxies. 

[Related: How old is the universe? Our answer keeps getting better.]

Once the beginnings of galaxies lit up, the cosmic dark age, as astrophysicists call it, was over. These stellar bodies were especially bright, Robertson says: They were more massive than our sun and burned hot, shining in the ultraviolet spectrum.

“Ultraviolet light, if it’s energetic enough, can actually ionize hydrogen,” Robertson says. All it takes is a single, especially energetic particle of light, called a photon, to strip away the electron on a hydrogen atom and leave it with a positive electrical charge. 
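
That threshold is easy to check: ionizing ground-state hydrogen takes 13.6 electron-volts, and a photon’s wavelength follows from lambda = hc/E. A quick Python calculation showing the cutoff lands deep in the ultraviolet:

# Wavelength threshold for a photon to ionize ground-state hydrogen.
H_PLANCK = 6.626e-34    # Planck constant, J*s
C_LIGHT = 2.998e8       # speed of light, m/s
EV = 1.602e-19          # joules per electron-volt

e_ionize = 13.6 * EV
wavelength_nm = H_PLANCK * C_LIGHT / e_ionize * 1e9
print(f"ionization threshold: {wavelength_nm:.1f} nm")   # ~91.2 nm, far-UV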

As the galaxies started coming together, they would first ionize the regions around them, leaving bubbles of charged hydrogen gas across the universe. As the light-emitting clusters grew, more stars formed to make them even brighter and full of photons. Additional new galaxies began to develop, too. As they became luminous, the ionized bubbles began to overlap. That allowed a photon from one galaxy to “travel a much larger distance because it didn’t run into a hydrogen atom as it crossed through this network,” Robertson explains.

At that point, the rest of the intergalactic medium—even in regions far from galaxies—quickly became ionized. That’s when the epoch of reionization ended and the universe as we know it began.

“This was the last time whole properties of the universe were changed,” Robertson says. “It also was the first time that galaxies actually had an impact beyond their local region.”

The James Webb Space Telescope’s hunt for ionized clues

With all of the hydrogen between galaxies charged, the universe entered a new phase of formation. This ionization had a ripple effect on galaxy formation: Any star-studded structures that formed after the cosmic dawn were likely affected. 

“If you ionize a gas, you also heat it up,” explains Hutter. Remember, high temperatures make it difficult for material to coalesce and form new stars and planets—and can even destroy gases that are already present. As a result, small galaxies forming in an ionized region might have trouble gaining enough gas to make more stars. “That really has an impact on how many stars the galaxies are forming,” Hutter says. “It affects their entire history.”

Although scientists have a sense of the broad strokes of the story of reionization, some big questions remain. For instance, while they know roughly that the epoch ended about a billion years after the big bang, they’re not quite sure when reionization—and therefore the first galaxy formation—began. 

That’s where JWST comes in. The new space telescope is designed to search out the oldest parts of the universe, invisible to human eyes, and gather data on the first glimmers of starlight that ionized the intergalactic medium. Astronomers largely detect celestial objects by the radiation they emit. The objects farthest from us tend to appear in the infrared, because the expansion of the universe stretches their light to longer wavelengths over a journey that can take billions of years to reach JWST’s detectors. 
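
The stretching follows a simple rule: an observed wavelength equals the emitted wavelength times (1 + z), where z is the object’s redshift. A short sketch showing how hydrogen’s brightest ultraviolet line, Lyman-alpha at 121.6 nanometers, lands squarely in JWST’s infrared range for illustrative early-galaxy redshifts (values chosen for demonstration, not taken from the article):

LYMAN_ALPHA_NM = 121.6                 # hydrogen's brightest UV emission line

for z in (6, 10, 13):                  # plausible early-galaxy redshifts
    observed_um = LYMAN_ALPHA_NM * (1 + z) / 1000
    print(f"z = {z:>2}: observed at {observed_um:.2f} micrometers (infrared)")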

[Related: Astronomers are already using James Webb Space Telescope data to hunt down cryptic galaxies]

That, in a nutshell, is how scientists are using JWST to peer at the first galaxies in the process of ionizing the universe. While older tools like the Hubble Space Telescope could spot the occasional early galaxy, the new space observatory can gather finer details to place the groups of stars in time.

“Now, we can very precisely work out how many galaxies were around, you know, 900 million years after the big bang, 800, 700, 600, all the way back to 300 million years after the big bang,” Robertson says. Using that information, astrophysicists can calculate how many ionizing photons were around at each age, and how the particles might have affected their surroundings.

Painting a picture of the cosmic dawn isn’t just about understanding the large-scale structure in the universe: It also explains when the elements that made us, like carbon and oxygen, became available as they formed inside the first stars. “[The question] really is,” Hutter says, “where do we come from?” 

Correction (September 21, 2022): The fluctuations in the early universe’s density took place over millions of years, not billions as previously written. This was an editing error.

Farmers accidentally created a flood-resistant ‘machine’ across Bangladesh https://www.popsci.com/environment/bangladesh-farmers-seasonal-floods/ Thu, 15 Sep 2022 18:00:00 +0000 https://www.popsci.com/?p=470227
Groundwater pumps like this one deliver water from below to farms in Bangladesh.
A groundwater pump delivers water from below a farm during the dry season in Bangladesh. M. Shamsudduha

Pumping water in the dry months makes the ground sponge-like for the wet season, a system called the Bengal Water Machine.

To control unpredictable water and stop floods, you might build a dam. To build a dam, you generally need hills and dales—geographic features to hold water in a reservoir. Which is why dams don’t fare well in Bangladesh, most of which is a flat floodplain that’s just a few feet above sea level.

Instead, in a happy accident, millions of Bangladeshi farmers have managed to create a flood control system of their very own, taking advantage of the region’s wet-and-dry seasonal climate. As farmers pump water from the ground in the dry season, they free up space for water to flood in during the wet season, hydrogeologists found. 

Researchers published the system they’d uncovered in the journal Science on September 15. And authorities could use the findings to make farming more sustainable, writes Aditi Mukherji, a researcher in Delhi for the International Water Management Institute who wasn’t involved in the paper, in a companion article in Science.

“No one really intended this to happen, because farmers didn’t have the knowledge when they started pumping,” says Mohammad Shamsudduha, a geoscientist at University College London in the UK and one of the paper’s authors.

[Related: What is a flash flood?]

Most of Bangladesh lies in the largest river delta on the planet, where the Rivers Ganges and Brahmaputra fan out into the Bay of Bengal. It’s an expanse of lush floodplains and emerald forests, blanketing some of the most fertile soil in the world. Indeed, that soil supports a population density nearly thrice that of New Jersey, the densest US state.

Like much of South Asia, Bangladesh’s climate revolves around the yearly monsoon. The monsoon rains support local animal and plant life and are vital to agriculture, too. But a heavy monsoon can cause devastating floods, as residents of northern Bangladesh experienced in June.

Yet Bangladesh’s warm climate means that farmers can grow crops, especially rice, in the dry season. To do so, farmers often irrigate their fields with water they draw up from the ground. Many small-scale farmers started doing so in the 1990s, when the Bangladeshi government loosened restrictions on importing diesel-powered pumps and made them more affordable. 

The authors of the new study wanted to examine whether pumping was depriving the ground of its water. That’s generally not very good, resulting in strained water supplies and the ground literally sinking (just ask Jakarta). They examined data from 465 government-controlled stations that monitor Bangladesh’s irrigation efforts across the country.

[Related: How climate change fed Pakistan’s devastating floods]

The situation was not so simple: In many parts of the country, groundwater wasn’t depleting at all.

It’s thanks to how rivers craft the delta. The Ganges and the Brahmaputra carry a wealth of silt and sediment from as far away as the Himalayas. As they fan out through the delta, they deposit those fine particles into the surrounding land. These sediments help make the delta’s soil as fertile as it is. 

This accumulation also results in loads of little pores in the ground. When the heavy rains come, instead of running off into the ocean or adding to runaway flooding, all that water can soak into the ground, where farmers can use it.

Where a dam’s reservoir is more like a bucket, Bangladesh is more like a sponge. During the dry season, farmers dry out the sponge. That gives it more room to absorb more water in the monsoon. And so forth, in an—ideally—self-sustaining cycle. Researchers call it the Bengal Water Machine. 
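
A toy annual cycle makes the idea concrete. The volumes below are invented for illustration (nothing here comes from the study’s data), but the loop captures the logic: dry-season pumping empties subsurface storage, and the monsoon refills it, with the refill representing floodwater captured rather than lost to runoff:

capacity_km3 = 10.0          # hypothetical usable aquifer storage
storage_km3 = capacity_km3   # start with a full "sponge"

for year in (1, 2, 3):
    storage_km3 -= 2.0                                # dry-season pumping
    recharge = min(3.0, capacity_km3 - storage_km3)   # monsoon refill, capped
    storage_km3 += recharge
    print(f"year {year}: captured {recharge:.1f} km^3 of monsoon floodwater")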

“The operation of the [Bengal Water Machine] was suspected by a small number of hydrogeologists within our research network but essentially unknown prior to this paper,” says Richard Taylor, a hydrogeologist at University College London in the UK, and another of the paper’s authors.

“If there was no pumping, then this would not have happened,” says Kazi Matin Uddin Ahmed, a hydrogeologist at the University of Dhaka in Bangladesh, and another of the paper’s authors. 

Storing water underground rather than behind a dam has a few advantages, Ahmed adds. The subsurface liquid is at less risk of evaporating into useless vapor. It doesn’t rewrite the region’s geography, and farmers can draw water from their own land, rather than relying on water shuttled in through irrigation channels.

The researchers believe that other “water machines” might fill fertile deltas elsewhere in the tropics with similar wet-and-dry climates. Southeast Asia might host a few, at the mouths of the Red River, the Mekong, and the Irrawaddy.

But an ominous question looms over the Bengal Water Machine: What happens as climate change reshapes the delta? Most crucially, a warming climate might intensify monsoons and change where they deliver their rains. “This is something we need to look into,” says Shamsudduha.

The Bengal Water Machine faces several other immediate challenges. In 2019, in response to overpumping concerns, the Bangladeshi government reintroduced restrictions on which farmers get to install a pump, which could make groundwater pumping more inaccessible. Additionally, many farmers use dirty diesel-powered pumps. (The government’s now encouraging farmers to switch to solar power.)

Also, keeping the Bengal Water Machine ship-shape means not using too much groundwater. Unfortunately, that’s already happening. Bangladesh’s west generally gets less rainfall than its east, and the results reflect that. The researchers noticed groundwater depletion in the west that wasn’t happening out east.

“There is a limit,” says Ahmed. “There has to be close monitoring of the system.”

Space diamonds sparkle from the wreckage of a crushed dwarf planet https://www.popsci.com/science/diamond-meteorite-crystal-structure/ Thu, 15 Sep 2022 12:34:20 +0000 https://www.popsci.com/?p=470022
Lonsdaleite diamond crystal structure under a microscope
A closeup of the lonsdaleite diamond's complex folded structure, which may add to its toughness. Andy Tomkins

Mysterious meteorite gems from the solar system's early days could help us design harder lab-grown diamonds.

Today, our solar system is fairly stable. There are eight planets (sorry, Pluto) that keep constant orbit around the sun, with little risk of being crushed by asteroids. But it wasn’t always that way.

Some 4.5 billion years ago, as the solar system was just forming, large chunks of rocks frequently collided with slow-growing dwarf planets. The results were often cataclysmic for both bodies, reducing them to debris that still pummels Earth today. But sometimes those violent collisions yielded the creation of something shiny and new—perhaps even diamonds.

That’s likely what happened when an asteroid smashed a dwarf planet into smithereens in those earliest days of the solar system, according to a new paper published this week in the journal Proceedings of the National Academy of Sciences. The collision was so violent, the authors say, that it triggered a chain of events that transformed graphite from the dwarf planet’s mantle into diamonds now found in meteorites.

The explosive process through which these space gems formed, the researchers say, might even inspire a method to make lab-grown diamonds that are tougher than the ones people mine.

“We always say a diamond is the hardest material. It’s natural, nothing we’ve been able to make in the lab is harder than diamond. Yet, there have been hints of research over the years that there are forms of diamond that appear to actually be harder than single-crystal diamonds. And that would be immensely useful,” says Laurence Garvie, a research scientist in the Center for Meteorite Studies at Arizona State University, who was not involved in the new research. “Here’s a hypothesis that may add a new understanding of how these materials are formed.” And such a possibility, he adds, is tantalizing for all kinds of industrial and consumer uses.  

[Related: Meteorites older than the solar system contain key ingredients for life]

On Earth, diamonds emerge when carbon deposits are subjected to high pressures and temperatures, typically from the geologic processes rumbling deep under the planet’s crust. But that explanation never made sense for a carbon-rich type of meteorite, called a ureilite, that’s mysteriously filled with space diamonds. It takes a fair amount of mass to exert enough pressure on the carbon, Garvie explains, much more than the dwarf planet these ancient rocks probably came from could have mustered. Instead, some meteoriticists have proposed that shock from an impact triggered the transformation.

But shock alone doesn’t completely explain the crystals in the ureilites, says Alan Salek, a physics researcher at the Royal Melbourne Institute of Technology and one of the authors on the new paper. For example, the meteorites’ diamonds are much larger than any created in laboratory experiments that mimicked the proposed conditions, he says. 

Furthermore, scientists have found inconsistencies in the ureilites’ composition. Some don’t appear to have any hints of diamonds. Others contain carbon crystals that look notably different from engagement ring stones: The structures have more folds, with atoms arranged hexagonally rather than cubically. That arrangement is thought to make the material harder.
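
One way to picture the hexagonal-versus-cubic difference is through stacking order: in ordinary cubic diamond, the carbon layers repeat ABCABC, while in hexagonal diamond the repeat is ABAB. A tiny Python illustration of the two patterns:

from itertools import cycle, islice

# Print the first dozen layers of each stacking sequence.
def stacking(pattern, layers=12):
    return "".join(islice(cycle(pattern), layers))

print("cubic diamond:     ", stacking("ABC"))   # ABCABCABCABC
print("hexagonal diamond: ", stacking("AB"))    # ABABABABABAB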

But as Andrew Tomkins, a geoscientist at Monash University who led the latest research, writes in an email to Popular Science, “of course everyone knows that diamond is very hard, so it should be impossible to fold.” 

After studying the atomic properties of the carbon in ureilites, Tomkins, Salek, and their colleagues devised a scenario they say can explain all of the gems’ quirks. The story goes that when an asteroid slammed into a dwarf planet in the active early solar system, it barreled deep into the ureilite parent body and triggered a sequence of events. 

Two meteorite researchers holding up a diamond sample on a slide tray in a lab
Andy Tomkins (left) and Alan Salek hold up a ureilite sample. RMIT University

The dwarf planet’s mantle contained folded graphite. Once the asteroid hit, the violent collision released pressure from the mantle, much like when you twist the lid off a soda bottle, Tomkins explains. This rapid decompression caused a bit of the mantle to melt and release fluids and gases, which then reacted with minerals. The activity forced the folded graphite to transform into the hexagonal crystals. Later, as the pressure and temperature dropped, regular cubic diamonds formed, too. 

The hexagonal structure of the crystals is still the subject of some controversy. Some scientists argue that the shape makes them a different kind of diamond known as lonsdaleite. The gem was first identified in 1967 in fragments of the Canyon Diablo meteorite from Arizona, and has since been found at other impact sites around the world. Others have suggested that the material is something like a snapshot of disordered diamond formation. Garvie and his colleagues have offered alternate explanations, such as diamonds with graphene-like intergrowths. But Salek and Tomkins say their new research definitively shows that the ureilite-based gems are indeed hexagonal diamonds and, therefore, should be classified as lonsdaleite.

[Related: Earth has more than 10,000 kinds of minerals]

However it’s defined, scientists tend to agree that this substance could have valuable properties. One attempt to recreate lonsdaleite indirectly measured the material to be 58 percent stronger than its cubic counterpart.  If the substance can be made artificially in a laboratory, Garvie says “the possibilities are endless,” describing potential uses for protective coatings on, say, an iPhone or a camera lens. Salek suggests creating saws and other cutting implements with blades so hard that they can’t get dull. 

The crystals Salek and Tomkins found, however, are just about 2 percent of the size of a human hair. So don’t expect to profess your love with a rare ureilite diamond anytime soon. But, Salek adds, “the hope is to mimic the process [from space] and make bigger ones.”

Scientists used lasers to make the coldest matter in the universe https://www.popsci.com/science/create-the-coldest-matter-in-the-universe/ Fri, 09 Sep 2022 17:00:00 +0000 https://www.popsci.com/?p=468770
The simulator uses up to 300,000 atoms, allowing physicists to directly observe how particles interact in quantum magnets whose complexity is beyond the reach of even the most powerful supercomputer.
The simulator uses up to 300,000 atoms, allowing physicists to directly observe how particles interact in quantum magnets whose complexity is beyond the reach of even the most powerful supercomputer. Image by Ella Maru Studio/Courtesy of K. Hazzard/Rice University

The atoms were chilled to within a billionth of a degree of absolute zero.

In a laboratory in Kyoto, Japan, researchers are working on some very cool experiments. A team of scientists from Kyoto University and Rice University in Houston, Texas has cooled matter to within a billionth of a degree of absolute zero (the temperature at which all motion stops), making it the coldest matter in the entire universe. The study was published in the September issue of Nature Physics, and “opens a portal to an unexplored realm of quantum magnetism,” according to Rice University.

“Unless an alien civilization is doing experiments like these right now, anytime this experiment is running at Kyoto University it is making the coldest fermions in the universe,” said Rice University professor Kaden Hazzard, corresponding theory author of the study, and member of the Rice Quantum Initiative, in a press release. “Fermions are not rare particles. They include things like electrons and are one of two types of particles that all matter is made of.”

Different colors represent the six possible spin states of each atom. Image by Ella Maru Studio/Courtesy of K. Hazzard/Rice University

The Kyoto team, led by study author Yoshiro Takahashi, used lasers to cool fermions (particles like protons, neutrons, and electrons whose spin quantum number is an odd half-integer like 1/2 or 3/2), in this case ytterbium atoms, to within about one-billionth of a degree of absolute zero. That’s roughly 3 billion times colder than interstellar space, which is still warmed by the cosmic microwave background (CMB), the afterglow of radiation from the Big Bang about 13.7 billion years ago. The coldest known region of space is the Boomerang Nebula, which has a temperature of one degree above absolute zero and lies 3,000 light-years from Earth.

[Related: How the most distant object ever made by humans is spending its dying days.]

Just like electrons and photons, atoms are subject to the laws of quantum mechanics, but their quantum behaviors only become noticeable when they are cooled to within a fraction of a degree of absolute zero. Lasers have been used for more than 25 years to cool atoms and study the quantum properties of ultracold matter.
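
The reason temperature matters so much is the thermal de Broglie wavelength, roughly the size of an atom’s quantum “fuzziness,” which grows as temperature falls. A minimal Python sketch for ytterbium-173, using standard physical constants (the nanokelvin value marks the regime described here, not a number taken from the paper):

import math

H = 6.626e-34             # Planck constant, J*s
KB = 1.381e-23            # Boltzmann constant, J/K
M_YB = 173 * 1.661e-27    # mass of ytterbium-173, kg

for T in (300, 1e-9):     # room temperature vs. ~1 nanokelvin
    lam = H / math.sqrt(2 * math.pi * M_YB * KB * T)
    print(f"T = {T:g} K -> de Broglie wavelength = {lam:.2e} m")

At room temperature the wavelength is a few picometers, far smaller than an atom; at a billionth of a kelvin it swells to micrometers, large enough for neighboring atoms in an optical lattice to overlap quantum mechanically.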

“The payoff of getting this cold is that the physics really changes,” Hazzard said. “The physics starts to become more quantum mechanical, and it lets you see new phenomena.”

In this experiment, lasers were used to cool the matter by stopping the movements of 300,000 ytterbium atoms within an optical lattice. The setup simulates the Hubbard model, a quantum physics model first proposed by theoretical physicist John Hubbard in 1963. Physicists use Hubbard models to investigate the magnetic and superconducting behavior of materials, especially those where interactions between electrons produce collective behavior.

This model allows atoms to show off their unusual quantum properties, which include collective behavior between electrons (a bit like a group of fans performing “the wave” at a football or soccer game) and superconductivity, a material’s ability to conduct electricity without losing energy.

“The thermometer they use in Kyoto is one of the important things provided by our theory,” said Hazzard. “Comparing their measurements to our calculations, we can determine the temperature. The record-setting temperature is achieved thanks to fun new physics that has to do with the very high symmetry of the system.”

[Related: Chicago now has a 124-mile quantum network. This is what it’s for.]

The Hubbard model simulated in Kyoto has special symmetry known as SU(N). The SU stands for special unitary group, which is a mathematical way of describing the symmetry. The N denotes the possible spin states of particles within the model.

The greater the value of N, the greater the model’s symmetry and the complexity of the magnetic behaviors it describes. Ytterbium atoms have six possible spin states, and the simulator in Kyoto is the first to reveal magnetic correlations in an SU(6) Hubbard model. Such calculations are beyond the reach of classical computers, according to the study.
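
The six isn’t arbitrary: a nucleus with spin I has N = 2I + 1 magnetic states, and fermionic ytterbium-173 carries nuclear spin 5/2. This is a standard atomic-physics fact, though the study itself isn’t quoted spelling out the arithmetic:

from fractions import Fraction

I = Fraction(5, 2)        # nuclear spin of ytterbium-173
N = 2 * I + 1             # number of available spin states
print(f"spin states: N = {N}")   # -> 6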

“That’s the real reason to do this experiment,” Hazzard said. “Because we’re dying to know the physics of this SU(N) Hubbard model.”

Graduate student in Hazzard’s research group and study co-author Eduardo Ibarra-García-Padilla added that the Hubbard model aims to capture the very basic ingredients needed for what makes a solid material a metal, insulator, magnet, or superconductor. “One of the fascinating questions that experiments can explore is the role of symmetry,” said Ibarra-García-Padilla. “To have the capability to engineer it in a laboratory is extraordinary. If we can understand this, it may guide us to making real materials with new, desired properties.”

The team is currently working on developing the first tools capable of measuring the behavior that arises a billionth of a degree above absolute zero.

“These systems are pretty exotic and special, but the hope is that by studying and understanding them, we can identify the key ingredients that need to be there in real materials,” concluded Hazzard.

Sustainable batteries could one day be made from crab shells https://www.popsci.com/science/crab-shell-green-batteries/ Thu, 01 Sep 2022 19:30:00 +0000 https://www.popsci.com/?p=467040
A bucket of crabs, who have a multi-purpose material called chitosan in their shells.
Crabs that we eat contain chitosan in their shells, which scientists are using to make batteries. Mark Stebnick via Pexels

A material in crab shells has been used to brew booze, dress wounds, and store energy.

There are those who say ours is the age of the battery. New and improved batteries, perhaps more than anything else, have made possible a world of mobile phones, smart devices, and blossoming electric vehicle fleets. Electrical grids powered by clean energy may soon depend on server-farm-sized battery projects with massive storage capacity.

But our batteries aren’t perfect. Even if they’ll one day underpin a world that’s sustainable, today they’re made from materials that aren’t. They rely on heavy metals or non-organic polymers that might take hundreds of years to degrade. That’s why battery disposal is such a tricky task.

Enter researchers from the University of Maryland and the University of Houston, who have made a battery from a promising alternative: crustacean shells. They’ve taken a biological material, easily sourced from the same crabs and squids you can eat, and crafted it into a partly biodegradable battery. They published their results in the journal Matter on September 1.

It’s not the first time batteries have been made from this stuff. But what makes the researchers’ work new is the design, according to Liangbing Hu, a materials scientist at the University of Maryland and one of the paper’s authors. 

A battery has three key components: two ends and a conductive filling, called an electrolyte. In short, charged particles crossing the electrolyte put out a steady flow of electric current. Without an electrolyte, a battery would just be a sitting husk of electric charge.

Today’s batteries use a whole rainbow of electrolytes, and few are things you’d particularly want to put in your mouth. A standard AA battery uses a paste of potassium hydroxide, a dangerously corrosive substance that makes throwing batteries in the trash a very bad idea. 

[Related: This lithium-ion battery kept going (and going and going) in the extreme cold]

The rechargeable batteries in your phone are a completely different sort of battery: lithium-ion batteries. Those batteries can power on for many years and usually rely on plastic-polymer-based electrolytes that aren’t quite as toxic, but they can still take centuries or even millennia to break down.

Batteries themselves, full of environmentally unfriendly materials, aren’t the greenest. They’re rarely sustainably made, either, reliant on rare earth mining. Even if batteries can last thousands of discharges and recharges, thousands more get binned every day.

So researchers are trawling through oceans of materials for a better alternative. In that, they’ve started to dredge up crustacean parts. From crabs and prawns and lobsters, battery-crafters can extract a material called chitosan. It’s a derivative of chitin, which makes up the hardened exoskeletons of crustaceans and insects, too. There’s plenty of chitin to go around, and a relatively simple chemical process is all that’s necessary to convert it into chitosan.

We already use chitosan for quite a few applications, most of which have little to do with batteries. Since the 1980s, farmers have sprinkled chitosan over their crops. It can boost plant growth and harden their defenses against fungal infestation. 

[Related: The race to close the EV battery recycling loop]

Away from the fields, chitosan can remove particles from liquids: Water purification plants use it to remove sediment and impurities from drinking water, and alcohol-makers use it to clarify their brew. Some bandages come dressed with chitosan that helps seal wounds.

You can sculpt things from chitosan gel, too. Because chitosan is biodegradable and non-toxic, it’s especially good for making things that must go into the human body. It’s entirely possible that hospitals of the future might use specialized 3D printers to carve chitosan into tissues and organs for transplants.

Now, researchers are seeking to put chitosan into batteries whose ends are made from zinc. Largely experimental today, these rechargeable batteries could one day form the backbone of an energy storage system.

The researchers at Maryland and Houston weren’t the first to think about making chitosan into batteries. Scientists around the world, from China to Italy to Malaysia to Iraqi Kurdistan, have been playing with crab-stuff for about a decade, spindling it into intricate webwork that charged particles could cross like adventurers.

The authors of the new work added zinc ions to that chitosan structure, which bolstered its physical strength. Combined with the zinc ends, the addition also boosted the battery’s effectiveness.

This design means that two-thirds of the battery is biodegradable; the researchers found that the electrolyte broke down completely within around five months. Compared to conventional electrolytes and their thousand-year lifespans in the landfill, Hu says, these have little downside. 

And although this design was made for those experimental zinc batteries, Hu sees no reason researchers can’t extend it to other sorts of batteries—including the one in your phone.

Now, Hu and his colleagues are pressing ahead with their work. One of their next steps, Hu says, is to expand their focus beyond the confines of the electrolyte—to the other parts of a battery. “We will put more attention to the design of a fully biodegradable battery,” he says.

Atoms are famously camera-shy. This dazzling custom rig can catch them. https://www.popsci.com/science/particle-physics-custom-camera/ Sun, 28 Aug 2022 17:00:00 +0000 https://www.popsci.com/?p=465661
MAGIS-100 vacuum for a Fermilab quantum physics experiment
When built, the MAGIS-100 atom interferometer will be the largest in the world. But it's still missing a key component: a detailed camera. Stanford University

The mirror-studded camera is designed to take glamor shots of quantum physics experiments.

In suburban Chicago, about 34 miles west of Lake Michigan, sits a hole in the ground that goes about 330 feet straight down. Long ago, scientists had the shaft drilled for a particle physics experiment that’s long vanished from this world. Now, in a few short years, they will reuse the shaft for a new project with the mystical name MAGIS-100.

When MAGIS-100 is complete, physicists plan to use it for detecting hidden treasures: dark matter, the mysterious invisible something that’s thought to make up much of the universe; and gravitational waves, ripples in space-time caused by cosmic shocks like black hole collisions. They hope to find traces of those elusive phenomena by watching the quantum signatures they leave behind on raindrop-sized clouds of strontium atoms.

But actually observing those atoms is trickier than you might expect. To pull off similar experiments, physicists have so far relied on cameras comparable to the ones on a smartphone. And while the technology might work fine for a sunset or a tasty-looking food shot, it limits what physicists can see on the atomic level.

[Related: It’s pretty hard to measure nothing, but these engineers are getting close]

Fortunately, some physicists may have an upgrade. A research team from several groups in Stanford, California, has created a unique camera contraption that relies on a dome of mirrors. The extra reflections help them see what light is entering the lens and tell what angle each patch of light is coming from. That, they hope, will let them peer into an atom cloud like never before.

Your mobile phone camera or DSLR doesn’t care where light travels from: It captures the intensity of the photons and the colors carried by their wavelengths, and little more. For taking photographs of your family, a city skyline, or the Grand Canyon, that’s all well and good. But for studying atoms, it leaves quite a bit to be desired. “You’re throwing away a lot of light,” says Murtaza Safdari, a physics graduate student at Stanford University and one of the creators.

Physicists want to preserve that information because it lets them paint a more complex, 3D picture of the object (or objects) they’re studying. And when it comes to the finicky analyses physicists like to do, the more information they can get in one shot, the quicker and better. 

One way to get that information is to set up multiple cameras, allowing them to snap pictures from multiple angles and stitch them together for a more detailed view. That can work great with, say, five cameras. But some physics experiments require such precise measurements that even a thousand cameras might not do the trick.

Stanford atom camera mirror array shown in the lab
The 3D-printed, laser-cut camera. Sanha Cheong/Stanford University

So, in a Stanford basement, researchers decided to set out on making their own system to get around that problem. “Our thinking…was basically: Can we try and completely capture as much information as we can, and can we preserve directional information?” says Safdari.

Their resulting prototype—made from off-the-shelf and 3D-printed components—looks like a shallow dome, spangled with an array of little mirror-like dots on the inside. The pattern seems to form a fun optical illusion of concentric circles, but it’s carefully calculated to maximize the light striking the camera.

For the MAGIS-100 project, the subject of the shot—the cloud of strontium atoms—would sit within the dome. A brief light flash from an external laser beam would then scatter off the mirror-dots and through the cloud at myriad angles. The lens would pick up the resulting reflections, how they’ve interacted with the atoms, and which dots they’ve bounced off.

Then, from that information, machine learning algorithms can piece the three-dimensional structure of the cloud back together. Currently, this reconstruction takes many seconds; in an ideal world, it would take milliseconds, or even less. But, like the algorithms used to train self-driving cars to adjust to the surrounding world, researchers think their computer codes’ performance will improve.
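
The underlying geometry can be sketched with plain linear algebra. Below is a deliberately tiny toy—a 2x2 grid, hypothetical densities, and hand-picked viewing directions, not the team’s machine-learning pipeline: each view sums the cells it crosses, and with enough views the cell values can be solved for exactly.

    import numpy as np

    # A hypothetical 2x2 "atom cloud" we pretend we cannot see directly,
    # flattened as [top-left, top-right, bottom-left, bottom-right].
    truth = np.array([3.0, 1.0, 2.0, 5.0])

    # Each row of A sums the cells crossed by one illumination direction.
    A = np.array([
        [1, 1, 0, 0],   # top row
        [0, 0, 1, 1],   # bottom row
        [1, 0, 1, 0],   # left column
        [0, 1, 0, 1],   # right column
        [1, 0, 0, 1],   # one diagonal view pins down the remaining freedom
    ], dtype=float)

    measurements = A @ truth                        # what the camera records
    recovered, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    print(recovered.reshape(2, 2))                  # matches the original grid

Scaling that idea up to millions of cells and noisy, microsecond exposures is where the machine learning earns its keep.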

While the creators haven’t gotten around to testing the camera on atoms just yet, they did try it out by scanning some suitably sized sample parts: 3D-printed letter-shaped pieces the size of the strontium droplets they intend to use. The photo they took was so clear, they could find defects where the little letters D, O, and E varied from their intended design. 

3D-printed letters photographed and 3D modeled on a grid
Reconstructions of the test letters from a number of angles. Sanha Cheong/SLAC National Accelerator Laboratory

For atom experiments like MAGIS-100, this equipment is distinct from anything else on the market. “The state of the art are just cameras, commercial cameras, and lenses,” says Ariel Schwartzman, a physicist at SLAC National Accelerator Laboratory in California and co-creator of the Stanford setup. They scoured photo-equipment catalogs for something that could see into an atom cloud from multiple angles at once. “Nothing was available,” says Schwartzman.

Complicating matters is that many experiments require atoms to rest in extremely cold temperatures, barely above absolute zero. This means they require low-light conditions—shining any bright light source for too long could heat them up too fast. Setting a longer exposure time on a camera could help, but it also means sacrificing some of the detail and information needed in the final image. “You are allowing the atom cloud to diffuse,” says Sanha Cheong, a physics graduate student at Stanford University and member of the camera-building crew. The mirror dome, on the other hand, aims to use only a brief laser-flash with an exposure of microseconds. 

[Related: Stanford researchers want to give digital cameras better depth perception]

The creators’ next challenge is to actually place the camera in MAGIS-100, which will take a lot of tinkering to fit the camera to a much larger shaft and in a vacuum. But physicists are hopeful: A camera like this might go a lot further than detecting obscure effects around atoms. Its designers plan to use it for everything from tracking particles in plasma to measuring quality control of small parts in the factory.

“To be able to capture as much light and information in a single shot in the shortest exposure possible—it opens up new doors,” says Cheong.

Need more air in space? Magnets could yank it out of water. https://www.popsci.com/science/magnets-oxygen-international-space-station/ Thu, 18 Aug 2022 10:00:00 +0000 https://www.popsci.com/?p=463203
The International Space Station, seen from a Dragon Capsule in November 2021.
The International Space Station makes its own oxygen through electrolysis, an energy-intensive process. NASA

Water is magnetic—a property that could help astronauts breathe a little easier.

Humans tend to take a lot for granted, even something as simple as a breath of fresh air. It’s easy to forget how much our bodies depend on oxygen—until it becomes an invaluable resource, such as aboard the International Space Station. 

Although astronauts are typically sent to space with stores of necessary supplies, it’d be too costly to keep sending tanks of breathable air up to the station. Instead, the oxygen that astronauts rely on for primary life support is created through a process called electrolysis, wherein electricity is used to split water into hydrogen gas and oxygen gas. On Earth, a similar process happens naturally through photosynthesis, when plants split water, use the hydrogen to help make sugars for food, and release oxygen into the atmosphere.
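
To put rough numbers on that life-support chemistry, here is a back-of-the-envelope sketch. The daily oxygen figure and the ideal-case energy are my assumptions for illustration, not values from the study: electrolysis splits water as 2 H2O -> 2 H2 + O2, and each mole of water costs at least its Gibbs free energy to split.

    # Back-of-the-envelope electrolysis budget for one astronaut (assumed figures).
    O2_PER_DAY_KG = 0.84          # commonly cited daily oxygen need per person
    M_O2, M_H2O = 31.998, 18.015  # molar masses, g/mol
    DELTA_G = 237.1e3             # J per mol of water, thermodynamic minimum

    mol_o2 = O2_PER_DAY_KG * 1000 / M_O2     # ~26 mol of O2 per day
    mol_h2o = 2 * mol_o2                     # two water molecules per O2
    water_kg = mol_h2o * M_H2O / 1000        # ~0.95 kg of water per day
    power_w = mol_h2o * DELTA_G / 86400      # ideal continuous power draw
    print(f"{water_kg:.2f} kg of water and at least {power_w:.0f} W per astronaut-day")

Real hardware draws well above that thermodynamic floor—part of why the ISS system demands so much power and upkeep.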

Yet because the system on the ISS requires massive amounts of energy and upkeep, scientists have been looking for alternative ways to sustainably create air in space. One such solution was recently published in NPJ Microgravity, in which researchers found a way to pull gases from liquids using magnets.

“Not a lot of people [are] aware that water and other liquids are also magnetic to some extent,” says Álvaro Romero-Calvo, currently an assistant professor at the Guggenheim School of Aerospace Engineering at Georgia Tech and lead author of the study.

“The physical principle is pretty well known in the physics community [but] the application in space is barely explored at this point,” he says. “When a space engineer is designing a space system involving fluids, they do not even consider the possibility of using magnets to induce phase separation.”

[Related: Lunar soil could help us make oxygen in space]

At the Center for Applied Space Technology and Microgravity (ZARM) at the University of Bremen in Germany, Romero-Calvo’s team was able to study the phenomenon of “magnetically-induced buoyancy.” The idea is easier to explain by visualizing a can of soda: On Earth, because the liquid is denser than carbon dioxide molecules, soda bubbles separate and float to the top of the drink when subjected to the planet’s gravity. In space, where microgravity creates a continuous freefall and removes the effect of buoyancy, the substances inside become harder to separate and the bubbles are simply left suspended in the liquid.

To test whether magnets could make a difference, the team took their research to ZARM’s drop tower, where an experiment, once placed in an airtight drop capsule, can achieve weightlessness for a few seconds. By injecting air bubbles into syringes filled with different carrier liquids, the team was able to use the power of magnetism to successfully detach gas bubbles in microgravity. This proved that the bubbles can be both attracted to and repelled by a neodymium magnet from within various substances. 
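
How strong is the magnetic nudge on an ordinary liquid? A crude order-of-magnitude sketch (the field strength and gradient below are assumptions typical of a strong permanent magnet, not measurements from the paper): the body force per unit volume on a weakly magnetic fluid scales as f = (chi / mu0) * B * dB/dz.

    # Order-of-magnitude magnetic body force on water (assumed field values).
    MU0 = 4e-7 * 3.141592653589793   # vacuum permeability, T*m/A
    CHI_WATER = -9.0e-6              # volume susceptibility of water (diamagnetic)
    B, GRAD_B = 0.5, 100.0           # tesla and tesla/m near a NdFeB magnet (assumed)

    f_magnetic = abs(CHI_WATER) / MU0 * B * GRAD_B   # N per cubic meter of water
    f_gravity = 1000.0 * 9.81                        # weight of water per cubic meter
    print(f"magnetic: {f_magnetic:.0f} N/m^3 vs gravity: {f_gravity:.0f} N/m^3")

On Earth the magnetic term is hundreds of times weaker than gravity, which is why nobody notices it in a soda can; remove gravity, and it becomes the dominant hand steering the bubbles.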

Additionally, the researchers found that through the inherent magnetic properties of various aqueous solutions (like purified water and olive oil) they tested, it’s possible to direct air bubbles to different locations within the liquid. Essentially, it’d become easier to collect or send air through a vessel. Besides being used to create an abundance of oxygen for the crew, Romero-Calvo says the study’s results show that developing microgravity magnetic phase separators could lead to more reliable and lightweight space systems, like better propellant management devices or wastewater recycling technologies.

To demonstrate the magnets’ potential use for research purposes, the team also experimented with Lysogeny Broth, a medium used to grow bacteria for ISS experiments. As it turns out, both the broth and the olive oil were “significantly affected” by the magnetic force exerted on them. “Every bit of effort that we devote to this problem is effort well spent, because it will affect many other products in space,” Romero-Calvo says. 

[Related: How the ISS recycles its air and water]

If the next generation of space engineers does decide to apply magnets to future space stations, the new method could generate more efficient, breathable atmospheres to support human travel to other extraterrestrial environments, including the moon and, most especially, Mars. If we were to plan a human mission to the Red Planet, the ISS’s current oxygenation system is too complex to be completely reliable during the long journey. Simplifying it with magnets would lower overall mission costs and ensure that oxygen is abundant. 

Although Romero-Calvo says their breakthrough could ultimately help us touch down on Mars, other scientists are working on ways to manufacture oxygen using plasma—a state of matter that contains free charged particles like electrons which are easily excited by powerful electric fields—for fuels, fertilizers, and other materials that could help colonize the planet. And while neither project is up to scale just yet, these emerging advances represent the amazing feats humans are capable of as we keep moving forward, striving to reach beyond our familiar horizons.

It’s pretty hard to measure nothing, but these engineers are getting close https://www.popsci.com/science/vacuum-measurements-manufacturing-new-method/ Mon, 08 Aug 2022 22:30:00 +0000 https://www.popsci.com/?p=461034
NIST computer room with small glass vacuum chamber
The National Institute of Standards and Technology sets the bar for precise vacuum measurements. NIST

The US still uses Cold War-era tech to calibrate vacuums for manufacturing. Is there a more precise (and fun) option out there?

Outer space is a vast nothingness. It’s not a perfect vacuum—as far as astronomers know, that concept only exists in theoretical calculations and Hollywood thrillers. But aside from a few remnant hydrogen atoms floating about, it is a vacuum.

That’s important because here on Earth, much of the modern world quietly relies on partial vacuums. More than just a place for physicists to do fun experiments, the machine-based environments are critical for crafting many of the electronic components in cutting-edge phones and computers. But to actually measure a vacuum—and understand how good it will be at manufacturing—engineers rely on relatively basic tech left over from the days of old-school vacuum tubes.

[Related: What happens to your body when you die in space?]

Now, some teams are working on an upgrade. Recent research has brought a novel technique—one that relies on the coolest new atomic physics (as cool as -459 degrees Fahrenheit)—one step closer to being used as a standardized method.

“It’s a new way of measuring vacuum, and I think it’s really revolutionary,” says Kirk Madison, a physicist at the University of British Columbia in Vancouver.

NIST circular metal vacuum chamber with blue lights
The NIST mass-in-vacuum precision mass comparator. NIST

What’s inside a vacuum

It might seem hard to quantify nothing, but what you’re actually doing is reading the gas pressure inside a vacuum—in other words, the force that any remaining atoms put on the chamber wall. So, measuring vacuums is really about calculating pressures with far more precision than your local meteorologist can manage.

Today, engineers might do that with a tool called an ion gauge. It consists of a spiralling wire that pops out electrons when inserted into a vacuum chamber; the electrons collide with any gas atoms within the spiral, turning them into charged ions. The gauge then reads the number of ions left in the chamber. But to interpret that figure, you need to know the composition of the different gases you’re measuring, which isn’t always simple.

Ion gauges are technological cousins of vacuum tubes, the components that powered antique radios and the colossal computers that filled rooms and pulp science fiction stories before the development of the silicon transistor. “They are very unreliable,” says Stephen Eckel, a physicist at the National Institute for Standards and Technology (NIST). “They require constant recalibration.”

Other vacuum measuring tools do exist, but ion gauges are the best at getting pressure readings down to billionths of a Pascal (the standard unit of pressure). While this might seem unnecessarily precise, many high-tech manufacturers want to read nothingness as accurately as possible. A couple of common techniques to fabricate electronic components and gadgets like lasers and nanoparticles rely on delicately layering materials inside vacuum chambers. Those techniques need pure voids of matter to work well.
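
It helps to translate “billionths of a pascal” into molecule counts. A quick ideal-gas estimate (my arithmetic, not NIST’s): the number density is n = p / (kT).

    # How many gas molecules remain at a nanopascal, via the ideal gas law.
    K_B = 1.380649e-23        # Boltzmann constant, J/K
    p, T = 1e-9, 293.0        # one nanopascal at roughly room temperature

    n = p / (K_B * T)                    # molecules per cubic meter
    print(f"{n:.2e} per m^3, ~{n * 1e-6:,.0f} per cubic centimeter")

Even at that extraordinary vacuum, a quarter of a million molecules still rattle around every cubic centimeter—which is what the gauges are trying to count.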

The purer the void, the harder it is to identify the leftover atoms, making ion gauges even more unreliable. That’s where deep-frozen atoms come in.

Playing snooker with atoms

For decades physicists have taken atoms, pulsed them with a finely tuned laser, and confined them in a magnetic cage, all to keep them trapped at temperatures just fractions of a degree above absolute zero. The frigidness forces atoms, otherwise wont to fly about, to effectively sit still so that physicists can watch how they behave.

In 2009, Madison and other physicists at several institutions in British Columbia were watching trapped atoms of chilled rubidium—an element with psychrophilic properties—when a new arrangement dawned on them.

Suppose you put a trap full of ultracold atoms in a vacuum chamber at room temperature. They would face a constant barrage of whatever hotter, higher-energy atoms were left in the vacuum. Most of the frenzied particles would slip through the magnetic trap without notice, but some would collide with the trapped atoms and snooker them out of the trap.

It isn’t a perfect measurement—not all collisions would successfully kick an atom out of the trap. But if you know the trap’s “depth” (or temperature) and a number called the atomic cross-section (essentially, a measure of the probability of a collision), you can find out fairly quickly how many atoms are striking the trap. Based on that, you can know the pressure, along with how much matter is left in the vacuum, Madison explains.
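
In sketch form, the logic runs: background gas knocks trapped atoms out at a rate proportional to its density, and the ideal gas law turns that density into a pressure. The rate coefficient below is an illustrative assumption, not a value from the Madison or NIST work.

    # Minimal sketch of the cold-atom vacuum gauge: Gamma = n * <sigma*v>,
    # and n = p / (k*T), so a measured loss rate implies a pressure.
    K_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 293.0               # temperature of the chamber walls, K
    SIGMA_V = 3e-15         # assumed velocity-averaged collision coefficient, m^3/s

    def pressure_from_loss_rate(gamma_per_s: float) -> float:
        """Convert an observed trap-loss rate (per second) into pascals."""
        return gamma_per_s * K_B * T / SIGMA_V

    print(f"{pressure_from_loss_rate(0.01):.1e} Pa")  # one atom lost per 100 s -> ~1e-8 Pa

A real device must still pin down that cross-section term, which is where the careful atomic physics comes in.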

Such a method could have a few advantages over ion gauges. For one, it would work for all types of gases present in the vacuum, as there are no chemical reactions happening. Most of all, because the calculations follow directly from how the atoms behave, nothing needs to be calibrated.

At first, few people in the physics community noticed the breakthrough by Madison and his collaborators. “Nobody believed that the work we were doing was impactful,” he says. But in the 13 years since, other groups have taken up the technology themselves. In China, the Lanzhou Institute of Physics has begun building their own version. So has an agency in the German government.

NIST is the latest to take the technique up. It’s the US agency responsible for deciding the country’s official weights and measures, like the official kilogram (yes, even the US government uses the SI system). One of NIST’s tasks for decades has been to calibrate those persnickety ion gauges as manufacturers kept sending them in. The British Columbia researchers’ new method presented an appealing shortcut.

NIST engineer in red polo and glasses testing silver cold-atom vacuum chamber
As part of a project testing the ultra-cold atom method of vacuum measurement, NIST scientist Stephen Eckel stands behind a pCAVS unit (silver-colored cube left of center) that is connected to a chamber (cylinder at right). C. Suplee/NIST

A new standard for nothing

NIST’s system isn’t exactly like the one Madison’s group devised. For one, the agency uses lithium atoms, which are much smaller and lighter than rubidium. Eckel, who was involved in the NIST project, says that these atoms are far less likely to stay in the trap after a collision. But it uses the same underlying principles as the original experiment, which reduces labor because it doesn’t need to be calibrated over and over.

“If I go out and I build one of these things, it had better measure the pressure correctly,” says Eckel. “Otherwise, it’s not a standard.”

NIST put their system to the test in the last two years. To make sure it worked, they built two identical cold-atom devices and ran them in the same vacuum chamber. When they turned the devices on, they were dismayed to find that both produced different measurements. As it turned out, the vacuum chamber had developed a leak, allowing atmospheric gases to trickle in. “Once we fixed the leak, they agreed with each other,” says Eckel.

Now that their system seems to work against itself, NIST researchers want to compare the ultra-chilled atoms against ion gauges and other old-fashioned techniques. If these, too, result in the same measurement, then engineers might soon be able to close in on nothingness by themselves.

Nuclear power’s biggest problem could have a small solution https://www.popsci.com/science/nuclear-fusion-less-energy/ Sun, 07 Aug 2022 23:07:36 +0000 https://www.popsci.com/?p=460468
Spherical fusion energy reactor in gold, copper, and silver seen from above
In 2015 the fusion reactor at the Princeton Plasma Physics Laboratory got a spherical upgrade for an energy-efficiency boost. Some physicists think this sort of design might be the future of the field. US Department of Energy

Most fusion experiments take place in giant doughnut-shaped reactors. Physicists want to test a smaller peanut-like one instead.

For decades, if you asked a fusion scientist to picture a fusion reactor, they’d probably tell you about a tokamak. It’s a chamber about the size of a large room, shaped like a hollow doughnut. Physicists fill its insides with a not-so-tasty jam of superheated plasma. Then they surround it with magnets in the hopes of crushing atoms together to create energy, just as the sun does.

But experts think you can make tokamaks in other shapes. Some believe that making tokamaks smaller and leaner could make them better at handling plasma. If the fusion scientists proposing it are right, then it could be a long-awaited upgrade for nuclear energy. Thanks to recent research and a newly proposed reactor project, the field is seriously thinking about generating electricity with a “spherical tokamak.”

“The indication from experiments up to now is that [spherical tokamaks] may, pound for pound, confine plasmas better and therefore make better fusion reactors,” says Steven Cowley, director of Princeton Plasma Physics Laboratory.

[Related: Physicists want to create energy like stars do. These two ways are their best shot.]

If you’re wondering how fusion power works, it’s the same process that the sun uses to generate heat and light. If you can push certain types of hydrogen atoms past the electromagnetic forces keeping them apart and crush them together, you get helium and a lot of energy—with virtually no pollution or carbon emissions.

It does sound wonderful. The problem is that, to force atoms together and make said reaction happen, you need to achieve celestial temperatures of millions of degrees for sustained periods of time. That’s a difficult benchmark, and it’s one reason that fusion’s holy grail—a reaction that generates more energy than you put into it, also known as breakeven and gain—remains elusive.

The tokamak, in theory, is one way to reach it. The idea is that by carefully sculpting the plasma with powerful electromagnets that line the doughnut’s shell, fusion scientists can keep that superhot reaction going. But tokamaks have been used since the 1950s, and despite continuous optimism, they’ve never been able to mold the plasma the way they need to deliver on their promise.

But there’s another way to create fusion outside of a tokamak, called inertial confinement fusion (ICF). For this, you take a sand-grain-sized pellet of hydrogen, place it inside a special container, blast it with laser beams, and let the resulting shockwaves ruffle the pellet’s interior into jump-starting fusion. Last year, an ICF reactor in California came closer than anyone’s gotten to that energy milestone. Unfortunately, in the year since, physicists haven’t been able to make the flash happen again.
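
“Gain” here has a crisp definition: the ratio Q of fusion energy out to driver energy in, with Q = 1 as breakeven. A quick check with the widely reported figures from that 2021 shot—roughly 1.3 megajoules of fusion yield from 1.9 megajoules of laser energy—shows how close it came (my arithmetic on public numbers, not a calculation from this article):

    # Fusion gain: Q = E_fusion / E_input; breakeven is Q = 1.
    e_fusion_mj, e_laser_mj = 1.3, 1.9   # widely reported 2021 shot figures (assumed)
    q = e_fusion_mj / e_laser_mj
    print(f"Q ~ {q:.2f} (breakeven is Q = 1)")   # ~0.68, tantalizingly close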

Stories like this show that if there’s an alternative method, researchers won’t hesitate to jump on it.

The idea of trimming down the tokamak emerged in the 1980s, when theoretical physicists—followed by computer simulations—proposed that a more compact shape could handle the plasma more effectively than a traditional tokamak.

Not long after, groups at the Culham Center for Fusion Energy in the UK and Princeton University in New Jersey began testing the design. “The results were almost instantaneously very good,” says Cowley. That’s not something physicists can say with every new chamber design.

Round fusion reactor with silver lithium sides and a core
A more classic-shaped lithium tokamak at the Plasma Physics Laboratory. US Department of Energy

Despite the name, a spherical tokamak isn’t a true sphere: It’s more like an unshelled peanut. This shape, proponents think, gives it a few key advantages. The smaller size allows the magnets to be placed closer to the plasma, reducing the energy (and cost) needed to actually power them. Plasma also tends to act more stably in a spherical tokamak throughout the reaction.

But there are disadvantages, too. In a standard tokamak, the doughnut hole in the middle of the chamber contains some of those important electromagnets, along with the wiring and components needed to power the magnets up and support them. Downsizing the tokamak reduces that space into something like an apple core, which means the accessories need to be miniaturized to match. “The technology of being able to get everything down the narrow hole in the middle is quite hard work,” says Cowley. “We’ve had some false starts on that.”

On top of the fitting issues, placing those components closer to the celestially hot plasma tends to wear them out more quickly. In the background, researchers are making new components to solve these problems. At Princeton, one group has shrunk those magnets and wrapped them with special wires that don’t have conventional insulation, which would need to be specially treated in an expensive and error-prone process to fit in fusion reactors’ harsh conditions. This development doesn’t solve all of the problems, but it’s an incremental step.

[Related: At NYC’s biggest power plant, a switch to clean energy will help a neighborhood breathe easier]

Others are dreaming of going even further. The world of experimental tokamaks is currently preparing for ITER, a record-capacity test reactor that’s been underway since the 1980s and will finally finish construction in southern France this decade. It will hopefully pave the way for viable fusion power by the 2040s. 

Meanwhile, fusion scientists are already designing something very similar in Britain with a Spherical Tokamak for Energy Production, or STEP. The chamber is nowhere near completion—the most optimistic plans won’t have it begin construction until the mid-2030s and start generating power until about 2040—but it’s an indication that engineers are taking the spherical tokamak design quite seriously. 

“One of the things we always have to keep doing is asking ourselves: ‘If I were to build a reactor today, what would I build?’” says Cowley. Spherical tokamaks, he thinks, are beginning to enter that equation.

See the first video of solitary solid atoms playing with liquid https://www.popsci.com/science/video-solid-in-liquid/ Thu, 04 Aug 2022 12:00:00 +0000 https://www.popsci.com/?p=460136
Platinum atoms and liquid graphene seen in red and purple under a microscope next to a graphic of material particle locations
Left to right: Platinum atoms in liquid graphene under a transmission electron microscope in a colorized image; platinum atom trajectories are shown with a color scale from blue (start) to green, yellow, orange, then red. Clark et al (2022)

To catch "swimming" platinum atoms, materials scientists made a graphene sandwich.

It’s summer, it’s hot, and these atoms are going for a swim.

For the first time ever, materials scientists recorded individual solid atoms moving through a liquid solution. A team of engineers from the National Graphene Institute at the University of Manchester and the University of Cambridge, both in the UK, used a transmission electron microscope to pull off the delicate feat. The technique lets researchers view and take images of minuscule things in extraordinary detail. Typically, however, the subject has to be immobile and held in a high-vacuum system to allow the electrons to scan properly. This limits the microscope’s use at the atomic level.

The engineers got around this by tapping a newer form of the instrument that works on contained liquid and gaseous environments. To set up the experiment, they created a “pool” from a nanometers-thin, double-layer graphene cell. Their “swimmers” were platinum atoms in a salty solution—known as adatoms, because they sat atop mineral crystal surfaces. 

Once it was in the liquid graphene, the solid platinum moved quickly. (For context, the looped video below is shown at real speed.) The team tested the same reaction with a vacuum in place of the graphene cell. They saw that the platinum atoms didn’t react as naturally in the traditional setup.

Credit: Adi Gal-Greenwood

After recreating the motion in liquid more than 70,000 times, the team deemed their methods successful. They published their work in the journal Nature on July 27.

The results could make waves for a few different reasons. One, it “paves the way” for transmission electron microscopes to be used widely to study “chemical processes with single-atom precision,” the scientists wrote in the paper. Two, “given the widespread industrial and scientific importance of such behavior [of solids], it is truly surprising how much we still have to learn about the fundamentals of how atoms behave on surfaces in contact with liquids,” materials scientist and co-author Sarah Haigh said in a press release.

[Related: These levitating beads can teach physicists about spinning celestial objects]

A growing number of technologies depend on the interplay between solid particles and liquid cells. Graphene, which was discovered by researchers at the University of Manchester in the early 2000s, is a key component in battery electrodes, computer circuitry, and a new technique for green hydrogen production. Meanwhile, platinum gets made into LCD screens, cathode ray tubes, sensors, and much more. Seeing how these two materials pair together at the nanometer level opens up a more precise, efficient, and inventive world.

What engineers learned about diving injuries by throwing dummies into a pool https://www.popsci.com/science/physics-how-to-dive-safely/ Wed, 27 Jul 2022 21:00:00 +0000 https://www.popsci.com/?p=458564
Diving mannequins enter a pool so researchers can measure what forces affect them.
Two 3D-printed mannequins plunge into a pool. Anupam Pandey, Jisoo Yuk, and Sunghwan Jung

Pointier poses slipped into the water more easily than rounded ones.

The next time you’re about to jump off a diving board to escape the summer heat, consider this: There are denizens of the animal kingdom who can put even the flashiest of Olympic divers to shame. Take, for instance, the gannet. In search of fresh fish to eat, this white seabird can plunge into water at 55 miles per hour. That’s almost double the speed of elite human athletes who leap from 10-meter platforms.

Engineers can now measure for themselves what diving does to the human body, without any actual humans being harmed in the process. They created mannequins, like crash-test dummies, fitted them with force sensors, and dropped them into water. Their results, published in the journal Science Advances on July 27, show just how unnatural it is for a human body to plummet headlong into the drink.

“Humans are not designed to dive into water,” says Sunghwan Jung, a biological engineer at Cornell University, and one of the researchers behind the study.

Jung’s group has spent the past several years studying how various living things crash into water. Initially, they focused on animals: the gannet, for one; the porpoise; and the basilisk lizard, famed for running on water’s surface before gravity forces its feet under.

Those animals’ bodies likely evolved and adapted to their aquatic environments. They might need to dive under the water to find food or to avoid predators swooping down from above. Humans, who evolved in drier, terrestrial environments, have no such biological need. For us, that tendency makes diving much more dangerous.

“Humans are different,” says Jung. “Humans dive into water for fun—and likely get injured frequently.”

[Related: Swimming is the ultimate brain exercise. Here’s why.]

Jung and his colleagues wanted to measure the force the human body experienced when it crashed into the water surface. To do this, they 3D-printed mannequins and fitted them with sensors that could record the force the dummy diver was experiencing and, in turn, how that force changed over a splash.

They measured three different poses, each mimicking one of their diving animals. To emulate the rounded head of a porpoise, a mannequin dropped into water head-first. To emulate the pointed beak of a bird, the second pose had the mannequin’s hands joined in a V-shape beyond its head. And to copy how a lizard falls in, the third pose had the mannequin plunge feet-first.

As the bodies experienced the force of the impact, the researchers found that the rate of change in the force varied depending on the shape. A rounded shape, like a human head, underwent a more brutal jolt than a pointier shape.

From this, they estimated a few heights above which diving in a particular posture would be dangerous. Diving feet-first from above around 50 feet would put you at risk of knee injury, they say. Diving hands-first from above roughly 40 feet could put you through enough force to hurt your collarbone. And diving from just 27 feet, head-first, might cause spinal cord injuries, the researchers believe.
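
For context on those thresholds, simple free-fall kinematics gives the entry speed from each height via v = sqrt(2gh). This ignores air resistance and is not the authors’ force model—just a way to see how fast the body is moving at the waterline:

    # Entry speeds for the article's threshold dive heights (air drag ignored).
    G = 9.81                                  # gravitational acceleration, m/s^2
    FT_TO_M = 0.3048

    for feet, risk in [(50, "knee"), (40, "collarbone"), (27, "spinal cord")]:
        v = (2 * G * feet * FT_TO_M) ** 0.5   # v = sqrt(2*g*h), in m/s
        print(f"{feet} ft ({risk} risk): {v:.1f} m/s, ~{v * 2.237:.0f} mph")

Even the 27-foot, head-first threshold works out to about 28 mph at the moment of impact.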

You likely won’t encounter diving boards that high at your local pool, but it’s not inconceivable that you’d jump from that high when, say, diving from a cliff.

“The modelling is really solid, and it’s very interesting to look at the different impacts,” said Chet Moritz, who studies how people recover from spinal cord injuries at the University of Washington and wasn’t involved with the paper.

Spinal cord injuries aren’t common, but poolside warnings beg you not to dive into shallow water for very good reason: The trauma can be utterly debilitating. A 2013 study found that most spinal cord injuries were due to falls or vehicle crashes—and diving accounted for 5 percent of them. 

But Moritz points out that the spinal cord injuries he is aware of come from striking the bottom of a pool, rather than the surface that these engineers are scrutinizing. “From my experience, I don’t know of anyone who’s had a spinal cord injury from just hitting the water itself,” he says.

Nonetheless, Jung believes that if people can’t stop diving, then his research may at least make the activity safer. “If you really need to dive, then it’s good to follow these kind of suggestions,” he says. That is to say: Try not to hit the water head-first.

Jung’s group aren’t just doing this research to improve diving safety warnings. They’re trying to make a projectile—one with a pointed front, inspired by a bird’s beak—that can better arc through deep water.

Correction (July 28, 2022): Sunghwan Jung’s last name was previously misspelled. It has been corrected.

Have we been measuring gravity wrong this whole time? https://www.popsci.com/science/gravitational-constant-measurement/ Mon, 18 Jul 2022 11:45:50 +0000 https://www.popsci.com/?p=456843
Swimmer in a black speedo soaring off a green diving board
What goes up must come down, but how quickly is still a small mystery. Deposit Photos

A Swiss experiment using vibrations and vacuum chambers could help firm up the gravitational constant.

Gravity is everywhere. It’s the force that anchors the Earth in its orbit around the sun, stops trees from growing up forever, and keeps our breakfast cereal in its bowl. It’s also an essential component in our understanding of the universe

But just how strong is the force? We know that gravity acts the same whether an object is as light as a feather or as heavy as a stone, but otherwise, scientists don’t have a precise answer to that question, despite studying gravity in the cosmos for centuries. 

According to Isaac Newton’s law of universal gravitation, the gravitational force drawing two objects (or particles) together gets stronger the more massive those objects are and the closer they get to each other. For example, the gravity between two feathers that are five inches apart is weaker than that between two apples separated by the same distance. However, the exact calculation of the force relies on a universal constant called the gravitational constant, which is represented by “G” in equations. 
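
In symbols, Newton’s law reads F = G * m1 * m2 / r^2. A quick sketch with the article’s feathers and apples (the masses are my assumptions for illustration):

    # Newton's law of universal gravitation, F = G * m1 * m2 / r**2.
    G_CONST = 6.67430e-11    # recommended value of "G", m^3 kg^-1 s^-2

    def gravity(m1_kg: float, m2_kg: float, r_m: float) -> float:
        return G_CONST * m1_kg * m2_kg / r_m**2

    r = 5 * 0.0254                                        # five inches, in meters
    print(f"feathers: {gravity(0.001, 0.001, r):.1e} N")  # ~4e-15 N (assumed 1 g each)
    print(f"apples:   {gravity(0.10, 0.10, r):.1e} N")    # ~4e-11 N (assumed 100 g each)

Both forces are absurdly tiny—which is precisely why pinning down “G” in a laboratory is so hard.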

[Related: The standard model of particle physics might be broken]

Physicists don’t know exactly what value to assign to “G.” But a new approach from Switzerland might bring fresh insights on how to test better for gravity in the first place.

“These fundamental constants, they are basically baked into the fabric of the universe,” says Stephan Schlamminger, a physicist in the Physical Measurement Laboratory at the National Institute of Standards and Technology. “Humans can do experiments to find out their value, but we will never know the true value. We can get closer and closer to the truth, the experiments can get better and better, and we approximate the true value in the end.”

Why is “G” so difficult to measure?

Unlike counting, measuring is inherently imprecise, says Schlamminger, who serves as chair of the Working Group on the Newtonian Constant of Gravitation of the International Union of Pure and Applied Physics.

“If you take a tape measure and measure the length of a table, let’s say it falls between two ticks. Now you have to use your eye and figure out where [the number] is,” he says. “Maybe you can use a microscope or something, and the more advanced the measurement technique is, the smaller and smaller your uncertainty will become. But there’s always uncertainty.”

It’s the same challenge with the gravitational constant, Schlamminger says, as researchers will always be measuring the force between two objects in some form of increments, which requires them to include some uncertainty in their results.

On top of that, the gravitational force that can be tested between objects in a lab will always be limited by the size of the facility. So that makes it even trickier to measure a diversity of masses with sophisticated tools.

Finally, there can always be interference in readings, says Jürg Dual, a professor of mechanics and experimental dynamics at ETH Zurich, who has conducted a new experiment to redetermine the gravitational constant. That’s because any object with mass will exert a gravitational pull on everything else with mass in its vicinity, so experimenters need to be able to remove the external influence of Earth’s gravity, their own, and all other presences that hold weight from the test results.

What experiments have physicists tried?

In 1798, Henry Cavendish set the standard for laboratory experiments to measure the gravitational constant using a technique called the torsion balance

That technique relies on a sort of modified pendulum. A bar with a test mass at each end is suspended from its midpoint on a thin wire hanging down. Because the bar is horizontal to the Earth’s gravitational field, Cavendish was able to remove much of the planetary force from the measurements. 

Cavendish used two small lead spheres two inches in diameter as his test masses. Then he added a second set of masses, larger lead balls with a 12-inch diameter, which were hung separately from the test masses but near to each other. These are called the “source” masses. The pull of these larger lead balls causes the wire to twist. From the angle of that twist, Cavendish and his successors have been able to calculate the gravitational force acting between the test and the source masses. And because they know the mass of each object, they are able to calculate “G.” 
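
The arithmetic behind that last step is compact. For a bar of length L with a test mass at each end, the wire’s stiffness follows from the bar’s free oscillation period T, and the equilibrium twist angle theta then yields G = 2 * pi^2 * L * r^2 * theta / (M * T^2), where M is one source mass and r the test-to-source distance. The numbers below are illustrative stand-ins, not Cavendish’s published values:

    # Torsion-balance extraction of G (illustrative numbers, not Cavendish's).
    from math import pi

    L = 1.8          # m, bar length between the two small test masses
    r = 0.225        # m, center-to-center distance to each large source mass
    M = 158.0        # kg, one large lead source mass
    T = 7 * 60.0     # s, free oscillation period of the hanging bar
    theta = 1.0e-3   # rad, measured equilibrium twist of the wire

    G = 2 * pi**2 * L * r**2 * theta / (M * T**2)
    print(f"G ~ {G:.2e} m^3 kg^-1 s^-2")   # ~6.5e-11, the right ballpark

Notice that the small test masses cancel out of the formula entirely—only the bar geometry, the period, the twist, and the source masses matter.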

Similar methods have been used by experimenters in the centuries since Cavendish, but they haven’t always found the same value for “G” or the same range of uncertainty, Schlamminger says. And the disagreement in the uncertainty of the calculations is a “big enigma.”

So physicists have continued to devise new methods for measuring “G” that might one day be able to reach a more precise result. 

[Related: From the archives: The Theory of Relativity gains speed]

Just this month, a team from Switzerland, led by Dual, published a new technique in the journal Nature Physics, which may cut out noise from surroundings and produce more accurate results.

The experimental setup included two meter-long beams suspended in vacuum chambers. The researchers caused one beam to vibrate at a particular frequency; due to the gravitational force between the two beams, the other beam would then begin to move as well. Using laser sensors, the team measured the motion of the two beams and then calculated the gravitational constant based on the effect that one had on the other. 

Their initial results yielded a value for “G” that is about 2.2 percent higher than the official value recommended by the Committee on Data for Science and Technology (which is 6.67430 × 10⁻¹¹ m³ kg⁻¹ s⁻²), and holds a relatively large window of uncertainty. 

“Our results are more or less in line with previous experimental determinations of ‘G.’ This means that Newton’s law is also valid for our situation, even though Newton didn’t ever think of a situation like the one we have presented,” Dual says. “In the future, we will be more precise. But right now, it’s a new measurement.”

This is a slow-moving but globally collaborative endeavor, says Schlamminger, who was not involved in the new research. “It’s very rare to get a paper on big ‘G,’” so while their results may not be the most precise measurement of the gravitational constant, “it’s exciting” to have a new approach and another measurement added to one of the universe’s most weighty mathematical constants.
